Tech

People with type 1 diabetes struggle with blood sugar control despite CGMs

Some continuous glucose monitoring (CGM) alarm features and settings may achieve better blood sugar control for people with type 1 diabetes, according to a study published in the Journal of the Endocrine Society.

The most common way to check blood sugar is the finger-prick method, a test typically performed one to six times per day that many people find difficult to keep up. A CGM checks blood sugar automatically, even while the wearer is sleeping. This frequent monitoring can lead to better outcomes when managing diabetes, but patients with type 1 still face daily challenges in avoiding high and low blood sugar.

“Managing type 1 diabetes is a constant battle between high and low blood sugar levels, and many patients using CGMs continue to struggle to find a balance,” said the study’s corresponding author, Yu Kuei Lin, M.D., of University of Michigan Medical School in Ann Arbor, Michigan. “Our study pioneeringly demonstrated that some CGM alarm features and settings may achieve better blood sugar control for patients with type 1 diabetes.”

In this study, researchers examined data from 95 patients with type 1 diabetes to better understand the associations between CGM alarm settings and blood sugar levels. They found different CGM blood sugar thresholds for high and low blood sugar alarms were associated with various hypo/hyperglycemic outcomes, and suggest adjustments to these thresholds could lead to better management of hypo- and hyperglycemia.
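The kind of association examined can be illustrated with a toy calculation; the thresholds and readings below are invented for illustration, not the study's actual settings. Shifting the alarm thresholds changes which readings trigger alerts and the resulting time in range:

```python
# Hypothetical sketch: how alarm-threshold choices change which CGM readings
# trigger alerts. Thresholds and data are invented, not the study's settings.

def summarize(readings_mg_dl, low_alarm=70, high_alarm=180):
    """Count alarm events and percent time in range for a series of readings."""
    low = sum(1 for r in readings_mg_dl if r < low_alarm)
    high = sum(1 for r in readings_mg_dl if r > high_alarm)
    in_range = len(readings_mg_dl) - low - high
    return {"low_alarms": low, "high_alarms": high,
            "pct_in_range": 100.0 * in_range / len(readings_mg_dl)}

readings = [65, 90, 140, 200, 250, 110, 55, 180, 95, 160]
print(summarize(readings))                                # default thresholds
print(summarize(readings, low_alarm=80, high_alarm=170))  # tighter alarms
```

With the tighter (hypothetical) settings, more readings trigger alarms earlier, which is the kind of trade-off the study quantifies.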

"Simple adjustments to the CGM alarm settings can inform patients about high or low blood sugar events early, so they can stay ahead of treatments when needed," Lin said.

Credit: 
The Endocrine Society

CUHK Faculty of Engineering develops browser-based analysis framework observer

image: Professor Wei Meng.

Image: 
The Chinese University of Hong Kong

To investigate the problem of click interception, the research team led by Professor Wei Meng of the Department of Computer Science and Engineering, Faculty of Engineering, The Chinese University of Hong Kong (CUHK) developed a browser-based analysis framework - Observer, which is able to detect three different techniques for intercepting web user clicks. The research result has been published in USENIX Security Symposium 2019 (USENIX Security '19), one of the top academic conferences in computer security. The research team will release the source code of the framework publicly to help web browsers detect malicious click interceptions and alert users about the malicious behaviour to protect them from being exposed to malicious content.

A click is the most prominent way that users interact with content on the World Wide Web (WWW). Attackers therefore aim to intercept genuine user clicks, either to commit ad click fraud by fabricating ad click traffic or to send malicious commands to another website on behalf of the user (e.g., to force the user to download malware). Previous research mainly considered one type of click interception in the cross-origin setting via iframes, i.e., clickjacking, which is usually launched by malicious first-party websites. This does not comprehensively represent the various click interceptions that can be launched by third-party JavaScript code.

To address this research gap, Professor Wei Meng and his Ph.D. student Mingxue Zhang of the Department of Computer Science and Engineering developed an analysis framework, Observer, based on the Google Chromium browser, to systematically record and analyse various click interceptions on the Web. Using Observer, they analysed the Alexa top 250K websites and detected 437 third-party scripts that intercept user clicks on 613 popular websites, which in total receive around 43 million visits on a daily basis. In particular, through click interception, these scripts could trick users into visiting 3,251 unique untrusted uniform resource locators (URLs) controlled by third parties. Over 36% of them were related to online advertising. Further, some click interception URLs led users to malicious content such as scamware. This demonstrates that click interception has become an emerging threat to web users.

The research identified three categories of click interception techniques: (1) modifying the destination URL of hyperlinks to lead users to malicious websites upon clicks; (2) adding click event listeners to manipulate user clicks; and (3) visual deception, for example by creating web content that is visually similar to first-party content or by displaying transparent elements on top of the web page. The former tricks users into clicking third-party elements, and the latter enables the transparent elements to capture all user clicks on first-party content. In either case, users can be led to a page controlled by the attackers.
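As a rough illustration of how recorded clicks might be sorted into these three categories, here is a hypothetical sketch. The event fields are invented stand-ins for the kind of runtime data a browser instrumentation like Observer could collect; this is not the actual Observer implementation.

```python
# Hypothetical classifier for recorded click events. The field names
# (href_modified_by_third_party, third_party_listener, overlay_transparent,
# mimics_first_party) are invented stand-ins for instrumentation data.

def classify_click(event):
    """Return the interception category for one recorded click, or None."""
    if event.get("href_modified_by_third_party"):
        return "hyperlink-modification"   # (1) destination URL rewritten
    if event.get("third_party_listener"):
        return "event-listener"           # (2) third-party click handler
    if event.get("overlay_transparent") or event.get("mimics_first_party"):
        return "visual-deception"         # (3) overlays / look-alike content
    return None                           # genuine first-party click

events = [
    {"href_modified_by_third_party": True},
    {"third_party_listener": True},
    {"overlay_transparent": True},
    {},
]
print([classify_click(e) for e in events])
```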

It is acknowledged that web behaviour caused by third-party JavaScript code is difficult to record and analyse. Observer detects third-party click interceptions by extending the browser to collect the behaviour at runtime and thoroughly analysing the click-related behaviour. The system is of great significance in protecting web users from such security threats. Professor Wei Meng thinks the root cause of click interception might be the privilege abuse by third-party web developers, who intercept user clicks for monetisation via committing ad click fraud. He said, "We will make our implementation publicly available. The browser vendors can design defense mechanisms against click interception accordingly. For example, they can show security warnings to users to prevent them from accessing potentially malicious web pages. This can help build a more secure web ecosystem."

Credit: 
The Chinese University of Hong Kong

NIST's light-sensing camera may help detect extraterrestrial life, dark matter

video: NIST researcher Varun Verma explains how a new NIST camera, made of nanometer-scale wires, could efficiently capture light from atmospheres of extrasolar planets that possibly harbor life.

Image: 
NIST

Researchers at the National Institute of Standards and Technology (NIST) have made one of the highest-performance cameras ever composed of sensors that count single photons, or particles of light.

With more than 1,000 sensors, or pixels, NIST's camera may be useful in future space-based telescopes searching for chemical signs of life on other planets, and in new instruments designed to search for the elusive "dark matter" believed to constitute most of the "stuff" in the universe.

Described in Optics Express, the camera consists of sensors made from superconducting nanowires, which can detect single photons. They are among the best photon counters in terms of speed, efficiency, and range of color sensitivity. A NIST team used these detectors to demonstrate Einstein's "spooky action at a distance," for example.

The nanowire detectors also have the lowest dark count rates of any type of photon sensor, meaning they don't count false signals caused by noise rather than photons. This feature is especially useful for dark-matter searches and space-based astronomy. But cameras with more pixels and larger physical dimensions than previously available are required for these applications, and they also need to detect light at the far end of the infrared band, with longer wavelengths than currently practical.

NIST's camera is small in physical size, a square measuring 1.6 millimeters on a side, but packed with 1,024 sensors (32 columns by 32 rows) to make high-resolution images. The main challenge was to find a way to collate and obtain results from so many detectors without overheating. The researchers extended a "readout" architecture they previously demonstrated with a smaller camera of 64 sensors that adds up data from the rows and columns, a step toward meeting the requirements of the National Aeronautics and Space Administration (NASA).
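The row-column idea can be illustrated with a toy model (a simplification, not NIST's actual readout electronics): a single photon event produces one row pulse and one column pulse, and the pixel is recovered from their coincidence, so only 2 x 32 = 64 readout lines are needed instead of 1,024.

```python
import numpy as np

# Toy model of a row-column readout for a 32x32 single-photon array:
# one photon event yields a pulse on its row line and its column line,
# and the pixel is recovered from the coincidence of the two.

N = 32
frame = np.zeros((N, N), dtype=int)
frame[17, 5] = 1                 # a single photon detected at pixel (17, 5)

row_signal = frame.sum(axis=1)   # 32 row lines
col_signal = frame.sum(axis=0)   # 32 column lines

event = (int(np.argmax(row_signal)), int(np.argmax(col_signal)))
print(event)                                 # (17, 5)
print(f"wires: {2 * N} instead of {N * N}")  # 64 readout lines vs 1,024
```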

"My primary motivation for making the camera is NASA's Origins Space Telescope project, which is looking into using these arrays for analyzing the chemical composition of planets orbiting stars outside of our solar system," NIST electronics engineer Varun Verma said. Each chemical element in the planet's atmosphere would absorb a unique set of colors, he pointed out.

"The idea is to look at the absorption spectra of light passing through the edge of an exoplanet's atmosphere as it transits in front of its parent star," Verma explained. "The absorption signatures tell you about the elements in the atmosphere, particularly those that might give rise to life, such as water, oxygen and carbon dioxide. The signatures for these elements are in the mid- to far-infrared spectrum, and large-area single-photon counting detector arrays don't yet exist for that region of the spectrum, so we received a small amount of funding from NASA to see if we could help solve that problem."

Verma and colleagues achieved high fabrication success, with 99.5% of the sensors working properly. But detector efficiency at the desired wavelength is low. Boosting efficiency is the next challenge. The researchers also hope to make even bigger cameras, perhaps with a million sensors.

Other applications are also possible. For example, the NIST cameras may help find dark matter. Researchers around the world have been unable to find so-called weakly interacting massive particles (WIMPs) and are considering looking for dark matter with lower energy and mass. Superconducting nanowire detectors offer promise for counting emissions of rare, low-energy dark matter and discriminating real signals from background noise.

The new camera was made in a complicated process at NIST's Microfabrication Facility in Boulder, Colorado. The detectors are fabricated on silicon wafers diced into chips. The nanowires, made of an alloy of tungsten and silicon, are about 3.5 millimeters long, 180 nanometers (nm) wide and 3 nm thick. The wiring is made of superconducting niobium.

The camera performance was measured by the Jet Propulsion Laboratory (JPL) at the California Institute of Technology in Pasadena, California. JPL has the necessary electronics due to its work on deep space optical communications.

Credit: 
National Institute of Standards and Technology (NIST)

Simultaneous measurement of biophysical properties and position of single cells in a microdevice

image: Schematic design of the electrical sensing region in the microfluidic impedance cytometry device. The lateral position of single particles or cells flowing through the N-shaped electrodes can be calculated from the electrical signal and the geometry relationship among the positions of the flowing particles, electrodes and microchannel.

Image: 
SUTD

Tracking the lateral position of single cells and particles plays an important role in evaluating the efficiency of microfluidic cell focusing, separation and sorting. Traditionally, the performance of microfluidic cell separation and sorting is evaluated either by analyzing the input and collected output samples, which requires multiple extra steps of off-chip analysis or expensive equipment (e.g., flow cytometry), or by detecting the lateral positions of cells using an expensive high-speed imaging setup with intricate image processing algorithms or laborious manual analysis. Hence, there is a great need for a simple approach to measuring the lateral position of flowing particles.

In this study, a Singapore University of Technology and Design (SUTD) research team led by Associate Professor Ye Ai developed a microfluidic impedance flow cytometry device for the lateral position measurement of single cells and particles with a novel N-shaped electrode design.

A differential current collected from the N-shaped electrodes encodes the trajectory of flowing single particles. A simple analytical expression for the particle's lateral position is derived from the relationship between the generated electrical current and the positions of the flowing particles, electrodes and microchannel, eliminating the need for an expensive high-speed camera and computationally intensive image processing or laborious manual analysis.
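To give a feel for the kind of geometric calculation involved, here is a toy sketch with an invented linear relation; this is not the paper's actual analytical expression. With a slanted middle electrode in an "N" layout, the two gap-crossing times depend on where the particle crosses the channel, so their ratio encodes lateral position.

```python
# Illustration only: an invented geometric mapping from transit times to
# lateral position. The actual device derives its own analytical expression
# from the N-shaped electrode geometry.

def lateral_position(t_first_gap, t_second_gap, channel_width_um):
    """Estimate lateral position from transit times across the two gaps
    formed by the slanted middle electrode of an 'N' layout. Because the
    middle electrode is slanted, the split between the two gap-crossing
    times varies linearly with where the particle crosses the channel."""
    frac = t_first_gap / (t_first_gap + t_second_gap)
    return frac * channel_width_um

# A particle crossing nearer one wall spends unequal times in the two gaps:
print(lateral_position(1.0, 1.0, 40.0))  # mid-channel -> 20.0 um
print(lateral_position(3.0, 1.0, 40.0))  # nearer the far wall -> 30.0 um
```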

Principal investigator, Dr. Ai said: "Compared to previously reported impedance-based microfluidic devices for measuring the particle lateral position, we have achieved the highest measurement resolution, highest flow rate and smallest measured particle size (3.6 μm beads). On top of that, this method is more straightforward as the particle lateral position can be calculated directly from a simple analytical expression rather than using indexes, such as transit time and height of the signal peak or using linear mapping with calibration coefficients to transform the index (i.e., the relative difference of the signal peak magnitude) to the electrical estimates of the lateral position."

Credit: 
Singapore University of Technology and Design

25 years of learning to combat cervical cancer

image: Global cervical cancer statistics 2018: Estimates of incidence and mortality. (CA: A Cancer Journal for Clinicians (2018). Volume 68, Issue 6, Pages 394-424. doi: 10.3322/caac.21492)

Image: 
(<em>CA: A Cancer Journal for Clinicians</em> (2018). Volume 68, Issue 6, Pages 394-424. doi: 10.3322/caac.21492)

According to the World Health Organization, cervical cancer is the fourth most common type of cancer affecting women worldwide. Currently, early screenings of pre-cancerous tissues and vaccination have proven to be the most effective treatment strategies. However, the lack of such interventions in developing nations has led to its high occurrence. Among the South East Asian nations alone, India has the highest incidence rate of cervical cancer.

A recent paper from the lab of Prof. Sudhir Krishna at the National Centre for Biological Sciences, Bangalore, reviews the progress made in cervical cancer research over the past 25 years. This extensive coverage, published in the journal Experimental Cell Research, highlights the role of a popular signalling molecule called Notch in human cervical cancer progression.

Cervical cancer is caused by the invasion of the human papillomavirus (HPV) in the female reproductive tissue called the cervix. Once the HPV makes itself at home in the cervical tissue (host), it establishes a healthy life cycle for itself and triggers the uncontrolled growth of cervical cells. It does so by employing genes and proteins that promote cell proliferation, also known as oncogenes. However, it cannot do this without activating cell growth-promoting factors within the host cell itself. The question then is what are the molecular factors within cells that complement the invasive capabilities of the virus and how are they regulated by the HPVs?

A gene quite often linked to various types of human cancers is a molecular player called Ras. However, in many instances, analysis of human cervical cancer tissues rarely showed the expression of mutant Ras. This meant that there were other molecules involved in cervical cancer progression.

In such a hunt for finding other molecules responsible for cervical cancer progression, Sudhir and his team of researchers identified Notch. Notch is a signalling protein that resides on the cell's membrane. It responds to specific information from neighbouring cells and triggers a signalling cascade that alters cell fate.

Anyone looking at the literature on Notch signalling would be rather confounded to find that Notch functions in both tumour-promoting and tumour-suppressing conditions. However, the first clue that strongly seemed to nail Notch's role in cancer progression was the observation that many cervical cancer tissues showed signatures of increased Notch signalling. In addition, molecules known to enhance Notch signalling were found to be abundant, while the levels of proteins that negatively regulate the Notch pathway were dampened. All this strongly seems to point to Notch as the primary culprit.

Progression of cervical cancer had always been thought to occur in stages: first an invasion by the HPV, then the local proliferation of cervical cells. Following this, a subset of cells was thought to acquire the capacity to move around the body and infect other tissues - a phenomenon known as metastasis. However, Sudhir and his team are of the opinion that cervical cancer progression may not always follow such a textbook route. In fact, their data seem to argue that local proliferation is carried out independently by a certain subset of cells. Simultaneously, another group of infected cervical cells, carrying a different molecular imprint, may diverge into metastasis. Such parallel routes may make cancer progression more complex in nature and, probably, more rapid too.

"This review thus makes the point that a real understanding of a specific science question evolves over several decades, concomitant with changes in technology and growth in other related areas," concludes Sudhir. A systematic understanding of the intricate relationship between HPV, Notch signalling and the various molecular players affected within the cells hold the key to treating human cervical cancer at various stages of its progression. Towards this end, Sasikala, one of the authors who steered this review, is now working towards identifying important molecules that Notch signalling activates in human cervical cancer.

Credit: 
National Centre for Biological Sciences

Clean carbon nanotubes with superb properties

image: 1000 single nano-tube transistors on a chip

Image: 
Aalto University

Single-wall carbon nanotubes (SWCNTs) have found many uses in electronics and new touch screen devices. Carbon nanotubes are one-atom-thick sheets of graphene rolled up seamlessly into tubes of different sizes and shapes. To use them in commercial products such as transparent transistors for phone screens, researchers need to be able to easily test nanotubes for their material properties, and the new method helps with this.

Professor Esko I. Kauppinen's group at Aalto has years of experience in making carbon nanotubes for electronic applications. The team's unique method uses aerosols of metal catalysts and gasses containing carbon. This aerosol-based method allows the researchers to carefully control the nanotube structure directly.

Fabricating single carbon nanotube transistors is usually tedious. It often takes days to go from raw carbon nanotube material to transistors, and the resulting devices are frequently contaminated with processing chemicals that degrade their performance. However, the new method makes it possible to fabricate hundreds of individual carbon nanotube devices within three hours, a more than tenfold increase in efficiency.

Most importantly, these fabricated devices do not contain degrading processing chemicals on their surface. These so-called ultra-clean devices have been previously even more difficult to manufacture than regular single carbon nanotube transistors.

'These clean devices help us to measure the intrinsic material properties. And the large number of devices helps us get a more systematic understanding of the nanomaterials, rather than just a few data points,' says Dr. Nan Wei, a postdoctoral researcher in the group.

This study shows that the aerosol-made nanotubes are superb in terms of their electronic quality: their ability to conduct electricity is almost as good as is theoretically possible for SWCNTs.

More importantly, the new method can also contribute to applied research. For example, by studying the conducting properties of SWCNT bundle transistors, scientists may find ways to improve the performance of flexible conductive films. This could prove useful for designers trying to build flexible, smash-proof phones. Follow-up work by the groups in Japan and Finland is already underway.

Credit: 
Aalto University

Beyond the green revolution

There has been a substantial increase in food production over the last 50 years, but it has been accompanied by a narrowing in the diversity of cultivated crops. New research shows that diversifying crop production can make food supply more nutritious, reduce resource demand and greenhouse gas emissions, and enhance climate resilience without reducing calorie production or requiring more land.

The Green Revolution - or Third Agricultural Revolution - entailed a set of research and technology transfer initiatives introduced between 1950 and the late 1960s. These markedly increased agricultural production across the globe, particularly in the developing world, and promoted the use of high-yielding seed varieties, irrigation, fertilizers, and machinery, while emphasizing the maximization of food calorie production, often at the expense of nutritional and environmental considerations. Since then, the diversity of cultivated crops has narrowed considerably, with many producers opting to shift away from more nutritious cereals to high-yielding crops like rice. This has in turn led to a triple burden of malnutrition, in which one in nine people in the world is undernourished, one in eight adults is obese, and one in five people is affected by some kind of micronutrient deficiency. According to the authors of a new study, strategies to enhance the sustainability of food systems require the quantification and assessment of tradeoffs and benefits across multiple dimensions.

In their paper published in the Proceedings of the National Academy of Sciences (PNAS), researchers from IIASA, and several institutions across the US and India, quantitatively assessed the outcomes of alternative production decisions across multiple objectives using India's rice dominated monsoon cereal production as an example, as India was one of the major beneficiaries of Green Revolution technologies.

Using a series of optimizations to maximize nutrient production (i.e., protein and iron), minimize greenhouse gas (GHG) emissions and resource use (i.e., water and energy), or maximize resilience to climate extremes, the researchers found that diversifying crop production in India would make the nation's food supply more nutritious, while reducing irrigation demand, energy use, and greenhouse gas emissions. The authors specifically recommend replacing some of the rice crop currently cultivated in the country with nutritious coarse cereals like millets and sorghum, and argue that such diversification would also enhance the country's climate resilience without reducing calorie production or requiring more land. Researchers from IIASA contributed the design of the optimization model and the energy and GHG intensity assessments.
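The flavor of these optimizations can be sketched as a toy allocation problem. All per-hectare numbers below are invented for illustration; the study's actual model is far more detailed, with district-level yields and water, energy and GHG intensities.

```python
# Toy version of the study's optimization: choose how much land to shift
# from rice to a coarse cereal so as to maximize iron supply without letting
# calorie production fall below a floor. All numbers are invented.

TOTAL_LAND = 100.0                # hectares
CAL_RICE, CAL_MILLET = 10.0, 9.0  # calories per hectare (invented)
FE_RICE, FE_MILLET = 1.0, 4.0     # iron per hectare (invented)
CALORIE_FLOOR = 950.0

best = None
for millet_ha in range(0, 101):   # brute-force search over allocations
    rice_ha = TOTAL_LAND - millet_ha
    calories = CAL_RICE * rice_ha + CAL_MILLET * millet_ha
    if calories < CALORIE_FLOOR:
        continue                  # violates the calorie constraint
    iron = FE_RICE * rice_ha + FE_MILLET * millet_ha
    if best is None or iron > best[0]:
        best = (iron, millet_ha, calories)

print(best)  # (250.0, 50, 950.0): shift 50 ha to millet, iron more than doubles
```

The same logic, scaled up and with real data, is what lets the authors find crop mixes that raise nutrient supply while holding calories and land constant.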

"To make agriculture more sustainable, it's important that we think beyond just increasing food supply and also find solutions that can benefit nutrition, farmers, and the environment. This study shows that there are real opportunities to do just that. India can sustainably enhance its food supply if farmers plant less rice and more nutritious and environmentally friendly crops such as finger millet, pearl millet, and sorghum," explains study lead author Kyle Davis, a postdoctoral research fellow at the Data Science Institute at Columbia University, New York.

The authors found that planting more coarse cereals could on average increase available protein by 1% to 5%; increase iron supply by between 5% and 49%; increase climate resilience (1% to 13% fewer calories would be lost during times of drought); and reduce GHG emissions by 2% to 13%. The diversification of crops would also decrease the demand for irrigation water by 3% to 21% and reduce energy use by 2% to 12%, while maintaining calorie production and using the same amount of cropland.

"One key insight from this study was that despite coarse grains having lower yields on average, there are enough regions where this is not the case. A non-trivial shift away from rice can therefore occur without reducing overall production," says study coauthor Narasimha Rao, a researcher in the IIASA Energy Program, who is also on the faculty of the Yale University School of Forestry and Environmental Studies.

The authors point out that the Indian Government is currently promoting the increased production and consumption of these nutri-cereals - efforts that they say will be important to protect farmers' livelihoods and increase the cultural acceptability of these grains. With nearly 200 million undernourished people in India, alongside widespread groundwater depletion and the need to adapt to climate change, increasing the supply of nutri-cereals may be an important part of improving the country's food security.

Credit: 
International Institute for Applied Systems Analysis

A remote control for everything small

image: Intensity distribution of an electric wave field that applies a well-defined torque onto the quadratic target.

Image: 
TU Wien

They are reminiscent of the "tractor beam" in Star Trek: special light beams can be used to manipulate molecules or small biological particles. Even viruses or cells can be captured or moved. However, these optical tweezers only work with objects in empty space or in transparent liquids. Any disturbing environment would deflect the light waves and destroy the effect. This is a problem, in particular with biological samples because they are usually embedded in a very complex environment.

But scientists at TU Wien (Vienna) have now shown how virtue can be made of necessity: A special calculation method was developed to determine the perfect wave form to manipulate small particles in the presence of a disordered environment. This makes it possible to hold, move or rotate individual particles inside a sample - even if they cannot be touched directly. The tailor-made light beam becomes a universal remote control for everything small. Microwave experiments have already demonstrated that the method works. The new optical tweezer technology has now been presented in the journal Nature Photonics.

Optical tweezers in disordered environments

"Using laser beams to manipulate matter is nothing unusual anymore," explains Prof. Stefan Rotter from the Institute for Theoretical Physics at TU Wien. In 1997, the Nobel Prize in Physics was awarded for laser beams that cool atoms by slowing them down. In 2018, another Physics Nobel Prize recognized the development of optical tweezers.

But light waves are sensitive: in a disordered, irregular environment, they can be deflected in a highly complicated way and scattered in all directions. A simple, plane light wave then becomes a complex, disordered wave pattern. This completely changes the way light interacts with a specific particle.

"However, this scattering effect can be compensated," says Michael Horodynski, first author of the paper. "We can calculate how the wave has to be shaped initially so that the irregularities of the disordered environment transform it exactly into the shape we want it to be. In this case, the light wave looks rather disordered and chaotic at first, but the disordered environment turns it into something ordered. Countless small disturbances, which would normally render the experiment impossible, are used to generate exactly the desired wave form, which then acts on a specific particle."

Calculating the optimal wave

To achieve this, the particle and its disordered environment are first illuminated with various waves and the way in which the waves are reflected is measured. This measurement is carried out twice in quick succession. "Let's assume that in the short time between the two measurements, the disordered environment remains the same, while the particle we want to manipulate changes slightly," says Stefan Rotter. "Let's think of a cell that moves, or simply sinks downwards a little bit. Then the light wave we send in is reflected a little bit differently in the two measurements." This tiny difference is crucial: With the new calculation method developed at TU Wien, it is possible to calculate the wave that has to be used to amplify or attenuate this particle movement.

"If the particle slowly sinks downwards, we can calculate a wave that prevents this sinking or lets the particle sink even faster," says Stefan Rotter. "If the particle rotates a little bit, we know which wave transmits the maximum angular momentum - we can then rotate the particle with a specially shaped light wave without ever touching it."
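For readers curious about the mathematics, here is a toy numerical sketch. It uses the generalized Wigner-Smith construction from the wavefront-shaping literature, Q = -i S^{-1} dS/dx, with dS/dx approximated by the difference of the two measurements; the matrices below are random stand-ins, not data from the actual experiment.

```python
import numpy as np

# Toy sketch of the two-measurement idea: approximate dS/dx from two
# scattering matrices measured as the particle shifts by dx, form the
# generalized Wigner-Smith operator Q = -i S^{-1} dS/dx, and take its
# eigenvectors as candidate incident wavefronts. S1, S2 are random stand-ins.

rng = np.random.default_rng(0)
n = 8
S1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
dx = 1e-3
S2 = S1 + dx * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

Q = -1j * np.linalg.solve(S1, (S2 - S1) / dx)  # -i S^{-1} dS/dx
eigvals, eigvecs = np.linalg.eig(Q)

# The eigenvector with the largest |eigenvalue| is the input wavefront that
# couples most strongly to the particle's displacement.
idx = int(np.argmax(np.abs(eigvals)))
optimal_wavefront = eigvecs[:, idx]
print(optimal_wavefront.shape)  # (8,)
```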

Successful experiments with microwaves

Kevin Pichler, also part of the research team at TU Wien, was able to put the calculation method into practice in the lab of project partners at the University of Nice (France): he used randomly arranged Teflon objects, which he irradiated with microwaves - and in this way he actually succeeded in generating exactly those waveforms which, due to the disorder of the system, produced the desired effect.

"The microwave experiment shows that our method works," reports Stefan Rotter. "But the real goal is to apply it not with microwaves but with visible light. This could open up completely new fields of applications for optical tweezers and, especially in biological research, would make it possible to control small particles in a way that was previously considered completely impossible."

Credit: 
Vienna University of Technology

Artificial intelligence algorithm can learn the laws of quantum mechanics

Deep machine learning method can predict molecular wave functions and electronic properties of molecules

This algorithm could drastically speed up future simulation efforts in the design of drug molecules or new materials

Artificial Intelligence can be used to predict molecular wave functions and the electronic properties of molecules. This innovative AI method developed by a team of researchers at the University of Warwick, the Technical University of Berlin and the University of Luxembourg, could be used to speed-up the design of drug molecules or new materials.

Artificial Intelligence and machine learning algorithms are routinely used to predict our purchasing behaviour and to recognise our faces or handwriting. In scientific research, Artificial Intelligence is establishing itself as a crucial tool for scientific discovery.

In Chemistry AI has become instrumental in predicting the outcomes of experiments or simulations of quantum systems. To achieve this, AI needs to be able to systematically incorporate the fundamental laws of physics.

An interdisciplinary team of chemists, physicists, and computer scientists led by the University of Warwick, together with the Technical University of Berlin and the University of Luxembourg, has developed a deep machine learning algorithm that can predict the quantum states of molecules, so-called wave functions, which determine all properties of molecules.

The AI achieves this by learning to solve fundamental equations of quantum mechanics as shown in their paper 'Unifying machine learning and quantum chemistry with a deep neural network for molecular wavefunctions' published in Nature Communications.

Solving these equations in the conventional way requires massive high-performance computing resources (months of computing time) which is typically the bottleneck to the computational design of new purpose-built molecules for medical and industrial applications. The newly developed AI algorithm can supply accurate predictions within seconds on a laptop or mobile phone.
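A sketch of why this matters computationally: once a model outputs a Hamiltonian matrix in some orbital basis, orbital energies and states follow from one cheap diagonalization rather than a full quantum-chemistry calculation. The 2x2 matrix below is an invented toy example, not output from the actual network.

```python
import numpy as np

# Toy illustration: given a (hypothetical) ML-predicted Hamiltonian matrix
# in an orthonormal orbital basis, orbital energies and orbitals come from
# a single symmetric eigendecomposition.

H_predicted = np.array([[-1.0, 0.2],
                        [ 0.2, -0.5]])  # invented, symmetric

energies, orbitals = np.linalg.eigh(H_predicted)  # eigenvalues ascending
homo, lumo = energies[0], energies[1]
print(f"orbital energies: {energies}")
print(f"gap: {lumo - homo:.3f}")  # gap: 0.640
```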

Dr. Reinhard Maurer from the Department of Chemistry at the University of Warwick comments:

"This has been a joint three year effort, which required computer science know-how to develop an artificial intelligence algorithm flexible enough to capture the shape and behaviour of wave functions, but also chemistry and physics know-how to process and represent quantum chemical data in a form that is manageable for the algorithm."

The team was brought together during an interdisciplinary three-month fellowship program at IPAM (UCLA) on the subject of machine learning in quantum physics.

Prof Dr Klaus-Robert Müller from the Institute of Software Engineering and Theoretical Computer Science at the Technical University of Berlin adds:

"This interdisciplinary work is an important advance because it shows that AI methods can efficiently perform the most difficult aspects of quantum molecular simulations. Within the next few years, AI methods will establish themselves as an essential part of the discovery process in computational chemistry and molecular physics."

Professor Dr Alexandre Tkatchenko from the Department of Physics and Materials Research at the University of Luxembourg concludes:

"This work enables a new level of compound design where both electronic and structural properties of a molecule can be tuned simultaneously to achieve desired application criteria."

Credit: 
University of Warwick

Trinity scientists engineer 'Venus flytrap' bio-sensors to snare pollutants

Scientists from Trinity College Dublin have created a suite of new biological sensors by chemically re-engineering pigments to act like tiny Venus flytraps. The sensors are able to detect and grab specific molecules, such as pollutants, and will soon have a host of important environmental, medical and security applications.

Porphyrins, a unique class of intensely coloured pigments - also known as the "pigments of life" - provide the key to this ground-breaking innovation.

The word porphyrin is derived from the Greek word porphura, meaning purple, and the first chapter detailing the medical-chemical history of porphyrins goes back to the days of Herodotus (circa 484 to 425 BC). This tale has been progressing ever since and is at the heart of Professor Mathias O. Senge's work at Trinity.

In living organisms, porphyrins play an important role in metabolism, with the most prominent examples being heme (the red blood cell pigment responsible for transporting oxygen) and chlorophyll (the green plant pigment responsible for harvesting light and driving photosynthesis).

In nature, the active versions of these molecules contain a variety of metals in their core, which gives rise to a set of unique properties.

The researchers at Trinity, under the supervision of Professor Mathias O. Senge, Chair of Organic Chemistry, chose a disruptive approach of exploring the metal-free version of porphyrins. Their work has created an entirely new range of molecular receptors.

By forcing porphyrin molecules to turn inside out, into the shape of a saddle, they were able to exploit the formerly inaccessible core of the system. Then, by introducing functional groups near the active centre they were able to catch small molecules - such as pharmaceutical or agricultural pollutants, for example pyrophosphates and sulphates - and then hold them in the receptor-like cavity.

Porphyrins are intensely coloured compounds, so when a target molecule is captured the colour changes drastically. This underlines their value as bio-sensors: it is immediately clear when they have captured their targets.

Karolis Norvaiša, an Irish Research Council-funded PhD Researcher at Trinity, and first author of the study, said:

"These sensors are like Venus flytraps. If you bend the molecules out of shape, they resemble the opening leaves of a Venus flytrap and, if you look inside, there are short stiff hairs that act as triggers. When anything interacts with these hairs, the two lobes of the leaves snap shut."

"The peripheral groups of the porphyrin then selectively hold suitable target molecules in place within its core, creating a functional and selective binding pocket, in exactly the same way as the finger-like projections of Venus flytraps keep unfortunate target insects inside."

The discovery was recently published in the print version of the leading international journal Angewandte Chemie International Edition and is featured as a hot paper. It has also been selected as the journal's cover illustration.

The work highlights the beginning of an EU-wide H2020 FET-OPEN project called INITIO, which aims to detect and remove pollutants. The work was made possible by initial funding from Science Foundation Ireland and an August-Wilhelm Scheer guest professorship award for Professor Senge at the Technical University of Munich.

Professor Senge added:

"Gaining an understanding of the porphyrin core's interactions is an important milestone for artificial porphyrin-based enzyme-like catalysts. We will slowly but surely get to the point where we can realise and utilise the full potential of porphyrin-substrate interfaces to remove pollutants, monitor the state of the environment, process security threats, and deliver medical diagnostics."

Credit: 
Trinity College Dublin

Beyond Moore's Law: Taking transistor arrays into the third dimension

ANN ARBOR--Silicon integrated circuits, which are used in computer processors, are approaching the maximum feasible density of transistors on a single chip--at least, in two-dimensional arrays.

Now, a team of engineers at the University of Michigan has stacked a second layer of transistors directly atop a state-of-the-art silicon chip.

They propose that their design could remove the need for a second chip that converts between high- and low-voltage signals, which currently sits between the low-voltage processing chips and the higher-voltage user interfaces.

"Our approach can achieve better performance in a smaller, lighter package," said Becky Peterson, an associate professor of electrical engineering and computer science and project leader.

Moore's Law holds that computing power per dollar doubles roughly every two years. As silicon transistors have shrunk in size to become more affordable and power efficient, the voltages at which they operate have also fallen.

Higher voltages would damage the increasingly small transistors. Because of this, state-of-the-art processing chips aren't compatible with higher-voltage user interface components, such as touchpads and display drivers. These need to run at higher voltages to avoid effects such as false touch signals or too-low brightness settings.

"To solve this problem, we're integrating different types of devices with silicon circuits in 3D, and those devices allow you to do things that the silicon transistors can't do," Peterson said.

Because the second layer of transistors can handle higher voltages, they essentially give each silicon transistor its own interpreter for talking to the outside world. This gets around the current trade-off of using state-of-the-art processors with an extra chip to convert signals between the processor and interface devices--or using a lower-grade processor that runs at a higher voltage.

"This enables a more compact chip with more functionality than what is possible with only silicon," said Youngbae Son, the first author of the paper and recent doctoral graduate in electrical and computer engineering at U-M.

Peterson's team managed this by using a different kind of semiconductor, known as an amorphous metal oxide. To apply this semiconductor layer to the silicon chip without damaging it, they covered the chip with a solution containing zinc and tin and spun it to create an even coat.

Next, they baked the chip briefly to dry it, repeating the coating and drying steps until the film was about 75 nanometers thick--about one-thousandth the thickness of a human hair. During a final bake, the metals bonded with oxygen in the air, forming zinc-tin-oxide.

The team used the zinc-tin-oxide film to make thin film transistors. These transistors could handle higher voltages than the silicon beneath. Then, the team tested the underlying silicon chip and confirmed that it still worked.

To make useful circuits with the silicon chip, the zinc-tin-oxide transistors needed to fully communicate with the underlying silicon transistors. The team accomplished this by adding two more circuit elements using the zinc-tin-oxide: a vertical thin film diode and a Schottky-gated transistor.

The two kinds of zinc-tin-oxide transistors are connected together to make an inverter, converting between the low voltage used by the silicon chip and the higher voltages used by other components. The diodes were used to convert wireless signals into useful DC power for the silicon transistors.

These demonstrations pave the way toward silicon integrated circuits that go beyond Moore's law, bringing the analog and digital advantages of oxide electronics to individual silicon transistors.

Credit: 
University of Michigan

Improving the odds for patients with heart pumps

New Haven, Conn. -- A new Yale study shows that some patients being treated for severe heart failure with a battery-operated pump saw significant improvement after additionally using neurohormonal blockade (NHB) drug therapy.

NHB therapy, which includes three broad categories of drugs, including ACE inhibitors, has long been the standard therapy for treating heart failure. Until now, however, NHB's effectiveness had not been extensively studied for heart failure patients using left ventricular assist devices (LVADs). The devices, which are surgically implanted, have become increasingly popular in recent years because there are few therapeutic options for end-stage patients other than heart transplants.

A new study published Nov. 18 in the journal JAMA Cardiology shows a clear association between the use of NHB therapy and increased survival and quality of life for patients with LVADs.

Patients who were taking any combination of the three major heart failure therapies that comprise NHB had a 56% survival rate, compared to 43.9% for patients who were not taking the medications, the researchers said.
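The comparison above reduces to simple proportions; the sketch below recasts it as a relative risk of death. Only the 56% and 43.9% survival figures come from the press release -- the cohort sizes and the helper function are invented for illustration.

```python
# Toy illustration of the survival comparison reported in the study.
# Patient counts are hypothetical; only the percentages (56% vs. 43.9%
# survival) come from the press release.

def survival_rate(survivors: int, total: int) -> float:
    """Fraction of patients surviving over the follow-up period."""
    return survivors / total

on_nhb = survival_rate(560, 1000)    # hypothetical cohort on NHB therapy
off_nhb = survival_rate(439, 1000)   # hypothetical cohort not on NHB

# Relative risk of death: probability of dying on NHB vs. off NHB.
relative_risk = (1 - on_nhb) / (1 - off_nhb)

print(f"Survival on NHB:  {on_nhb:.1%}")
print(f"Survival off NHB: {off_nhb:.1%}")
print(f"Relative risk of death: {relative_risk:.2f}")
```

Framed this way, patients on NHB therapy had roughly four-fifths the risk of dying over the study period relative to those not on the medications -- a crude proportion, not the adjusted estimate a full survival analysis would produce.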

The study, led by Nihar R. Desai, M.D., associate professor of medicine and investigator at the Yale Center for Outcomes Research & Evaluation (CORE), looked at more than 12,000 patients with LVADs in the Interagency Registry for Mechanically Assisted Circulatory Support, a North American registry, from 2008 until 2016. The analysis included patients from more than 170 medical centers in the United States and Canada.

"When we looked at the patients who were on the most intensive regimen of heart failure therapies, we found that they had one-third the risk of dying compared to those not on any of the heart failure therapies," said Megan McCullough, M.D., first author of the study.

One of the most surprising findings, McCullough and Desai said, was the inconsistent use of NHB by health care providers. Some providers, they noted, believe that patients with LVADs might be too sick to benefit from the therapies.

"This study shows that the same medicines that improve quality of life and outcomes for patients without LVADs can help patients with LVADs," Desai said. "A doctor caring for an LVAD patient, after that patient is stabilized, can start to think about them as having the potential to live for years to come. There are therapies to help achieve this."

Credit: 
Yale University

Robotic transplants safe for kidney disease patients with obesity

image: Dr. Mario Spaggiari (left) and Dr. Enrico Benedetti.

Image: 
UIC/ Jenny Fontaine

Researchers at the University of Illinois at Chicago report that among patients with obesity, robotic kidney transplants produce survival outcomes comparable to those seen among nonobese patients.

Their study, published in the American Journal of Transplantation, includes data collected over 10 years from more than 230 robotic-assisted kidney transplants in patients with obesity.

"Patients with obesity, a risk factor for poor surgical outcomes, have traditionally been considered ineligible for kidney transplants," said Dr. Mario Spaggiari, UIC assistant professor of surgery at the College of Medicine. "But advances in surgical care, including increasing proficiency and acceptance of robotic surgery, are making kidney transplants a safe option for more people."

Spaggiari said that robotic surgery helps to ameliorate adverse surgical events associated with obesity in open transplants, achieving a dramatic reduction of the risk of post-surgery wound infections, a critical factor in long-term success of the transplant.

In 2009, surgeons at UI Health, UIC's hospital and clinical health network, were among the first to offer robotic kidney transplants to patients with obesity.

"Our surgical program is focused on advancing care for everyone, including members of vulnerable communities who experience increased rates of various comorbidities, including obesity," said Dr. Enrico Benedetti, professor and Warren H. Cole Chair of Surgery. "Ten years of transplant experience shows us that obesity does not have to be a disqualifying factor in kidney transplants."

In the study, Spaggiari and Benedetti report one- and three-year patient survival rates of 98% and 95%, respectively, among patients with obesity. Only 17 of 239 patients (7.1%) developed graft failures and returned to dialysis, resulting in 93% three-year kidney graft survival.
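The graft-survival figure quoted above follows directly from the failure count. A quick sanity check (the counts are from the press release; the variable names are our own):

```python
# Sanity-check the graft-survival figure: 17 graft failures out of 239
# patients implies roughly 93% three-year graft survival.

failures, total = 17, 239
failure_rate = failures / total
graft_survival = 1 - failure_rate

print(f"Graft failure rate: {failure_rate:.1%}")          # ~7.1%
print(f"Three-year graft survival: {graft_survival:.0%}")  # ~93%
```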

Patients in the study cohort were a median age of 48 years with a median body mass index, or BMI, of 41. The majority of patients were black (53.1%) and Latino (24.7%).

Wound complications occurred in only nine patients (3.8%) and a surgical site infection occurred in only one patient (0.4%). While 88 patients (37.2%) were readmitted to the hospital within 30 days, only 10 (4.2%) of these readmissions were due to surgical complications.

These results are similar to those seen in nonobese patients across the U.S., based on a comparison with the United Network for Organ Sharing national database of transplants from the same period, January 2009 to December 2018.

"To our knowledge, this is the largest cohort to date of robotic kidney transplants and these findings tell us that kidney transplantation is a viable option for many people with obesity," Benedetti said.

"The patients who received transplants spent more than three-and-a-half years on dialysis before undergoing surgery, and that is just the median number," Spaggiari said. "Without surgery, these people would have had no choice but to remain on dialysis -- which can itself be a barrier to achieving an 'ideal' weight for transplant -- and accept the limitations it places on their quality of life. With surgery, they can get back to normal life, which is most important. They can also have increased chances of achieving other health-promoting behaviors, like exercise or weight loss."

Credit: 
University of Illinois Chicago

Research shows boredom is on the rise for adolescents, especially girls

image: Elizabeth Weybright

Image: 
WSU

"I'm so bored!" It's a typical complaint by teens in every era, but one that's growing more common for U.S. adolescents, especially girls.

New research at Washington State University has found that boredom is rising year after year for teens in 8th, 10th, and 12th grades, with greater increases for girls than boys.

"We were surprised to see that boredom is increasing at a more rapid pace for girls than boys across all grades," said Elizabeth Weybright, WSU researcher of adolescent development, who shared the findings in the Journal of Adolescent Health.

Collaborating with scientists John Schulenberg at the University of Michigan and Linda Caldwell at Pennsylvania State University, Weybright's project tracked a decade of adolescent responses to a question about boredom in the nationwide Monitoring the Future in-school survey.

Adolescents were asked to rate their response to the question "I am often bored," on a five-point scale. Weybright and her colleagues analyzed the results over time and across grades, between 2008, when the question was first asked, and 2017.

Detailed in "More bored today than yesterday? National trends in adolescent boredom from 2008-2017," the team's research revealed that boredom rose within and across grades for much of the last decade.

"Everybody experiences boredom from time to time, but many people don't realize it may be associated with depressive symptoms and risky behaviors, such as substance misuse," Weybright said. "I wanted to find out when adolescents are most likely to experience boredom."

Boredom rising since 2010

When comparing across grades, boredom appears to peak in 10th grade for boys and in 8th grade for girls.

However, looking across time with grade levels combined, boys' boredom levels rose by 1.6 percent per year on average, while girls' rose by 1.7 percent. In the 10th grade, girls' boredom rose by about 2 percent every year. In every grade, girls' boredom levels rose more steeply than boys'.
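The yearly percentage increases reported above are the kind of output a simple trend fit over annual survey means would produce. The sketch below fits an ordinary least-squares slope to invented yearly means on the study's 1-5 boredom scale -- the numbers are hypothetical, not the Monitoring the Future data.

```python
# A minimal trend-analysis sketch: estimate the average yearly change in
# mean boredom ratings ("I am often bored", 1-5 scale). The yearly means
# below are invented for illustration.

years = list(range(2010, 2018))
mean_boredom = [2.90, 2.95, 2.99, 3.05, 3.10, 3.14, 3.20, 3.26]  # hypothetical

n = len(years)
x_bar = sum(years) / n
y_bar = sum(mean_boredom) / n

# Ordinary least-squares slope: average change in rating per year.
slope_num = sum((x - x_bar) * (y - y_bar) for x, y in zip(years, mean_boredom))
slope_den = sum((x - x_bar) ** 2 for x in years)
slope = slope_num / slope_den

pct_per_year = slope / y_bar * 100
print(f"Estimated trend: {slope:.3f} rating points per year "
      f"({pct_per_year:.1f}% per year)")
```

With these made-up means the fitted trend lands near the ~1.7 percent-per-year figure reported for girls -- which illustrates one plausible reading of the published percentages: average yearly change relative to the mean rating.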

"Historically, we saw a decline from 2008 to 2010 across all grades, but it wasn't significant," said Weybright. "Then, we see a significant increase from 2010 to 2017. Around 2010, there's a divergence for boys and girls. We see that boredom increases for boys and girls, but it increases a bit steeper and earlier for girls."

While Weybright's study doesn't explore the causes of rising boredom, she notes that boredom may be associated with sensation-seeking and depression, which are rising among U.S. teens. At the same time, digital media use has also been increasing, doubling for 12th graders from 2006 to 2012.

Within this same timeframe, other researchers have observed adolescents going out with friends less often and spending more time alone.

"Perhaps boredom is simply one more indicator of adolescent dissatisfaction with how their time is spent," Weybright stated in the paper.

"Adolescence is a time of change and growth," she said. "Teens want more independence, but may not have as much autonomy as they'd like in their school and home life. That creates situations where they're prone to boredom, and may have a hard time coping with being bored."

Considered alongside trends in mental health, depression, and social interaction, the team's boredom research provides a clearer picture about the changing world of adolescence.

"It also shows that we're going to need some kind of intervention," said Weybright, who called for more robust study of adolescent boredom.

"One of the challenges with this data set is that it includes different people every year," Weybright said. "This means I can't follow one person across time to find a causal link."

Future research should expand earlier into middle school, she suggested, and also take a closer, day-to-day look at how young people are experiencing boredom, and how it aligns with sleep, social interaction, and other factors in their lives.

Credit: 
Washington State University

Endangered whales react to environmental changes

image: This is a North Atlantic right whale breaching.

Image: 
Susan Parks

Ithaca, NY--Some "canaries" are 50 feet long, weigh 70 tons, and are nowhere near a coal mine. But the highly endangered North Atlantic right whale is sending the same kind of message about disruptive change in the environment by rapidly altering its use of important habitat areas off the New England coast. These findings are contained in a new study published in Global Change Biology by scientists at the Center for Conservation Bioacoustics (formerly the Bioacoustics Research Program) at the Cornell Lab of Ornithology and at Syracuse University. It's the longest-running published study to continuously monitor the presence of any whale species at one location using sound.

"The change in right whale presence in Massachusetts Bay over the six years of the study is striking," says lead author Russ Charif, senior bioacoustician at the Center for Conservation Bioacoustics (CCB) at Cornell. "It's likely linked to rapid changes in conditions along the Atlantic Coast, especially in the Gulf of Maine which is warming faster than 99% of the rest of the world's ocean surface."

Charif points out that, starting in 2011, other studies began documenting dramatic changes in habitat use by right whales in other parts of the Gulf of Maine, which includes Massachusetts Bay and Cape Cod Bay. Massachusetts Bay is the gateway to Cape Cod Bay, one of the most important feeding areas for North Atlantic right whales, who congregate there in large numbers in late winter to early spring.

Nineteen marine autonomous recording units (MARUs) were deployed by CCB in Massachusetts Bay from July 2007 to April 2013, recording around the clock to detect the characteristic "up-call" of the North Atlantic right whale. Analysis of 47,000 hours of recordings by computer detection systems and human analysts found that detections of right whale calls increased in all but one of the study years.

"During the six years of the study, our detection rates doubled during the winter-spring months," says study co-author Aaron Rice, principal ecologist with CCB. "During the summer-fall months the rate of detection for right whales had increased six-fold by the end of the study period, rising from 2% to 13% of recorded hours."
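The detection rates quoted here are the share of recorded hours containing at least one up-call detection. A minimal sketch of that metric, with hour counts invented to match the 2% and 13% figures from the release:

```python
# Detection rate as the fraction of recorded hours with at least one
# right-whale up-call detection. Hour counts are hypothetical; only the
# 2% -> 13% figures come from the press release.

def detection_rate(hours_with_calls: int, recorded_hours: int) -> float:
    return hours_with_calls / recorded_hours

early_season = detection_rate(40, 2000)    # hypothetical summer-fall, year 1
late_season = detection_rate(260, 2000)    # hypothetical summer-fall, year 6

print(f"Start of study: {early_season:.0%} of recorded hours")
print(f"End of study:   {late_season:.0%} of recorded hours")
print(f"Increase: {late_season / early_season:.1f}x")
```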

The scientists found right whales were present to varying degrees all year round in Massachusetts Bay, with implications for conservation efforts.

"There are seasonal conservation measures that kick in based on our historical understanding of where and when right whales are most often congregating, including Massachusetts Bay," Rice explains. "But the old patterns have changed and whales are showing up in areas where there are no protections in place to reduce the likelihood of ship strikes or fishing gear entanglements."

Entanglements and ship strikes remain the biggest threats to right whales, with unknown cumulative effects from changing water temperatures, rising ocean noise pollution, and other stressors. The increasing use of Massachusetts Bay occurred even as the overall right whale population declined. The latest estimates peg the population at about 400 animals, with only 95 of them females of reproductive age.

"Our study data end in 2013 and conditions may have changed even more since then," says Charif. "We need to do more of these long-term studies if we're to have any hope of understanding how right whale habitat is changing because of human activities and before it's too late for the species to survive."

Credit: 
Cornell University