
Artificial intelligence to improve resolution of brain magnetic resonance imaging

video: Researchers of the ICAI Group -- Computational Intelligence and Image Analysis -- of the University of Malaga (UMA) have designed an unprecedented method that is capable of improving brain images obtained through magnetic resonance imaging using artificial intelligence.

Image: 
University of Malaga

Researchers of the ICAI Group -- Computational Intelligence and Image Analysis -- of the University of Malaga (UMA) have designed an unprecedented method that is capable of improving brain images obtained through magnetic resonance imaging using artificial intelligence.

This new model increases image quality from low resolution to high resolution without distorting the patients' brain structures, using a deep learning artificial neural network (a model inspired by the functioning of the human brain) that "learns" this process.
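The press release does not describe the network's architecture. As a purely illustrative sketch of how such a super-resolution network can be set up (layer sizes and the use of PyTorch are assumptions; this is not the UMA group's actual model), consider:

```python
# Illustrative sketch only: a minimal SRCNN-style super-resolution network.
# This is NOT the UMA group's model; architecture and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSuperResolution(nn.Module):
    """Maps a low-resolution MRI slice to a sharper, higher-resolution one."""
    def __init__(self):
        super().__init__()
        self.feature_extract = nn.Conv2d(1, 64, kernel_size=9, padding=4)
        self.nonlinear_map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)

    def forward(self, low_res):
        # Upsample the low-resolution input to the target grid first,
        # then let the network learn to restore fine detail.
        x = F.interpolate(low_res, scale_factor=2, mode="bicubic", align_corners=False)
        x = F.relu(self.feature_extract(x))
        x = F.relu(self.nonlinear_map(x))
        return self.reconstruct(x)

# Training pairs low-resolution scans with their high-resolution counterparts
# and minimizes a reconstruction loss such as mean squared error.
model = SimpleSuperResolution()
low_res_slice = torch.randn(1, 1, 128, 128)   # one grayscale MRI slice (dummy data)
high_res_pred = model(low_res_slice)          # -> shape (1, 1, 256, 256)
```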

"Deep learning is based on very large neural networks, and so is its capacity to learn, reaching the complexity and abstraction of a brain", explains researcher Karl Thurnhofer, main author of this study, who adds that, thanks to this technique, the activity of identification can be performed alone, without supervision; an identification effort that the human eye would not be capable of doing.

Published in the scientific journal Neurocomputing, this study represents a scientific breakthrough, since the algorithm developed by the UMA yields more accurate results in less time, with clear benefits for patients. "So far, the acquisition of quality brain images has depended on the time the patient remained immobilized in the scanner; with our method, image processing is carried out later on the computer", explains Thurnhofer.

According to the experts, the results will enable specialists to identify brain-related pathologies, such as physical injuries, cancer or language disorders, with increased accuracy and definition, because image details are finer, thus avoiding the need for additional tests when diagnoses are uncertain.

Today, the ICAI Group of the UMA, led by Professor Ezequiel López, co-author of this study, is a benchmark in neurocomputing, computational learning and artificial intelligence. Professors Enrique Domínguez and Rafael Luque of the Department of Computer Science and Programming Languages, as well as researcher Núria Roé-Vellvé, also participated in this study.

Credit: 
University of Malaga

Human-caused biodiversity decline started millions of years ago

image: Leopard, by Hans Ring, Naturfotograferna.

Image: 
Hans Ring, Naturfotograferna

The human-caused biodiversity decline started much earlier than researchers previously believed. According to a new study published in the scientific journal Ecology Letters, the process was not started by our own species but by some of our ancestors.

The work was done by an international team of scientists from Sweden, Switzerland and the United Kingdom.

The researchers point out in the study that the ongoing biological diversity crisis is not a new phenomenon, but represents an acceleration of a process that human ancestors began millions of years ago.

"The extinctions that we see in the fossils are often explained as the results of climatic changes but the changes in Africa within the last few million years were relative minor and our analyses show that climatic changes were not the main cause of the observed extinctions," explains Søren Faurby, researcher at Gothenburg University and the main author of the study.

"Our analyzes show that the best explanation for the extinction of carnivores in East Africa is instead that they are caused by direct competition for food with our extinct ancestors," adds Daniele Silvestro, computational biologist and co-author of the study.

Carnivores disappeared

Our ancestors were common throughout eastern Africa for several million years, and during this time there were multiple extinctions, according to Lars Werdelin, co-author and expert on African fossils.

"By investigating the African fossils, we can see a drastic reduction in the number of large carnivores, a decrease that started about 4 million years ago. About the same time, our ancestors may have started using a new technology to get food called kleptoparasitism," he explains.

Kleptoparasitism means stealing recently killed animals from other predators. For example, when a lion steals a dead antelope from a cheetah.

The researchers are now proposing, based on fossil evidence, that human ancestors stole recently killed animals from other predators. This would lead to starvation of the individual animals and over time to extinction of their entire species.

"This may be the reason why most large carnivores in Africa have developed strategies to defend their prey. For example, by picking up the prey in a tree that we see leopards doing. Other carnivores have instead evolved social behavior as we see in lions, who among other things work together to defend their prey," explains Søren Faurby

Humans today affect the world and the species that live in it more than ever before.

"But this does not mean that we previously lived in harmony with nature. Monopolization of resources is a skill we and our ancestors have had for millions of years, but only now are we able to understand and change our behavior and strive for a sustainable future. 'If you are very strong, you must also be very kind'," concludes Søren Faurby and quotes Astrid Lindgrens book about Pippi Longstocking.

Credit: 
University of Gothenburg

The core of massive dying galaxies already formed 1.5 billion years after the Big Bang

image: The red galaxy at the center is a dying galaxy seen as it was 12 billion years ago. Astronomers measured the motion of stars in the galaxy and found that its core was already nearly fully formed.

Image: 
NAOJ/M. Tanaka

The most distant dying galaxy discovered so far, more massive than our Milky Way -- with more than a trillion stars -- has revealed that the 'cores' of these systems had already formed 1.5 billion years after the Big Bang, about 1 billion years earlier than previous measurements indicated. The discovery will add to our knowledge of the formation of the Universe more generally, and may cause the computer models astronomers use, one of their most fundamental tools, to be revised. The result, obtained in close collaboration with Masayuki Tanaka and his colleagues at the National Observatory of Japan, is now published in two papers in the Astrophysical Journal Letters and the Astrophysical Journal.

What is a "dead" galaxy?

Galaxies are broadly categorized as dead or alive: dead galaxies are no longer forming stars, while alive galaxies are still bright with star formation activity. A 'quenching' galaxy is a galaxy in the process of dying -- meaning its star formation is significantly suppressed. Quenching galaxies are not as bright as fully alive galaxies, but they are not as dark as dead galaxies. Researchers use this spectrum of brightness as the first line of identification when observing galaxies in the Universe.

The farthest dying galaxy discovered so far reveals remarkable maturity

A team of researchers of the Cosmic Dawn Center at the Niels Bohr Institute and the National Observatory of Japan recently discovered a massive galaxy that was already dying 1.5 billion years after the Big Bang, the most distant of its kind. "Moreover, we found that its core seems already fully formed at that time", says Masayuki Tanaka, the author of the letter. "This result pairs up with the fact that, when these dying gigantic systems were still alive and forming stars, they might not have been that extreme compared with the average population of galaxies", adds Francesco Valentino, assistant professor at the Cosmic Dawn Center at the Niels Bohr Institute and author of an article on the past history of dead galaxies that appeared in the Astrophysical Journal.

Why do galaxies die? - One of the biggest and still unanswered questions in astrophysics

"The suppressed star formation tells us that a galaxy is dying, sadly, but that is exactly the kind of galaxy we want to study in detail to understand why it dies", continues Valentino. One of the biggest questions that astrophysics still has not answered is how a galaxy goes from being star-forming to being dead. For instance, the Milky Way is still alive and slowly forming new stars, but not too far away (in astronomical terms), the central galaxy of the Virgo cluster - M87 - is dead and completely different. Why is that? "It might have to do with the presence of gigantic and active black hole at the center of galaxies like M87" Valentino says.

Earth-based telescopes find extremes - but astronomers look for normality

One of the problems in observing galaxies in this much detail is that the telescopes available now on Earth are generally able to find only the most extreme systems. However, the key to describing the history of the Universe is held by the vastly more numerous population of normal objects. "Since we are trying hard to discover this normality, the current observational limitations are an obstacle that has to be overcome."

The James Webb Space Telescope (JWST) represents hope for better data in the near future

The new James Webb Space Telescope, scheduled for launch in 2021, will be able to provide astronomers with data at a level of detail that should make it possible to map exactly this "normality". The methods developed in close collaboration between the Japanese team and the team at the Niels Bohr Institute have already proven successful, given the recent result. "This is significant, because it will enable us to look for the most promising galaxies from the start, when JWST gives us access to much higher quality data", Francesco Valentino explains.

Combining observations with the tool - the computer models of the Universe

What has been found observationally is not too far away from what the most recent models predict. "Until very recently, we did not have many observations to compare with the models. However, the situation is in rapid evolution, and with JWST we will have valuable larger samples of "normal" galaxies in a few years. The more galaxies we can study, the better we are able to understand the properties or situations leading to a certain state - whether the galaxy is alive, quenching or dead. It is basically a question of writing the history of the Universe correctly, and in greater and greater detail. At the same time, we are tuning the computer models to take our observations into account, which will be a huge improvement, not just for our branch of work, but for astronomy in general", Francesco Valentino explains.

Credit: 
University of Copenhagen

Violence and adversity in early life can alter the brain

Childhood adversity is a significant problem in the US, particularly for children growing up in poverty. Those who experience poverty have a much higher risk of being exposed to violence and suffering from a lack of social support, which can have long-term consequences including higher rates of diabetes, cancer, and other diseases.

People exposed to childhood adversity may also be more likely to have brain changes in adolescence that indicate an altered response to threat, according to a new study by University of Michigan's Christopher Monk and Leigh Goetschius, and others. However, social supports may act as a buffer and reduce the negative effects of early-life stress.

The researchers analyzed data collected from 177 youths aged 15-17 who had taken part in a study that had followed the participants since birth. Around 70 percent of the participants were African-American and almost half lived below the poverty line.

The researchers scanned the brains of the participants with MRI, focusing on the white matter connectivity between several key areas: the amygdala, which is known to play a role in fear and emotion-processing, and specific regions of the prefrontal cortex (PFC). Earlier work by this research team established that reduced connectivity between the two brain regions is linked to a heightened response to threats by the amygdala.

The scans suggest a link between combined violence exposure and social deprivation in childhood and brain connectivity. When the children in the study experienced more violence (abuse, exposure to intimate partner violence, or neighborhood violence) and more social deprivation (child neglect, lack of neighborhood cohesion, and a lack of maternal support), the researchers observed reduced connectivity between the amygdala and the PFC in adolescence.

Neither variable was on its own linked to brain changes. When a child experienced violence but also had social support, the reduced connectivity wasn't evident. The same was true when a child experienced social deprivation but no violence. "The implication is that social deprivation may exacerbate the effects of childhood violence exposure when it comes to these white matter connections. Social support, on the other hand, may act as a buffer," says Monk.
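A schematic way to picture this kind of finding is a regression with an interaction term, in which connectivity drops only when violence exposure and social deprivation are both high. The sketch below uses simulated data and hypothetical variable names; it is not the authors' actual analysis pipeline.

```python
# Schematic illustration of testing a violence x social-deprivation interaction
# on amygdala-PFC connectivity. Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 177  # sample size reported in the study
df = pd.DataFrame({
    "violence": rng.normal(size=n),       # composite violence-exposure score
    "deprivation": rng.normal(size=n),    # composite social-deprivation score
    "connectivity": rng.normal(size=n),   # amygdala-PFC white matter connectivity
})

# The key term is violence:deprivation -- connectivity is predicted to drop
# only when both exposures are high, not for either exposure alone.
model = smf.ols("connectivity ~ violence * deprivation", data=df).fit()
print(model.summary().tables[1])
```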

The researchers were surprised to find no link between brain changes and mental health issues such as depression or anxiety. But because mental health issues often arise during the transition from adolescence to one's 20s, they plan to follow up with the study participants to track mental health and determine whether the associations between violence exposure, social deprivation, and brain changes persist.

Credit: 
American College of Neuropsychopharmacology

It takes more than two to tango: Microbial communities influence animal sex and reproduction

image: This sperm cell carries a whole community of bacteria. What will their impact be?

Image: 
Melissah Rowe

It is an awkward idea, but a couple's ability to have kids may partly depend on who else is present. The reproductive tracts of males and females contain whole communities of micro-organisms. These microbes can have a considerable impact on (animal) fertility and reproduction, as shown this week by Melissah Rowe, from the Netherlands Institute of Ecology (NIOO-KNAW), and co-authors in an extensive overview in Trends in Ecology & Evolution. This influence may even lead to new species.

It appears to be such an intimate moment, with only the two of you... But you are not actually alone. In our reproductive tracts and in, on, and surrounding egg and sperm cells, lives a whole community of micro-organisms including bacteria, fungi and viruses. These microbiomes may influence our sexual health and fertility far more than we thought, sometimes in good ways and sometimes in bad ways. And this isn't just true for humans, but for all animals and even plants.

"We've all heard about the skin or gut microbiome and how these can affect our lives," explains evolutionary ecologist Melissah Rowe. "Well, guess what, there's more! We have a reproductive microbiome as well."

Conflicting interests

This is not just about sexually transmittable diseases. "Microbes appear to influence fertility, reproduction, and the evolution of animal species in so many ways," says Rowe. Sperm quality, mate choice, sexual health, success at producing offspring, the balance between female and male mating interests, general health, and even the origin of new species. "And yet, almost nobody is studying this, especially in non-human animals."

The ecology and evolution of reproductive biology and behaviour in animals, especially birds, is Rowe's field of research. She recently joined the Department of Animal Ecology at the Netherlands Institute of Ecology as a scientist, and is interested in the impact of the reproductive microbiome. Together with colleagues from the universities of Oslo (Norway), Oxford and Exeter (United Kingdom), Rowe compiled an overview of all available scientific data.

Bacteria as jury

A couple of examples. Men with large amounts of certain bacteria in their semen are more likely to be infertile. Female bedbugs ramp up their immunological defences ahead of mating, as males will pierce their abdomen with their genitalia during mating. The resulting infections from bacteria transmitted via the genitalia can be fatal. In mallard ducks, males with a more colourful bill produce semen that is better able to kill bacteria, thereby possibly influencing the females' partner choice for a 'safe' male.

Rowe: "I think that reproductive microbiomes may be an important, yet relatively overlooked, evolutionary force." Natural selection, but with bacteria and other microbes as the 'jury' and sometimes the 'executioner'.

As this research field is new and unexplored, many questions are still waiting to be answered. For example, are the positive or negative effects of the microbiome caused by specific species or is it the composition of the whole microbial community that is important? Do these reproductive microbiomes evolve differently in females compared to males? And can knowledge of these microbial communities improve our success at breeding endangered species, and reintroducing them back into the wild? "Let's find out."

Credit: 
Netherlands Institute of Ecology (NIOO-KNAW)

The way you dance is unique, and computers can tell it's you

Nearly everyone responds to music with movement, whether through subtle toe-tapping or an all-out boogie. A recent discovery shows that our dance style is almost always the same, regardless of the type of music, and a computer can identify the dancer with astounding accuracy.

Studying how people move to music is a powerful tool for researchers looking to understand how and why music affects us the way it does. Over the last few years, researchers at the Centre for Interdisciplinary Music Research at the University of Jyväskylä in Finland have used motion capture technology--the same kind used in Hollywood--to learn that your dance moves say a lot about you, such as how extroverted or neurotic you are, what mood you happen to be in, and even how much you empathize with other people.

Recently, however, they discovered something that surprised them. "We actually weren't looking for this result, as we set out to study something completely different," explains Dr. Emily Carlson, the first author of the study. "Our original idea was to see if we could use machine learning to identify which genre of music our participants were dancing to, based on their movements."

The 73 participants in the study were motion captured dancing to eight different genres: Blues, Country, Dance/Electronica, Jazz, Metal, Pop, Reggae and Rap. The only instruction they received was to listen to the music and move any way that felt natural. "We think it's important to study phenomena as they occur in the real world, which is why we employ a naturalistic research paradigm," says Professor Petri Toiviainen, the senior author of the study.

The researchers analysed participants' movements using machine learning, trying to distinguish between the musical genres. Unfortunately, their computer algorithm was able to identify the correct genre less than 30% of the time. They were shocked to discover, however, that the computer could correctly identify which of the 73 individuals was dancing 94% of the time. Left to chance (that is, if the computer had simply guessed without any information to go on), the expected accuracy would be less than 2%. "It seems as though a person's dance movements are a kind of fingerprint," says Dr. Pasi Saari, co-author of the study and data analyst. "Each person has a unique movement signature that stays the same no matter what kind of music is playing."
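As a rough illustration of the setup (not the study's actual movement features or classifier), identifying a dancer is a standard multi-class classification problem in which chance accuracy is 1 in 73, or roughly 1.4 percent:

```python
# Illustrative sketch of the classification setup: identify which of 73
# dancers produced a given movement-feature vector, and compare accuracy
# with the ~1/73 chance level. Features here are random dummy data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_dancers, n_clips, n_features = 73, 8, 20     # 8 genre clips per dancer (assumed)
X = rng.normal(size=(n_dancers * n_clips, n_features))
y = np.repeat(np.arange(n_dancers), n_clips)   # label = dancer identity

accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=4).mean()
chance = 1.0 / n_dancers                       # about 1.4 %
print(f"cross-validated accuracy: {accuracy:.2%} (chance level: {chance:.2%})")
```

With random features the accuracy stays near chance; the study's striking result is that real motion-capture features pushed it to 94%.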

Some genres, however, had more effect on individual dance movements than others. The computer was less accurate in identifying individuals when they were dancing to Metal music. "There is a strong cultural association between Metal and certain types of movement, like headbanging," Emily Carlson says. "It's probable that Metal caused more dancers to move in similar ways, making it harder to tell them apart."

Does this mean that face-recognition software will soon be joined by dance-recognition software? "We're less interested in applications like surveillance than in what these results tell us about human musicality," Carlson explains. "We have a lot of new questions to ask, like whether our movement signatures stay the same across our lifespan, whether we can detect differences between cultures based on these movement signatures, and how well humans are able to recognize individuals from their dance movements compared to computers. Most research raises more questions than answers," she concludes, "and this study is no exception."

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

3D printing with applications in the pharmaceutical industry

image: This achievement will have applications in the pharmaceutical industry, such as in the preparation of biocompatible biosensors based in gold, which have already been shown to be effective in the detection of carcinogenic cells and tumor biomarkers.

Image: 
Universidad de Sevilla

University of Seville researchers, in collaboration with the University of Nottingham, have managed to create the first image made of gold nanoparticles stabilised with biodegradable and biocompatible systems to be obtained with 3D-printing techniques. The image chosen for this test was the logo of the University of Seville.

This achievement will have applications in the pharmaceutical industry, such as the preparation of biocompatible gold-based biosensors, which have already been shown to be effective in the detection of cancerous cells and tumour biomarkers. In recent years, additive manufacturing, also commonly known as 3D printing, has been recognised as the ideal technology for applications that require intricate geometries or personalisation. Its layer-by-layer manufacturing reduces small-batch production costs in comparison with traditional production methods. This has caught the attention of the pharmaceutical industry, which sees in this technology a gateway to the total personalisation of treatment.

The research centred on the technique called inkjet printing, which offers advantages such as high resolution and the ability to print more than one material during the same printing process. Using this technique, the researchers have proposed the manufacture of systems that could potentially be used as personalised biosensors, exploiting the conductivity and biocompatibility of gold.

Currently, existing gold inks for inkjet printing are based on nanoparticles of this metal, but they are highly unstable, as the particles aggregate easily and are difficult to print. For that reason, the development of stable gold inks that are easy to print with has been invaluable.

The team was led by Ana Alcudia Cruz, of the Department of Organic and Pharmaceutical Chemistry in the Faculty of Pharmacy at the University of Seville, in collaboration with the group led by Rafael Prado Gotor of the Department of Physical Chemistry in the Faculty of Chemistry at the University of Seville, and Ricky Wilman, from the University of Nottingham (United Kingdom). For the first time, the team managed to use polymers (polyurethanes) with a comb structure, which they themselves developed, to generate tiny gold nanoparticles with extremely high stability, which was tested over time.

To that end, various polymers were prepared from arabinose, a sugar that is readily obtained in nature and that gives the resulting material total biocompatibility and biodegradability, thus avoiding the polluting residues generated by traditional oil-derived polymers.

For the first time, this type of polymer has been used for the preparation of gold nanoparticles. These nanoparticles, obtained from three different chemically functionalised polymers, proved to be sufficiently small (a maximum of 10 nm) to be printed by inkjet printing and were stable for a period of at least six months. Once the printability of each ink was tested, the one that showed the best balance of properties was selected and used to print the logo of the university. In the image, obtained by TOF-SIMS, the gold (in yellow) forming the outline of the logo can be observed on a polymer background (in blue). This is the first image of gold nanoparticles stabilised with biodegradable and biocompatible systems to be obtained with 3D-printing techniques.

Credit: 
University of Seville

Programmable nests for cells

image: Bacteria cells (red) on a programmable composite of silica nanoparticles (yellow) and carbon nanotubes (blue).

Image: 
(Photo: Niemeyer-Lab, KIT)

Using DNA, tiny silica particles, and carbon nanotubes, researchers at Karlsruhe Institute of Technology (KIT) have developed novel programmable materials. These nanocomposites can be tailored to various applications and programmed to degrade quickly and gently. For medical applications, they can create environments in which human stem cells can settle and develop further. They are also suited to building biohybrid systems that produce power, for instance. The results are presented in Nature Communications and on the bioRxiv platform.

Stem cells are cultivated for fundamental research and for the development of effective therapies against severe diseases, for instance to replace damaged tissue. However, stem cells will only form healthy tissue in an adequate environment. For the formation of three-dimensional tissue structures, materials are needed that support cell functions through suitable elasticity. New programmable materials suited for use as substrates in biomedical applications have now been developed by the group of Professor Christof M. Niemeyer of the Institute for Biological Interfaces 1 - Biomolecular Micro- and Nanostructures (IBG 1) of KIT, together with colleagues from the Institute of Mechanical Process Engineering and Mechanics, the Zoological Institute, and the Institute of Functional Interfaces of KIT. These materials can be used, among other things, to create environments in which human stem cells can settle and develop further.

As reported by the researchers in Nature Communications, the new materials consist of DNA, tiny silica particles, and carbon nanotubes. "These composites are produced by a biochemical reaction and their properties can be adjusted by varying the amounts of the individual constituents," Christof M. Niemeyer explains. In addition, the nanocomposites can be programmed for rapid and gentle degradation and release of the cells grown inside, which can then be used for further experiments.

New Materials for Biohybrid Systems

According to another publication by the IBG 1 team on the bioRxiv bioscience platform, the new nanocomposites can also be used for the construction of programmable biohybrid systems. "Use of living microorganisms integrated within electrochemical devices is an expanding field of research," says Professor Johannes Gescher from the Institute for Applied Biosciences (IAB) of KIT, who was involved in this study. "It is possible to produce microbial fuel cells, microbial biosensors, or microbial bioreactors in this way." The biohybrid system constructed by the KIT researchers contains the bacterium Shewanella oneidensis. It is exoelectrogenic, which means that it produces an electric current when it degrades organic matter in the absence of oxygen. When Shewanella oneidensis is cultivated in the nanocomposites developed by KIT, it populates the matrix of the composite, whereas the non-exoelectrogenic Escherichia coli bacterium remains on its surface. The Shewanella-containing composite remains stable for several days. Future work will be aimed at opening up new bioengineering applications of the new materials.

Credit: 
Karlsruher Institut für Technologie (KIT)

Not all of nature's layered structures are tough as animal shells and antlers, study finds

image: The anchor spicules that hold the sponge species Euplectella aspergillum to the ocean floor have an intricately layered internal structure. Similar layered structures are known to increase the toughness of materials like bone and nacre. But this new research finds that the layering in the spicules does little to enhance toughness. The research could help to avoid "naive biomimicry," the researchers say.

Image: 
Kesari Lab / Brown University

PROVIDENCE, R.I. [Brown University] -- Nacre -- the iridescent part of mollusk shells -- is a poster child for biologically inspired design. Despite being made of brittle chalk, the intricately layered microstructure of nacre gives it a remarkable ability to resist the spread of cracks, a material property known as toughness.

Engineers looking to design tougher materials have long sought to mimic this kind of natural layering, which is also found in conch shells, deer antlers and elsewhere. But a new study by Brown University researchers serves as a caution: Not all layered structures are so tough.

The study, published in Nature Communications, tested another layered microstructure renowned for its physical properties -- the anchor spicules of a sea sponge called Euplectella aspergillum. The spicules are tiny filaments of layered glass that hold the sponges to the sea floor. The layered structure of the spicules is often compared to that of nacre, the researchers say, and it's been assumed that the spicule structure similarly enhances toughness. This new study finds otherwise.

"Despite the similarities between the architectures of nacre and Euplectella spicules, we found that the spicule's architecture does relatively little in terms of enhancing its toughness, contrary to a long-held assumption," said Max Monn, a recently graduated Ph.D. student at Brown and a study coauthor.

For the study, the researchers compared the toughness of Euplectella spicules to those of another sponge species, Tethya aurantia. Tethya spicules have a similar chemical composition to Euplectella spicules but lack the layered structure. To test toughness, the team put tiny notches in the spicules and then bent them. By measuring the energy consumed when cracks propagated from the notches under bending strain, the researchers could quantify the toughness of both types of spicules.

The experiments showed very little difference in toughness between the two spicules, which suggests that Euplectella's layering doesn't provide much of a toughness enhancement. Using computer modeling, the researchers were able to look deeper into why layering enhances toughness in some materials and not others. The models showed that the curvature of the layering in cylindrical spicules seems to turn off the toughness enhancement of layered structures. Flat layers, like those found in nacre, seem to prevent cracks from spreading from one layer to the next, the researchers say. But in materials with curved layers like the Euplectella spicules, cracks are able to jump from layer to layer rather than being stopped between the layers.
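For readers unfamiliar with notched-bend testing, toughness in this kind of experiment is essentially the energy consumed by the growing crack per unit of newly created crack surface. The sketch below shows that arithmetic with entirely hypothetical numbers; it is not the authors' analysis or their measured data.

```python
# Back-of-the-envelope sketch of a work-of-fracture estimate from a notched
# bending test: toughness ~ energy absorbed per unit of newly created crack
# area. All numbers below are hypothetical placeholders.
import numpy as np

# Hypothetical load-displacement record from bending a notched spicule
displacement_m = np.linspace(0, 50e-6, 200)              # 0 to 50 micrometres
force_N = 0.2 * np.sin(np.pi * displacement_m / 50e-6)   # dummy load curve

# Energy consumed while the crack grows = area under the force-displacement curve
energy_J = np.trapz(force_N, displacement_m)

# Fractured cross-section of a roughly 50-micrometre-diameter filament
crack_area_m2 = np.pi * (25e-6) ** 2

toughness_J_per_m2 = energy_J / crack_area_m2
print(f"work of fracture: {toughness_J_per_m2:.1f} J/m^2")
```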

The findings reveal a previously unknown relationship between curvature and toughness in layered materials and have implications for the design of bio-inspired composite materials, says Haneesh Kesari, an assistant professor in Brown's School of Engineering and the paper's senior author.

"Specifically, it shows that if you adopt a layered architecture in order to enhance the toughness of a material, you should be careful of areas that require the layers to be curved," Kesari said. "Our measurements of the spicules and results from our computational model show that curved layers don't provide the same magnitude of toughness enhancements as when layers are flat."

The findings don't mean that the layered structure of Euplectella spicules isn't interesting. Previous work from Kesari's lab has shown that the layered structure seems to vastly increase the spicules' bending strength -- their ability to withstand large bending curvatures before failing. But bending strength and toughness are very different mechanical properties, and helping to dispel the idea that layering always enhances toughness is a useful insight for bio-inspired design in general, the researchers say.

"Our study indicates that not all layered architectures provide significant toughness enhancement," said Sayaka Kochiyama, a Brown graduate student and study coauthor. "That better understanding of structure-property relationship is necessary to avoid naive biomimicry."

Credit: 
Brown University

Researchers find that cookies increase ad revenue for online publishers

CATONSVILLE, MD, January 14, 2020 - How long has it been since you logged onto a Web site and you were prompted to decide whether to opt out of "cookies" that the site told you will enhance your online experience? Minutes? Hours?

While you may be familiar with the term, you may not completely know what a cookie is or what it does. A computer "cookie," also known as a web cookie, Internet cookie or browser cookie, is a small piece of data that a website stores on your computer to help it track your visits and activity. As a result, the site is better able to keep track of items in your shopping cart when you browse an ecommerce site, or to personalize your experience on the website so that you are more likely to see content and ads you want to see.

New research has explored the real value of the cookie to websites and advertisers, and found that cookies translate into higher revenue for online publishers. According to the study, publishers' advertising revenue falls by 52 percent when cookies are eliminated through user opt-out protocols. Put the other way around, when cookies are present, publishers' ad prices roughly double.

The study, to be published in the January edition of the INFORMS journal Marketing Science, is titled "Consumer Privacy Choice in Online Advertising: Who opts out and at what cost to industry?" It is authored by Garrett Johnson of Questrom School of Business at Boston University; Scott Shriver of the Leeds School of Business at the University of Colorado; and Shaoyin Du of the Simon Business School at the University of Rochester.

According to the study authors, while most Americans decide not to opt out of online advertising, 0.23 percent of American online ad impressions arise from users who do opt out. These users, in effect, have opted out of the use of cookies to track their online navigation of a particular site. This group was the focus of the study's research to determine the impact of cookie removal on publishers.

In 2010, the American advertising industry decided to self-regulate by implementing its AdChoices program, which gives consumers the option to opt out of behaviorally targeted online advertising simply by clicking the "AdChoices" icon overlaid on ads.

"In addition to finding that only a small percentage of Americans actually decided to opt out of online ads, one of our more important findings was that opt-out user ads tend to fetch 52 percent less revenue on the transaction than do comparable ads for users who allow behavior targeting, or opt in," said Johnson.

The study authors calculate that the inability to behaviorally target opt-out users results in a loss of roughly $8.58 in ad spending per American opt-out consumer. This cost is covered by publishers and the AdChoices exchange.
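To see how a per-impression price gap can add up to a cost of that order per user, consider the following back-of-the-envelope calculation. The 52 percent revenue reduction is taken from the study; the baseline ad price and the number of impressions a user sees per year are hypothetical inputs chosen only for illustration.

```python
# Worked illustration (hypothetical figures except where noted): how a
# per-impression price gap translates into a yearly cost per opt-out user.
# The study reports a 52% revenue reduction on opt-out impressions and a
# loss of roughly $8.58 per opt-out consumer per year; the CPM and the
# impression count below are assumptions for illustration only.
baseline_cpm = 2.00          # hypothetical revenue per 1,000 targeted impressions ($)
revenue_reduction = 0.52     # reported drop for opt-out impressions
opt_out_cpm = baseline_cpm * (1 - revenue_reduction)

impressions_per_user_per_year = 9000   # hypothetical ad impressions seen by one user

loss_per_user = (baseline_cpm - opt_out_cpm) / 1000 * impressions_per_user_per_year
print(f"hypothetical yearly loss per opt-out user: ${loss_per_user:.2f}")
# -> about $9.36 with these assumed inputs, the same order of magnitude as
#    the $8.58 figure reported in the study.
```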

For context, while the American advertising industry maintains an opt-out system, European regulators favor an opt-in policy through the General Data Protection Regulation (GDPR) which requires users to provide consent before they see an ad targeting them.

"This study provides the first evidence of the adoption rate of AdChoice, which is 0.23 percent of American impressions," said Shriver. "More specifically, we were able to uncover a privacy paradox. Consumers' stated preferences overstate actual preference measures they take to assure their privacy. Multiple surveys show that about two-thirds of American consumers oppose online behavioral advertising, and 20 percent even claim to have opted out using AdChoices. Still, actual opt-out rates are much lower."

"Though few users tend to opt out, we note that certain types of users are more likely to opt out, and that has certain consequences for the advertising industry," said Du. "We find that opt-out rates are higher among users who install non-default browsers, such as Firefox and Chrome, which tells us that opt-out users are likely more technologically sophisticated. We also note substantial variation in opt-out rates by region by city and state and by certain demographics."

Credit: 
Institute for Operations Research and the Management Sciences

Chemists allow boron atoms to migrate

Organic molecules with atoms of the semi-metal boron are among the most important building blocks for synthesis products that are needed to produce drugs and agricultural chemicals. However, during the usual chemical reactions used in industry, the valuable boron unit, which can replace another atom in a molecule, is often lost. Chemists at the University of Münster have now succeeded in significantly expanding the range of applications of commercially and industrially used boron compounds, so-called allylboronic esters. The study has been published in the scientific journal "Chem".

Since so-called boronic acid derivatives are versatile and reliable to work with in their many variants, chemists often use them to build up important carbon-carbon couplings (C-C couplings). The most important process using boronic acid derivatives is the Nobel Prize-winning Suzuki-Miyaura coupling. Also widely used in synthesis are the so-called allylboronic esters, which also belong to this class of boron compounds.

In their current study, the chemists headed by Prof. Armido Studer of the Organic Chemical Institute at Münster University are now presenting C-C couplings in which the boron unit from the starting material is retained in the product. The scientists use methods of so-called radical chemistry for this purpose. The principle works like this: The boron unit "migrates" from one carbon atom to the neighbouring atom, thus enabling a second C-C coupling.

Using this method, the chemists can gradually incorporate individual building blocks of molecules at different points in the basic structure. "Since the boron unit remains in the product molecule, i.e. is 'preserved', it can be replaced by another molecular unit, which can be done using the entire spectrum of industrial methods. The commercially available allylboronic esters thus appear in a new guise," says Armido Studer, the lead author of the study. In the future, the new method may be relevant for the production of pharmaceuticals, among other things.

Credit: 
University of Münster

Math that feels good

image: An undergraduate mathematics textbook that was translated into Braille using a new, automated method.

Image: 
American Institute of Mathematics

Mathematics and science Braille textbooks are expensive and require an enormous effort to produce -- until now. A team of researchers has developed a method for easily creating textbooks in Braille, with an initial focus on math textbooks. The new process is made possible by a new authoring system which serves as a "universal translator" for textbook formats, combined with enhancements to the standard method for putting mathematics in a Web page. Basing the new work on established systems will ensure that the production of Braille textbooks will become easy, inexpensive, and widespread.

"This project is about equity and equal access to knowledge," said Martha Siegel, a Professor Emerita from Towson University in Maryland. Siegel met a blind student who needed a statistics textbook for a required course. The book was ordered but took six months (and several thousand dollars) to prepare, causing the student significant delay in her studies. Siegel and Al Maneki, a retired NSA mathematician who serves as senior STEM advisor to the National Federation of the Blind and who is blind himself, decided to do something about it.

"Given the amazing technology available today, we thought it would be easy to piece together existing tools into an automated process," said Alexei Kolesnikov. Kolesnikov, a colleague of Siegel at Towson University, was recruited to the project in the Summer of 2018. Automating the process is the key, because currently Braille books are created by skilled people retyping from the printed version, which involves considerable time and cost. Converting the words is easy: Braille is just another alphabet. The hard part is conveying the structure of the book in a non-visual way, converting the mathematics formulas, and converting the graphs and diagrams.

The collaboration that solved the problem was formed in January 2019 with the help of the American Institute of Mathematics, through its connections in the math research and math education communities.

"Mathematics teachers who have worked with visually impaired students understand the unique challenges they face," said Henry Warchall, Senior Adviser in the Division of Mathematical Sciences at the National Science Foundation, which funds the American Institute of Mathematics. "By developing an automated way to create Braille mathematics textbooks, this project is making mathematics significantly more accessible, advancing NSF's goal of broadening participation in the nation's scientific enterprise."

There are three main problems to solve when producing a Braille version of a textbook. First is the overall structure. A typical textbook uses visual clues to indicate chapters, sections, captions, and other landmarks. In Braille all the letters are the same size and shape, so these structural elements are described with special symbols. The other key issues are accurately conveying complicated mathematics formulas, and providing a non-visual way to represent graphs and diagrams.

The first problem was solved by a system developed by team member Rob Beezer, a math professor at the University of Puget Sound in Washington. Beezer sees this work as a natural extension of a dream he has been pursuing for several years. "We have been developing a system for writing textbooks which automatically produces print versions as well as online, EPUB, Jupyter, and other formats. Our mantra is Write once, read anywhere." Beezer added Braille as an output format in his system, which is called PreTeXt. Approximately 100 books have been written in PreTeXt, all of which can now be converted to Braille.

Math formulas are represented using the Nemeth Braille Code, initially developed by the blind mathematician Abraham Nemeth in the 1950s. The Nemeth Braille in this project is produced by MathJax, a standard package for displaying math formulas on web pages. Team member Volker Sorge, of the School of Computer Science at the University of Birmingham, noted, "We have made great progress in having MathJax produce accessible math content on the Web, so the conversion to Braille was a natural extension of that work." Sorge is a member of the MathJax consortium and the sole developer of Speech Rule Engine, the system that is at the core of the Nemeth translation and provides accessibility features in MathJax and other online tools.

"Some people have the mistaken notion that online versions and screen readers eliminate the need for Braille," commented project co-leader Al Maneki. Sighted learners need to spend time staring at formulas, looking back and forth and comparing different parts. In the same way, a Braille formula enables a person to touch and compare various pieces. Having the computer pronounce a formula for you is not adequate for a blind reader, any more than it would be adequate for a sighted reader.

It will be particularly useful for visually impaired students to have simultaneous access to both the printed Braille and an online version.

Graphs and diagrams remain a unique challenge to represent non-visually. Many of the usual tools for presenting information, such as color, line thickness, and shading, are not available in tactile graphics. The tips of our fingers have a much lower resolution than our eyes, so the image has to be bigger (yet still fit on the page). The labels included in the picture have to be translated to Braille and placed so that they do not interfere with the drawn lines. Diagrams that show three-dimensional shapes are particularly hard to "read" in a tactile format. Ongoing work will automate the process of converting images to tactile graphics.

This work is part of a growing effort to create high-quality free textbooks. Many of the textbooks authored with PreTeXt are available at no cost in highly interactive online versions, in addition to traditional PDF and printed versions. Having Braille as an additional format, produced automatically, will make these inexpensive textbooks also available to blind students.

The group has begun discussions with professional organizations to incorporate Braille output into the production system for their publications.

Credit: 
American Institute of Mathematics

A technology for embedding data in printed objects

image: Embedding information in a printed object.

Image: 
Optical Media Interface Lab, NAIST

A team from Nara Institute of Science and Technology (NAIST), composed of Ph.D. Student Arnaud Delmotte, Professor Yasuhiro Mukaigawa, Associate Professor Takuya Funatomi, Assistant Professor Hiroyuki Kubo, and Assistant Professor Kenichiro Tanaka, has developed a new method to embed information in a 3D printed object and retrieve it using a consumer document scanner. Information such as a serial ID can be embedded without modifying the shape of the object, and be simply extracted from a single image of a commercially available document scanner.

There are several technologies for 3D printing, but the most commonly used consists of depositing layers of molten plastic on top of each other. This method is known as Fused Deposition Modeling (FDM). Generally, plastic deposition is performed with layers of constant thickness. In the proposed method, however, pairs of vertically adjacent layers are selected, and their thickness balance is modified according to the information to be embedded. This modification of the thickness balance has little effect on the external shape of the object.

Additionally, the thickness of the printed layers can be measured precisely by scanning the object with a document scanner. The developed method can then detect the changes in layer thickness and extract the embedded information.

The results of this research were published in the international academic journal IEEE Transactions on Multimedia (TMM) on December 25, 2019.

Background and purpose:

"Digital Watermarking" is a technology that embeds information inside digital contents such as image, audio, video, and 3D models. Some methods, such as barcode and QR code, embed information in a visible way. Other methods embed it covertly, with the additional information hidden in the content and not perceivable by the user. Since 2010, the 3D printing technology has increasingly gained popularity, leading to a growing interest in the watermarking technology for 3D printed objects. In this research, we proposed a new method to embed a watermark during the printing of an object, and we focused on minimizing the distortion on the outer shape to prevent perturbation on the original function of the object.

Characteristics:

There are several technologies for 3D printing. Among them, Fused Deposition Modeling (FDM) is the most commonly used. It consists of depositing layers of molten plastic on top of each other. The desired shape is obtained by precisely controlling the position and flow of the printing nozzle such that the deposited plastic layers have a controlled path and thickness. Generally, the plastic flow is controlled to produce a constant layer thickness. In our method, however, the plastic flow is modified during the print to locally change the layer thickness and thereby embed additional information. In order to prevent degradation of the external surface of the object, pairs of vertically adjacent layers are selected and the ratio of their respective thicknesses is modified while keeping the sum of the two layer thicknesses constant. Since a standard layer thickness is about 0.2 mm, information can be embedded in a relatively small area ranging from several millimeters to a few centimeters.
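A minimal sketch of the layer-pair idea described above follows. The 0.2 mm base thickness comes from the text; the modulation depth and the simple ratio-based decoding rule are assumptions for illustration, not NAIST's published encoder.

```python
# Illustrative sketch: each bit shifts the thickness balance of a pair of
# adjacent layers while the pair's total thickness stays constant, so the
# outer shape is preserved. DELTA is an assumed modulation depth.
BASE = 0.20    # standard layer thickness in mm (from the article)
DELTA = 0.03   # assumed modulation applied within each pair, in mm

def encode_bits(bits):
    """Return a list of layer thicknesses carrying the given bits."""
    layers = []
    for bit in bits:
        shift = DELTA if bit else -DELTA
        # pair sum is always 2 * BASE, so the external shape is unchanged
        layers += [round(BASE + shift, 3), round(BASE - shift, 3)]
    return layers

def decode_layers(layers):
    """Recover bits from measured layer thicknesses (e.g. from a scanner image)."""
    return [1 if layers[i] > layers[i + 1] else 0 for i in range(0, len(layers), 2)]

message = [1, 0, 1, 1, 0]
printed = encode_bits(message)
assert decode_layers(printed) == message
print(printed)   # [0.23, 0.17, 0.17, 0.23, ...] -- every pair sums to 0.40 mm
```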

To retrieve the embedded information, it is necessary to measure the thickness of the layers. Our method can do this using only a common document scanner and does not require any special equipment. The FDM printing process naturally produces layering artifacts that are visible in the images obtained by a document scanner. These artifacts allow us to measure the thickness of the layers and extract the information.

Future perspective:

With this method, it is possible to embed various types of information, such as a URL that links to Web services, a unique ID for product tracing, or the printer ID and printing date for batch quality management.

Credit: 
Nara Institute of Science and Technology

New method detects toxin exposure from harmful algal blooms in human urine

image: Adam Schaefer, MPH, co-author and an epidemiologist at FAU's Harbor Branch (center) and faculty and collaborators from FAU's Christine E. Lynn College of Nursing, collected urine, nasal swabs and blood from residents of St. Lucie, Indian River, Palm Beach and Martin counties as a part of a cross-sectional exposure study to assess human exposure to microcystins during the 2018 algal blooms.

Image: 
Florida Atlantic University

Blooms of toxin-producing algae exploded in both fresh and salt water ecosystems in southern Florida during the summer months of 2018, impacting wildlife and the people living around these waters. During harmful algal blooms, species of cyanobacteria release toxic peptides, including microcystins and nodularin, into waterways.

Human exposure comes from ingestion, direct skin contact, or inhalation and can lead to a variety of symptoms ranging from gastroenteritis, nausea, allergic reactions and skin rashes to hepatic injury and hemorrhage in more severe cases. Microcystins also have been linked to tumor progression and are harmful to renal, immune and reproductive systems.

A researcher from Florida Atlantic University's Harbor Branch Oceanographic Institute collaborated with the United States Centers for Disease Control and Prevention to test a newly developed immunocapture protein phosphatase inhibition assay (IC-PPIA) method for detection of microcystins and nodularin in human urine. This method uses a commercially available antibody to specifically isolate microcystins and nodularin from human urine prior to measurement.

Results of the study, published in the journal Toxins, demonstrate that the IC-PPIA method developed by the CDC was able to detect low-dose human exposure to microcystins: three of the 86 urine specimens analyzed with the new method yielded positive results, with concentrations of 0.055, 0.089 and 0.052 ng/mL MC-LR equivalents. These findings are the first to report microcystin concentrations measured directly in residents exposed to cyanobacteria in Florida.

"This new test can detect even low-dose human exposure to microcystins and nodularin, so this method will be important as we study the long-term health impacts of harmful algal blooms, especially the low-level concentrations from human inhalation exposure," said Adam Schaefer, MPH, co-author and an epidemiologist at FAU's Harbor Branch. "This method could complement water monitoring programs by identifying human exposures to these toxins at the time of harmful algal blooms and will assist our ongoing research to elucidating health effects associated with these algal blooms. This research is a critical step in developing and interpreting clinical diagnostic tests for harmful algal bloom exposure around the world."

To assess human exposure to microcystins during the 2018 algal blooms, Schaefer and faculty and collaborators from FAU's Christine E. Lynn College of Nursing collected urine, nasal swabs and blood from residents of St. Lucie, Indian River, Palm Beach and Martin counties as part of a cross-sectional exposure study. A comprehensive questionnaire that included questions on potential routes of exposure to the blooms, fish consumption, and demographic data was administered at the time of sample collection.

Credit: 
Florida Atlantic University

Older undiagnosed sleep apnea patients need more medical care

image: Older adults with undiagnosed obstructive sleep apnea seek more health care, according to a study published in the January issue of the Journal of Clinical Sleep Medicine. The study examined the impact of untreated sleep apnea on health care utilization and costs among Medicare beneficiaries.

Image: 
Journal of Clinical Sleep Medicine

DARIEN, IL - Older adults with undiagnosed obstructive sleep apnea seek more health care, according to a study published in the January issue of the Journal of Clinical Sleep Medicine. The study examined the impact of untreated sleep apnea on health care utilization and costs among Medicare beneficiaries.

The authors reviewed a sample of Medicare claims data and found that patients diagnosed with sleep apnea sought medical care more frequently and at higher cost in the 12 months prior to their diagnosis than patients without the sleep disorder. Compared with the control group, those with untreated sleep apnea had greater health care utilization and costs across all points of service, including inpatient, outpatient, emergency and prescription medications in the year leading up to their sleep apnea diagnosis.

Researchers at the University of Maryland and Columbia University also observed that Medicare patients with sleep apnea were more likely to suffer from other ailments than those without the sleep disorder. Sleep apnea is linked to an increased risk for high blood pressure, diabetes, heart disease, stroke and depression. The study authors suggest that insurers, legislators and health system leaders consider routine screening for sleep apnea in older patients, especially those with medical and psychiatric comorbidities, to better contain treatment costs.

"Sleep disorders represent a massive economic burden on the U.S. health care system," said lead author Emerson Wickwire, Ph.D., associate professor of psychiatry and medicine at the University of Maryland School of Medicine, and director of the insomnia program at the University of Maryland Medical Center - Midtown Campus. "Medicare beneficiaries with obstructive sleep apnea cost taxpayers an additional $19,566 per year. It's important to realize that costs associated with untreated sleep disorders are likely to continue to accrue year after year, which is why our group focuses on early recognition and treatment."

Nearly 30 million adults in the U.S. have obstructive sleep apnea, a chronic disease that involves the repeated collapse of the upper airway during sleep. Common warning signs include snoring and excessive daytime sleepiness. A common treatment is PAP therapy, which uses mild levels of air pressure, provided through a mask, to keep the throat open during sleep.

A 2016 report commissioned by the American Academy of Sleep Medicine estimated that undiagnosed sleep apnea among U.S. adults costs $149.6 billion annually. While the report projected it would cost the health care system nearly $50 billion to diagnose and treat every American adult with sleep apnea, treatment would produce savings of $100 billion. The current study in JCSM is the largest analysis to date of the economic burden of untreated sleep apnea among older adult Medicare beneficiaries.

"The good news," explained Dr. Wickwire, "is that highly effective diagnostic and treatment strategies are available. Our team is currently using big data as well as highly personalized sleep disorders treatments to improve outcomes and reduce costs associated with sleep disorders."

In a related commentary also published in the January JCSM, Meir Kryger, M.D., applauds the study's authors for their analysis of the economic impact of sleep apnea among older adults, highlighting the need for early diagnosis and treatment. He notes that the findings confirm that sleep apnea patients are heavy users of health care five to 10 years before their sleep apnea is diagnosed, and echoes the researchers' call for improved sleep apnea detection and screening.

Credit: 
American Academy of Sleep Medicine