Tracking life's first step: Two molecules 'awaken' brand new genome

Within hours after fertilization, a unique genome forms from chromosomes contributed by the egg and sperm. However, this new genome is initially inactive and must be "awakened" to begin the transcription of its DNA and start embryonic development. How life's first step happens has long fascinated developmental biologists.

"This is the beginning of life from a molecular standpoint," said Antonio Giraldez, the Fergus F. Wallace Professor of Genetics and chair of the Department of Genetics at Yale. "What hasn't been clear is how and what kick-starts the transcriptional activity of the embryonic genome, so that the embryo starts taking control of its own development with its own genetic blueprint."

Giraldez and his research team at Yale found that two specific factors are needed to activate DNA in the newly formed genome, they report June 17 in the journal Developmental Cell. Using a combination of microscopy and RNA sequencing analysis, they revealed the pattern of transcription within the nucleus of a zebrafish embryo.

The researchers also developed tools to visualize the first activated gene in the zebrafish genome, known as miR-430, and trace the gene's activity in living embryos.

"Now we can visually look inside the nucleus and analyze how the first gene within the silent genome is awakened," said Shun Hang Chan, lead author of the study and a doctoral student in the Giraldez lab.

The activation of the zebrafish genome requires the presence of two proteins -- P300 and Brd4 -- produced by the mother. Disrupting the activity of these two proteins with small-molecule inhibitors prevents activation of the embryonic genome, which in turn blocks development of the embryo. Conversely, the researchers found that they could overcome this block -- and even prematurely kick-start activation of the embryonic genome -- by artificially introducing P300 and Brd4.

"These molecular factors act as a sort of molecular timer, which sets the timing of the genome's awakening," Giraldez said. "Finding these key factors involved in genome activation serves as the critical first step towards our understanding of how life begins."

Credit: 
Yale University

Two genes implicated in development of prostate enlargement, Stanford study finds

For aging men, prostate enlargement is almost as common as graying hair, and yet scientists know very little about why the prostate increases in size or how the process occurs on a molecular level.

In a new study, scientists at the Stanford University School of Medicine have discovered a molecular pattern that flags prostate enlargement, also called benign prostatic hyperplasia, and have even identified two genes that likely play a role in the development of the condition.

The urethra runs directly through the prostate, a gland in the male reproductive system. And while a bigger prostate is not typically life-threatening in itself, it can cause urinary-related symptoms that range from niggling to severe. When the prostate becomes enlarged, it squeezes the urinary tube, causing problems such as incontinence or urinary urgency.

"It can be a terrible bother and, in the most severe cases, can even lead to kidney failure," said James Brooks, MD, professor of urology. Today's treatments work to an extent, but don't completely solve the issues, he added. "Urology as a field needs to do more to own this problem and figure out what the true underlying causes are so we can curb its prevalence and help treat it more effectively."

The new study is one of the first to describe a molecular landscape that differentiates enlarged prostate tissue from normal tissue. The team of scientists also discovered that the cell growth behind a ballooning prostate is not uniform. Several cell types comprise the prostate, and abnormal growth appears to come from an outburst of specific sets of cells, rather than an overall increase of all cell types.

A paper describing the study is available online now and will be included in the June 20 issue of JCI Insight. Brooks and professors of pathology Jonathan Pollack, MD, PhD, and Robert West, MD, PhD, share senior authorship. Former MD-PhD student Lance Middleton is the lead author.

The plight of the prostate

No other gland in the human body, male or female, expands so predictably with age.

Fifty percent of men who are 50 years old have an enlarged prostate, and with every decade that figure rises by about 10 percentage points (60 percent of men who are 60, 70 percent of men who are 70 and so on). A normal prostate is about the size of a walnut, but it can grow to twice that size, sometimes more.

"Researchers have hunted for mutations or growth factors that could trigger prostate growth, but there hasn't been much progress in finding a true cause," Brooks said.

Brooks, Pollack and West took a multipronged approach in search of the answer, analyzing 49 tissue samples from patients who had their prostates removed. The odd thing about prostate enlargement, Brooks said, is that the entire prostate doesn't grow in unison; only certain parts of it expand. Some areas of the prostate actually remain unchanged.

Genomic analysis showed that most of the enlarged areas of the prostates consisted primarily of two types of cells -- epithelial, which make up secretory glands, and stromal fibroblasts, which create structural parts of the prostate. That's not normal, Brooks said, and it clued the researchers into a new understanding of prostate growth: Only some cell types multiply in an enlarged prostate, taking over -- and sometimes eliminating -- other cell types, like weeds in a garden plot.

"So it's not just an increase in cells; it's a fundamental shift in the type of cells that make up the prostate. It's something we've termed 'cellular relandscaping,'" Pollack said. "It's possible that this shift is actually related to the disease progression, and not just arbitrary." One of the overrun cell types, Pollack said, is thought to be involved in the regulation of epithelial cell growth and development.

65-gene signature

Beyond cell type, the researchers analyzed the molecular state of normal and enlarged prostate tissues, looking at data that showed which genes were active in enlarged prostate samples and which were active in normal samples. By comparing gene activity, they found 65 genes whose expression patterns strongly correlated with prostate enlargement. In other words, tissue samples of enlarged prostates reliably showed this gene signature, whereas healthy samples did not. What's more, patients whose prostate tissues strongly correlated with this gene signature reported more severe symptoms.
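The study's exact scoring method isn't described in this summary, but a gene-expression signature of this kind is often summarized by correlating each sample's expression of the signature genes with a reference profile derived from affected tissue. A minimal sketch, with made-up gene values standing in for the 65 signature genes:

```python
import numpy as np

# Hypothetical normalized expression values for illustration only
# (rows would be the 65 signature genes in a real analysis).
signature_profile = np.array([2.1, -1.3, 0.8, 1.7])   # mean enlarged-tissue profile
samples = {
    "enlarged_1": np.array([2.0, -1.1, 0.9, 1.5]),
    "normal_1":   np.array([-0.4, 0.2, -0.1, 0.3]),
}

def signature_score(expr, profile):
    """Pearson correlation between a sample's expression and the signature."""
    return float(np.corrcoef(expr, profile)[0, 1])

for name, expr in samples.items():
    print(name, round(signature_score(expr, signature_profile), 2))
```

In a study like this one, the reference profile would come from the differential-expression analysis, and samples scoring high would be the ones flagged as matching the enlarged-prostate signature.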

While the overall signature is only correlation at this point, Brooks and Pollack have singled out two genes involved in cell signaling that they suspect may play a role in the condition's development. One, CXCL13, codes for a protein involved in immune cell recruitment, which Pollack said makes sense because prostate enlargement involves inflammation. The other gene, BMP5, codes for a molecule involved in cell identity and development. Whereas CXCL13 effects are complicated to model in the lab, it's relatively easy to manipulate BMP5. So the researchers rigged an experiment to test if adding a BMP5-laden concoction could change the characteristics of normal prostate tissue. They found that healthy prostate samples could be coerced into expressing the 65-gene signature seen in enlarged prostates.

"They even start to proliferate a little bit," Brooks said. "It's quite remarkable that with this one molecule, we can turn healthy samples into samples that mirror the molecular landscape of an enlarged prostate."

It's still early in the research, the scientists said, and more work needs to be done to confirm the role of BMP5 and CXCL13. But it's a promising step toward finding new avenues for drug development.

Credit: 
Stanford Medicine

Moral emotions, a diagnostic tool for frontotemporal dementia?

Paris, France -- June 14, 2019 -- A study conducted by Marc Teichmann and Carole Azuar at the Brain and Spine Institute in Paris (France) and at the Pitié-Salpêtrière Hospital shows a particularly marked impairment of moral emotions in patients with frontotemporal dementia (FTD). The results, published in the Journal of Alzheimer's Disease, open a new approach for early, sensitive and specific diagnosis of FTD.

Frontotemporal dementia is a cognitive and behavioral disease caused by degenerative alteration of anterior regions of the brain. The disease is characterized by behavioral disorders such as progressive apathy, loss of interest, social withdrawal, disinhibition and impaired processing of emotions.

"We have known for a long time that these patients demonstrate impairment of emotion recognition and of theory of mind, i.e. the ability to figure out the mental states of others: what they think, what they feel, what they like... But does this emotional blunting also affect a specific kind of emotion called moral emotions, which are crucial for human interactions?" asks Marc Teichmann, coordinator of the study.

Moral emotions can be defined as "affective experiences promoting cooperation and group cohesion," including emotions such as admiration, shame or pity. They are distinct from other emotions in that they are strongly linked to the cultural context, moral rules and innate moral representations. In the context of FTD, which is primarily characterized by an impairment of behavior and social interactions, studying this particular set of emotions is a major issue for better understanding the disease and refining diagnostic accuracy.

In the present study, researchers and clinicians from the ICM - Brain and Spine Institute and the Pitié-Salpêtrière Hospital developed a test to assess moral emotions. It is composed of 42 scenarios for which the subject has to select, out of 4 possible responses, the feeling s/he would have in the scenario situation. The performance of FTD patients (N=22) was compared to that of 45 healthy subjects and of 15 patients with Alzheimer's disease. To evaluate the specificity of the impairment of moral emotions in FTD, the researchers contrasted the 42 moral scenarios, which involve an inter-human context and elicit moral emotions, with scenarios eliciting similar emotions without any moral valence. For example, it is possible to feel admiration for both an altruistic act and the architecture of a building. In both cases, the emotion is identified as admiration, but the context is entirely different (moral versus extra-moral).

The results show that moral emotions are much more impaired than emotions without moral valence. In contrast, patients with Alzheimer's disease had no impairment as compared to healthy subjects and they had similar performance with moral and extra-moral emotions.

"Our findings confirm that emotions in general are impaired in FTD, and they reveal a particularly profound alteration of moral emotions. Our novel test appears to provide an early, sensitive and specific marker for FTD diagnosis while reliably distinguishing FTD patients from Alzheimer's disease patients. It could also be a marker for other conditions involving the breakdown of moral emotions, as for example in psychopathic individuals," concludes Marc Teichmann.

Credit: 
IOS Press

Uniform-shape polymer nanocrystals created

image: Concept to obtain uniform size and shape particles by controlled polymerization on a molecular as well as particle level.

Image: 
Stefan Mecking and Manuel Schnitte

A team of researchers from the University of Konstanz's CRC 1214 "Anisotropic Particles as Building Blocks: Tailoring Shape, Interactions and Structures", funded by the German Research Foundation (DFG) since 2016, has demonstrated a new aqueous polymerization procedure for generating single-chain polymer nanoparticles of uniform shape which, in another departure from previous methods, is achieved at high particle concentrations. The corresponding paper, entitled "Uniform shape monodisperse single chain nanocrystals by living aqueous catalytic polymerization", is due for open-access online publication in Nature Communications on June 13, 2019.

Nanoparticles are the building blocks for envisioned nanoparticle-based materials with yet unachieved optical, electronic and mechanical properties. To build nanomaterials, nanoparticles with uniform shapes and sizes are required. While inorganic metal or metal oxide nanoparticles suitable for assembly can be generated in a variety of shapes, it has been very difficult until now to manufacture polymer nanoparticles in shapes other than spheres, as Stefan Mecking, Professor of Chemical Materials Science at the University of Konstanz's Department of Chemistry and Vice Speaker of CRC 1214, points out: "In previous approaches, single-chain particles were prepared by post-polymerization collapse or assembled from solutions of separately synthesized chains. What we have managed to do is to demonstrate direct polymerization to single-chain uniform-shape monodisperse nanocrystals for polyethylene, which is the largest and most important synthetic polymer material".

One major challenge associated with this approach is to achieve living chain and particle growth that can be sustained for several hours and up to very high molecular weights, ideally yielding single-chain nanocrystals of ultra-high molecular weight polyethylene. To achieve this, the researchers developed advanced catalysts. "We then conducted a series of pressure reactor tests to identify ideal conditions for maintaining catalytic activity over longer periods of time and to gain insights into the chain and particle growth process", explains Mecking. "In addition to the novel catalysts, control of the colloidal state of the reaction mixture is another key element in obtaining the desired aqueous particle dispersions".

In contrast to many post-polymerization procedures, the aqueous polymerization procedure elaborated by Stefan Mecking and his team yields high particle number densities, which are comparable to commercial polymer dispersions used for coatings, paints and other applications. Using transmission electron microscopy (TEM), the researchers were able to confirm that the particles thus generated are composed of a single chain, display a uniform shape and size distribution and do not aggregate. "While our assemblies may not fully match the extensively optimized assemblies of inorganic nanoparticles, they seem to be very promising", concludes Mecking. "In time, our insights into the creation of anisotropic polymer nanocrystals using aqueous catalytic polymerization may enable us to create polymer materials based on nanoparticle assembly".

Credit: 
University of Konstanz

Many choices seem promising until you actually have to choose

BUFFALO, N.Y. - People faced with more options than they can effectively consider want to make a good decision, but feel they're unable to do so, according to the results of a novel study from the University at Buffalo that used cardiovascular measures and fictional dating profiles to reach its conclusions.

Despite the apparent opportunities presented by a lot of options, the need to choose creates a "paralyzing paradox," according to Thomas Saltsman, a graduate student in the UB Department of Psychology and co-author of the study with Mark Seery, an associate professor of psychology at UB.

"You want to make a good choice, but feel like you can't," says Saltsman. "This combination of perceiving high stakes and low ability may contribute to a deep-seated fear that one will inevitably make the wrong choice, which could stifle the decision-making process."

To manage the seemingly unmanageable, Saltsman says to consider the relative importance of the choice at hand.

"Choosing the wrong menu item for dinner or what to binge-watch is not going to define you as a person," he says. "It may also be helpful to enter high-choice situations with a few clear guidelines of what you want from your desired option. Doing so may not only help scale down the number of possible choices, by eliminating options that do not meet your guidelines, but may also bolster confidence and trust in your ability to find a choice that meets your needs."

The findings are published in the journal Biological Psychology.

Previous research clearly establishes how choice overload is associated with negative outcomes, but this research looks specifically at two understudied motivational factors of decision-making: how valuable the decision is to someone, and to what extent people view themselves as capable of making a good choice.

Having choices seems like an appealing situation that speaks to freedom and autonomy. But the emerging digital realities of forums like online shopping and entertainment can be overwhelming.

Searching online for a spring jacket can return thousands of hits. One streaming service claims to offer more than 7,000 titles, while online dating services can enroll millions of subscribers.

All of those choices seem like a great idea, according to Seery. Until you're actually the one having to choose.

"We love having these choices, but when we're actually faced with having to choose from among those countless options, the whole process goes south," says Seery. "Research shows that, after the fact, people often regret their decision in these cases, but what our research suggests is that this kind of turn - the inherent paradox of liking choices and then being troubled by choices - happens almost immediately.

"That transition is fascinating."

For the research, the team had nearly 500 participants across three different experiments, two of which used psychophysiological measures.

"We had participants read through fictional dating profiles and asked them to consider their ideal partner," says Saltsman. "Because we used psychophysiological measures, we wanted people faced with a choice that demanded consideration and kept them actively engaged."

Those measures include heart rate and how hard the heart is pumping. When people care more about a decision, Seery says, their heart rate increases and beats harder. Other measures, like how much blood the heart is pumping and the degree to which blood vessels dilate, indicate levels of confidence.
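The study's precise indices aren't given here, but in the challenge/threat literature these cardiovascular measures are commonly combined by standardizing cardiac output (CO) and total peripheral resistance (TPR) across participants, with higher CO and lower TPR read as a more confident, "challenge-like" state. A rough sketch with made-up reactivity values (this sign convention is one common formulation, not necessarily this study's):

```python
import statistics

# Made-up change-from-baseline values for four hypothetical participants.
cardiac_output = [0.8, -0.2, 0.5, -0.6]               # L/min change
peripheral_resistance = [-90.0, 120.0, -40.0, 150.0]  # dyne-s/cm^5 change

def zscores(xs):
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

# Higher index = more challenge-like (confident) response.
index = [zc - zt for zc, zt in zip(zscores(cardiac_output),
                                   zscores(peripheral_resistance))]
for i, v in enumerate(index, 1):
    print(f"participant {i}: {v:+.2f}")
```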

The results showed that when faced with a large number of profiles to choose from rather than a small number, participants' hearts and blood vessels revealed that they experienced making their choice as being both more important and more overwhelming. This occurred during the deliberation process.

Although additional work is needed, this study can help us understand the relationship between choice overload and negative outcomes.

"Examining people's experiences in the moment may ultimately help us better understand those negative downstream choice overload outcomes and how to prevent them," says Saltsman.

Credit: 
University at Buffalo

New application can detect Twitter bots in any language

Thanks to fruitful collaboration between language scholars and machine learning specialists, a new application developed by researchers at the University of Eastern Finland and Linnaeus University in Sweden can detect Twitter bots independent of the language used.

In recent years, big data from various social media applications have turned the web into a user-generated repository of information in an ever-increasing number of areas. Because of the relatively easy access to tweets and their metadata, Twitter has become a popular source of data for investigations of a number of phenomena. These include, for instance, various political campaigns, social and political upheavals, Twitter as a tool for emergency communication, and the use of social media data to predict stock market prices.

However, research using social media data is often skewed by the presence of bots. Bots are non-personal and automated accounts that post content to online social networks. The popularity of Twitter as an instrument in public debate has led to a situation in which it has become an ideal target of spammers and automated scripts. It has been estimated that around 5-10% of all users are bots, and that these accounts generate about 20-25% of all tweets posted.

Researchers of the digital humanities at the University of Eastern Finland and Linnaeus University in Sweden have developed a new application that relies on machine learning to detect Twitter bots. The application is able to detect autogenerated tweets independent of the language used. The researchers captured for analysis a total of 15,000 tweets in Finnish, Swedish and English. Finnish and Swedish were mainly used for training, whereas tweets in English were used to evaluate the language independence of the application. The application is light, making it possible to classify vast amounts of data quickly and relatively efficiently.
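The paper's actual model isn't described in this summary; one way such a classifier can be language-independent (an assumption for illustration, not the authors' method) is to work on character n-grams rather than words, since templated autogenerated tweets repeat the same character sequences regardless of language. A toy nearest-centroid sketch:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram counts; computed identically for any language."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(texts, n=3):
    total = Counter()
    for t in texts:
        total.update(char_ngrams(t, n))
    return total

# Tiny made-up training sets: bot tweets repeat templated phrasing.
bot_train = ["buy now at example.com great deal",
             "buy now at example.com best deal"]
human_train = ["saw a marmalade hoverfly in the garden today",
               "long queue at the station this morning"]

def classify(tweet):
    b = cosine(char_ngrams(tweet), centroid(bot_train))
    h = cosine(char_ngrams(tweet), centroid(human_train))
    return "bot" if b > h else "human"

print(classify("buy now at example.com amazing deal"))
```

A real system would train a proper classifier over many such features and labeled accounts; this sketch only illustrates why character-level features transfer across languages.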

"This enhances the quality of data - and paints a more accurate picture of the reality," Professor of English Mikko Laitinen from the University of Eastern Finland notes.

According to Professor Laitinen, bots are relatively harmless, whereas trolls do harm as they spread fake news and come up with made-up stories. This is why there's a need for increasingly advanced tools for social media monitoring.

"This is a complex issue and requires interdisciplinary approaches. For instance, we linguists are working together with machine learning specialists. This type of work also calls for determination and investment in research infrastructures that serve as platforms on which researchers from different fields can collaborate."

According to Professor Laitinen, it is essential for researchers to have access to social media data.

"Currently, data are the property of American technology conglomerates, and a source of their income. In order for researchers to gain access to this data, cooperation at the national and international levels, and especially the involvement of the EU are needed."

Credit: 
University of Eastern Finland

Genes for Good project harnesses Facebook to reach larger, more diverse groups of people

In 2015, a group of researchers hypothesized that our collective love of Facebook surveys could be harnessed for serious genetic studies. Today, the Genes for Good project (@genesforgood) has engaged more than 80,000 Facebook users, collected 27,000 DNA spit-kits, and amassed a trove of health survey data on a more diverse group of participants than has previously been possible. Researchers say their app could work as a model for studies on an even larger scale. Their work appears June 13 in The American Journal of Human Genetics.

"It's a very important step to allow participation remotely, because it opens the door to a lot of people who historically couldn't participate in genetic research, even if they had wanted to," says Katharine Brieger, a first author and MD/PhD student at the University of Michigan School of Public Health. "And having a more diverse population represented in study samples is critical for moving public health and genetic research forward."

"When I started doing genetic studies in the 90s, most studies just had a few hundred people," says senior author Goncalo Abecasis (@gabecasis), of the University of Michigan School of Public Health. Typically, people in the area would show up to a university lab to answer health surveys and give a blood sample. After that, researchers had a very costly and difficult time following up with those volunteers.

"You quickly got to a point where you exhausted what you could learn from those participants," Abecasis says. That experience inspired him and his colleagues to start thinking about how to use social media to expand and improve upon their research. The result was Genes for Good's approach: in exchange for answering surveys, participants receive a free in-home DNA spit-kit, analysis of their ancestry and DNA results, graphs and comparisons of their data, and (if requested) a file of their raw genotype information.

The researchers explored whether the study participants were a good representation of the U.S. population. Using government statistics as a comparison, they found the volunteers had disease rates and demographics similar to those of the rest of the country--although they were a little younger, had slightly fewer strokes, and skewed female. The participants also had diverse ancestry and were diverse geographically and economically. Most Genes for Good participants fell into the U.S. middle household income bracket of $35,000 to $100,000 a year. In contrast, most 23andMe users have a household income of more than $100,000 a year, according to a poster 23andMe presented in 2011 describing their research cohort demographics.

The researchers then analyzed the genetic data to assess the quality of the study and its data collection methods. Previous studies have identified genetic variants linked to physical traits, such as eye color or skin tone, and to health conditions such as asthma. When researchers compared results from their Genes for Good analyses to those from well-cited papers, they largely matched.

"We were quite pleased with our ability to replicate the findings of other large studies," says Brieger. "For example, in our sample, we were able to identify previously reported associations between specific genetic variants and traits such as BMI, as well as conditions such as type 1 and type 2 diabetes."

Genes for Good launched in 2015, and the number of participants quickly grew from a trickle to a deluge. "We get something like 2 or 3 percent growth every week, which corresponds to tripling every year," says Abecasis. The growth was so massive that researchers have had to pause the distribution of DNA spit-kits until they find more funding. With the proper funding, the researchers say Genes for Good can scale up to reach millions of users.

Social media dramatically increased Genes for Good recruitment, but it also created new privacy concerns. By design, the app only works as a portal on Facebook to connect users to Genes for Good servers. Genes for Good adheres to the University of Michigan's privacy standards and is subject to the university's white hat security tests. To access their raw genome data, participants go through two-factor authentication and must retrieve their data within three days. The researchers went further and asked the National Institutes of Health (NIH) for additional layers of privacy protection.

"We have a certificate of confidentiality, meaning we have a promise from the NIH that our data will not be used by the government," says Abecasis.

The researchers noticed that the participants were more forthright online in answering personal health questions than volunteers typically are in face-to-face interviews. They think connecting with study participants via Facebook will open the path for more in-depth studies in the future. Maintaining contact with participants through Facebook will allow the group to do longitudinal studies of health behaviors. Additionally, Facebook provides excellent opportunities to reach online communities of people with rare diseases.

"What we would really like to do next is to use the platform to see if this is an opportunity to engage with disease foundations for targeted studies at a very large scale," says Abecasis.

Credit: 
Cell Press

Migratory hoverflies 'key' as many insects decline

image: Marmalade Hoverfly -- Episyrphus balteatus.

Image: 
Will Hawkes

Migratory hoverflies are "key" to pollination and controlling crop pests amid the decline of many other insect species, new research shows.

University of Exeter scientists studied the movements of migratory hoverflies and were surprised to find up to four billion migrate to and from Britain each year.

The study shows these numbers have been relatively stable over the last decade, and such abundance means migratory hoverflies pollinate many billions of flowers and produce larvae that eat up to ten trillion aphids.

Recent research has suggested more than 40% of insect species worldwide are "threatened with extinction", creating a major threat to "ecosystem services" (benefits to humans from the natural environment, such as pollination of crops).

"The number of migrating hoverflies coming and going over Britain was much higher than we had expected," said Dr Karl Wotton, Royal Society research fellow at the University of Exeter.

"They are widely considered to be the second most important pollinators, after bees.

"They are especially important pollinators of wildflowers, soft fruits and brassica crops, and their larvae prey on various species of aphids - which are the key crop pest in Europe.

"This dual role makes them uniquely beneficial to humans."

Migrating hoverflies arrive in Britain in spring and, with a month-long life cycle, those that leave are descendants of the spring arrivals.

"We are net exporters of hoverflies," said Dr Jason Chapman, of the Centre for Ecology and Conservation on the University of Exeter's Penryn Campus in Cornwall.

"Each female can lay up to 400 eggs and, though many die as eggs or larvae, the departing population in autumn is larger than that arriving in spring.

"As well as their vital pollinating and aphid-eating roles, migrating hoverflies provide food for a range of predators including birds."

The study, supported by colleagues at Nanjing Agricultural University, Rothamsted Research, the University of Greenwich and the Max Planck Institute, used radar data on insects flying between 150m and 1km above the ground.

The hoverflies wait for favourable winds before migrating between Britain and mainland Europe.

Dr Chapman added: "Migrating insects are generally bucking the trend of decline that we're seeing with many other insects.

"Their mobility is probably a key part of this, as it allows them to move on to find the best habitats.

"Hoverflies are also generalists - the adults feed on many kinds of pollen and the larvae eat many aphid species.

"Considering that many beneficial insects are seriously declining, our results demonstrate that migrant hoverflies are key to maintaining essential ecosystem services."

Credit: 
University of Exeter

Does pornography use affect heterosexual women's intimate experiences with a partner?

image: Journal of Women's Health is a core multidisciplinary journal dedicated to the diseases and conditions that hold greater risk for or are more prevalent among women, as well as diseases that present differently in women.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, June 12, 2019--A new study has shown that the relationship between pornography and intimate partner experiences among heterosexual women is indirect and complex, in contrast to the more direct link among heterosexual men. Thoughts of pornographic material during intimate partner experiences, rather than simply having viewed pornographic material previously, were associated with high rates of appearance insecurity and reduced enjoyment of intimate acts during sex, according to the study published in Journal of Women's Health, a peer-reviewed publication from Mary Ann Liebert, Inc., publishers. The full-text article is available free on the Journal of Women's Health website through July 12, 2019.

The article entitled "Pornography and Heterosexual Women's Intimate Experiences with a Partner" was coauthored by Jennifer Johnson, PhD, Virginia Commonwealth University (Richmond), Matthew Ezzell, PhD, James Madison University (Harrisonburg, VA), Ana Bridges, PhD, University of Arkansas (Fayetteville), and Chyng Sun, PhD, New York University (New York City).

The researchers reported that while most U.S. women aged 18-29 in the study sample had seen pornography, fewer than half used it for masturbation. Those women who used it at higher rates for masturbation tended to rely more on pornographic scripts during sex to achieve and maintain arousal and were more likely to prefer pornography to sex with a partner.

"Dr. Johnson and colleagues demonstrate a clear difference between the role of pornography in sexual experiences of women compared to men," says Susan G. Kornstein, MD, Editor-in-Chief of Journal of Women's Health, Executive Director of the Virginia Commonwealth University Institute for Women's Health, Richmond, VA, and President of the Academy of Women's Health. "Whereas the relationship tends to be more direct in young heterosexual men, and just viewing pornographic material is associated with reduced sexual intimacy and satisfaction, women make the material part of their personal sexual experience and carry the pornographic script into their intimate partner experiences."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

The 'AI turn' for digital health: A futuristic view

image: OMICS: A Journal of Integrative Biology addresses the latest advances at the intersection of postgenomics medicine, biotechnology and global society, including the integration of multi-omics knowledge, data analyses and modeling, and applications of high-throughput approaches to study complex biological and societal problems.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, June 12, 2019-- The unprecedented implications of digital health innovations, being co-produced by the mainstreaming and integration of artificial intelligence (AI), the Internet of Things (IoT), and cyber-physical systems (CPS) in healthcare, are examined in a new technology horizon-scanning article. This digital transformation of healthcare is facilitated by the rapid rise in Big Data and real-time Big Data analytics. The detailed findings are published in OMICS: A Journal of Integrative Biology, the peer-reviewed interdisciplinary journal published by Mary Ann Liebert, Inc., publishers.

Vural Özdemir, MD, PhD, DABCP, Editor-in-Chief of OMICS: A Journal of Integrative Biology is the author of the article entitled "The Big Picture on the 'AI Turn' for Digital Health: The Internet of Things and Cyber-Physical Systems." He explores the current applications of AI to life sciences and digital health, for example, in interpreting the massive amounts of data generated by genomics and other omics applications. Dr. Özdemir describes the IoT and provides digital health-related examples of CPS, such as wearables for cardiac monitoring and healthcare robots.

"Digital data are highly fluid and can rapidly move across spaces and places whereas the physical data and humans are much slower and exist in different scales than our digital footprints," says Vural Özdemir. "It is therefore timely for the system sciences and integrative biology communities to critically engage with digital health and the related technologies such as AI, IoT and CPS."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

How we tune out distractions

CAMBRIDGE, MA - Imagine trying to focus on a friend's voice at a noisy party, or blocking out the phone conversation of the person sitting next to you on the bus while you try to read. Both of these tasks require your brain to somehow suppress the distracting signal so you can focus on your chosen input.

MIT neuroscientists have now identified a brain circuit that helps us to do just that. The circuit they identified, which is controlled by the prefrontal cortex, filters out unwanted background noise or other distracting sensory stimuli. When this circuit is engaged, the prefrontal cortex selectively suppresses sensory input as it flows into the thalamus, the site where most sensory information enters the brain.

"This is a fundamental operation that cleans up all the signals that come in, in a goal-directed way," says Michael Halassa, an assistant professor of brain and cognitive sciences, a member of MIT's McGovern Institute for Brain Research, and the senior author of the study.

The researchers are now exploring whether impairments of this circuit may be involved in the hypersensitivity to noise and other stimuli that is often seen in people with autism.

Miho Nakajima, an MIT postdoc, is the lead author of the paper, which appears in the June 12 issue of Neuron. Research scientist L. Ian Schmitt is also an author of the paper.

Shifting attention

Our brains are constantly bombarded with sensory information, and we are able to tune out much of it automatically, without even realizing it. Other distractions that are more intrusive, such as your seatmate's phone conversation, require a conscious effort to suppress.

In a 2015 paper, Halassa and his colleagues explored how attention can be consciously shifted between different types of sensory input, by training mice to switch their focus between a visual and auditory cue. They found that during this task, mice suppress the competing sensory input, allowing them to focus on the cue that will earn them a reward.

This process appeared to originate in the prefrontal cortex (PFC), which is critical for complex cognitive behavior such as planning and decision-making. The researchers also found that a part of the thalamus that processes vision was inhibited when the animals were focusing on sound cues. However, there are no direct physical connections from the prefrontal cortex to the sensory thalamus, so it was unclear exactly how the PFC was exerting this control, Halassa says.

In the new study, the researchers again trained mice to switch their attention between visual and auditory stimuli, then mapped the brain connections that were involved. They first examined the outputs of the PFC that were essential for this task, by systematically inhibiting PFC projection terminals in every target. This allowed them to discover that the PFC connection to a brain region known as the striatum is necessary to suppress visual input when the animals are paying attention to the auditory cue.

Further mapping revealed that the striatum then sends input to a region called the globus pallidus, which is part of the basal ganglia. The basal ganglia then suppress activity in the part of the thalamus that processes visual information.

Using a similar experimental setup, the researchers also identified a parallel circuit that suppresses auditory input when animals pay attention to the visual cue. In that case, the circuit travels through parts of the striatum and thalamus that are associated with processing sound, rather than vision.

The findings offer some of the first evidence that the basal ganglia, which are known to be critical for planning movement, also play a role in controlling attention, Halassa says.

"What we realized here is that the connection between PFC and sensory processing at this level is mediated through the basal ganglia, and in that sense, the basal ganglia influence control of sensory processing," he says. "We now have a very clear idea of how the basal ganglia can be involved in purely attentional processes that have nothing to do with motor preparation."

Noise sensitivity

The researchers also found that the same circuits are employed not only for switching between different types of sensory input such as visual and auditory stimuli, but also for suppressing distracting input within the same sense -- for example, blocking out background noise while focusing on one person's voice.

The team also showed that when the animals are alerted that the task is going to be noisy, their performance actually improves, as they use this circuit to focus their attention.

Halassa's lab is now doing similar experiments in mice that are genetically engineered to develop symptoms similar to those of people with autism. One common feature of autism spectrum disorder is hypersensitivity to noise, which could be caused by impairments of this brain circuit, Halassa says. He is now studying whether boosting the activity of this circuit might reduce sensitivity to noise.

"Controlling noise is something that patients with autism have trouble with all the time," he says. "Now there are multiple nodes in the pathway that we can start looking at to try to understand this."

Credit: 
Massachusetts Institute of Technology

Parents' lenient attitudes towards drinking linked to greater alcohol use among children

Alcohol use is one of the biggest risk factors for social and physical harm and has been linked to the development of diseases including cancer, diabetes, and liver and heart disease.

Even though the legal age to buy alcohol is 18 years and above in most countries, the 2015 European School Survey Project on Alcohol and Other Drugs found that almost half of 15-16-year-old students had consumed alcohol and 8% had been drunk by the age of 13.

Exposure to alcohol starts from an early age: children as young as two years old become aware of alcohol and are able to distinguish alcoholic from non-alcoholic drinks. From age four on, children start to understand that alcohol is usually restricted to adults and consumed in specific situations. Many studies have connected parents' behaviour and the home environment with children's alcohol use, but it is still unclear how parental attitudes influence their children's behaviour.

In a study published today in the journal Addiction, Mariliis Tael-Oeren and colleagues at Cambridge's Behavioural Science Group and the School of Health Sciences at the University of East Anglia (UEA) found that children whose parents had less restrictive attitudes towards their child's alcohol use were more likely to start drinking alcohol than their peers. They also drank - and got drunk - more frequently.

The findings come from a review of published articles examining parent-child pairs and the relationship between parental attitudes and their child's alcohol use. A review enables researchers to combine data from a large number of studies, sometimes with conflicting findings, to arrive at a more robust finding. The researchers pooled information from the 29 most relevant articles and analysed all the relevant information, which included data from almost 16,500 children and more than 15,000 parents in the US and Europe.

Mariliis Tael-Oeren, PhD student and lead author for the study, says: "Our study suggests that when parents have a lenient attitude towards their children drinking alcohol, this can lead to their child drinking more frequently - and drinking too much.

"Although the data was based on children and their parents in the US and Europe, we expect that our findings will also apply here in the UK."

Ms Tael-Oeren and colleagues also found a mismatch between what children think is their parent's attitude towards them drinking and what the parent's attitude actually is. Children were no more likely to start drinking alcohol if they perceived their parent to have a lenient attitude, but once they had started drinking, they were more likely to drink often.

"This mismatch doesn't mean that children perceive parental attitudes completely differently from their parents," explains Ms Tael-Oeren. "Instead, it could be that their perceptions are skewed towards thinking their parents have more lenient attitudes. This could be because their parents haven't expressed their attitudes in a way that the children really understand."

"Alcohol use can be problematic, particularly among young people. It's important that children understand the short and long term consequences of drinking. If parents don't want their children to drink, then our study suggests they need to be clear about the message they give out."

Senior author Professor Stephen Sutton says that social norms could lead to confusion among children. "Alcohol use is influenced by a variety of factors, including attitudes and social norms. If the social norm supports parents introducing alcohol to children, children might mistakenly assume that their parents are more lenient, even when this is not the case."

Dr Felix Naughton, from UEA's School of Health Sciences, adds: "Uncovering this mismatch in perceptions is important as it may have implications for parenting programmes designed to support families in reducing childhood alcohol use and indeed for parents who just want to know what they can do to protect their children."

Credit: 
University of Cambridge

Novel denoising method generates sharper photorealistic images faster

image: At SIGGRAPH 2019, a research team will present a new sample-based Monte Carlo denoising technique using a kernel-splatting network. Team members include:
Miika Aittala, Massachusetts Institute of Technology
Fredo Durand, Massachusetts Institute of Technology
Michael Gharbi, Adobe, MIT
Jaakko Lehtinen, Aalto University, NVIDIA
Tzu-Mao Li, MIT Computer Science and Artificial Intelligence Laboratory

Image: 
2019, courtesy ACM SIGGRAPH 2019

Monte Carlo computational methods are behind many of the realistic images in games and movies. They automate the complexities in simulating the physics of lights and cameras to generate high-quality renderings from samples of diverse image features and scenes. But the process of Monte Carlo rendering is slow and can take hours -- or even days -- to produce a single image, and oftentimes the results are still pixelated, or "noisy."
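The noise the paragraph describes comes from averaging random light-path samples: the estimate converges to the true pixel value only slowly as the sample count grows. A minimal sketch, using a toy one-value "scene" rather than a real path tracer:

```python
import random

def render_pixel(shade, num_samples):
    """Estimate a pixel's color by averaging random light-path samples.

    `shade` stands in for a full path tracer: it returns the radiance of
    one randomly jittered light path through the pixel.
    """
    total = 0.0
    for _ in range(num_samples):
        total += shade(random.random(), random.random())
    return total / num_samples

# Toy "scene": the true pixel value is 0.5, but each sample is all-or-nothing.
def noisy_scene(u, v):
    return 1.0 if u < 0.5 else 0.0

random.seed(0)
coarse = render_pixel(noisy_scene, 8)      # few samples -> noisy estimate
fine = render_pixel(noisy_scene, 100_000)  # many samples -> near 0.5
```

Since the error shrinks only as one over the square root of the sample count, halving the noise costs four times the samples, which is why production renders take hours.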

A global team of computer scientists from MIT, Adobe, and Aalto University has developed an innovative method for producing higher-quality images and scene designs in much less time by using a deep-learning-based approach that considerably cuts the noise in images. Their method results in sharper images that effectively capture intricate details from sample features, including complex lighting components like shadowing, indirect lighting, motion blur, and depth of field.

The researchers are set to present their work at SIGGRAPH 2019, held 28 July-1 August in Los Angeles. This annual gathering showcases the world's leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

"Our algorithm can produce clean images from noisy input images with very few samples, and could be useful for producing quick rendered previews while iterating on scene design," says study lead author Michaël Gharbi, research scientist at Adobe. Gharbi began the research as a PhD student at MIT in the lab of Frédo Durand, who also is a coauthor.

The team's work focuses on so-called "denoising," a post-processing technique to reduce image noise in Monte Carlo rendering. It essentially retains the details of an image and removes anything that detracts from its sharpness. In previous work, computer scientists have developed methods that smooth the noise out by taking the average from the pixels in a sample image and neighboring pixels.
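A minimal sketch of that pixel-averaging idea, assuming a grayscale image stored as a list of rows; the filters actually used in production are far more sophisticated, but share this structure:

```python
def box_denoise(img, radius=1):
    """Replace each pixel with the mean of its (2*radius+1)-wide neighborhood,
    clamped at the image borders -- the simple averaging strategy that
    smooths noise but also blurs genuine detail."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

noisy = [[0.0, 1.0, 0.0],
         [1.0, 0.0, 1.0],
         [0.0, 1.0, 0.0]]
smooth = box_denoise(noisy)  # center pixel becomes 4/9
```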

"This works reasonably well, and several movies have actually used this in production," notes coauthor Tzu-Mao Li, a recent PhD graduate from MIT who also studied under Durand. "However, if the images are too noisy, oftentimes the post-processing methods are not able to recover clean and sharp images. Usually users still need hundreds of samples per pixel on average for an image with reasonable quality -- a tedious, time-consuming process."

Somewhat comparable is the process of editing a photo in a graphics software program. If a user is not working from the original, raw file, altered versions of the photo will likely not result in a clear, sharp, high-res final image. A similar yet more complex problem is image denoising.

To this end, the researchers' new computational method involves working with the Monte Carlo samples directly, instead of averaged, noisy images where most information has already been lost. Unlike typical deep learning methods that deal with images or videos, the researchers demonstrate a new type of convolutional network that can learn to denoise renderings directly from the raw set of Monte Carlo samples rather than from the reduced, pixel-based representations.

A key part of their work is a novel kernel-predicting computational framework that "splats" individual samples -- colors and textures -- onto nearby pixels to sharpen the overall composition of the image. In traditional image processing, a kernel is used for blurring or sharpening. Splatting is a technique that addresses motion blur or depth-of-field issues and makes it easier to even out a pixelated area of a sample.

In this work, the team's splatting algorithm generates a 2D kernel for each sample, and "splats" the sample onto the image. "We argue that this is a more natural way to do the post-processing," says Li. The team trained their network using a random scene generator and extensively tested their method on a variety of realistic scenes, including various lighting scenarios such as indirect and direct illumination.
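A rough sketch of the splatting idea under simplifying assumptions: here every sample spreads its color through a fixed Gaussian kernel and the image is the weight-normalized sum of all splats, whereas in the paper a trained network predicts each sample's kernel:

```python
import numpy as np

def splat_samples(samples, height, width):
    """Kernel splatting sketch: each sample "splats" its color onto nearby
    pixels through its own 2D kernel; the final image normalizes by the
    accumulated kernel weight at each pixel."""
    img = np.zeros((height, width))
    weight = np.zeros((height, width))
    ys, xs = np.mgrid[0:height, 0:width]
    for x, y, color, sigma in samples:  # position, radiance, kernel width
        k = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        img += color * k
        weight += k
    return img / np.maximum(weight, 1e-8)

samples = [(1.0, 1.0, 0.2, 1.0), (3.0, 3.0, 0.8, 1.0)]
out = splat_samples(samples, 5, 5)  # smooth blend between the two colors
```

Because each sample decides where its energy lands, effects like motion blur and depth of field, where one sample legitimately influences many pixels, fall out naturally.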

"Our method gives cleaner outputs at very low sample counts, where previous methods typically struggle," adds Gharbi.

In future work, the researchers intend to address scalability with their method to extend to more sample features and explore techniques to enforce frame-to-frame smoothness of the denoised images.

The paper, "Sample-based Monte Carlo Denoising Using a Kernel-Splatting Network," is also coauthored by Miika Aittala at MIT and Jaakko Lehtinen at Aalto University and Nvidia. For more details and a video, visit the team's project page.

Credit: 
Association for Computing Machinery

New research shows dramatic increase in Ontario teens visiting an ED for self-harm

OTTAWA, June 11, 2019 - Adolescents who intentionally harm themselves by poisoning or injuring themselves are at risk for repeated self-harm or suicide. A new CHEO and uOttawa study released today in the Canadian Journal of Psychiatry shows a dramatic increase in the number of Ontario adolescents who presented to an emergency department for self-harm between 2009 and 2017. "Changing Rates of Self-Harm and Mental Disorders by Sex in Youths Presenting to Ontario Emergency Departments: Repeated Cross-Sectional Study" looked at all the emergency department visits by Ontario adolescents aged 13 to 17 for self-harm or mental health concerns from 2003 to 2017, about 170,000 visits each year. The researchers found something surprising: between 2003 and 2009, the number of adolescents with a visit to the ED for self-harm fell by about a third. During the same period, ED visits for mental health concerns stayed about the same. Starting in 2009, however, things changed. From 2009 to 2017, the rates of adolescent self-harm visits more than doubled. Likewise, the rates of visits for mental health problems rose 78%. These increases were even greater among female adolescents.

"With the increased awareness in media and more generally about self-harm and mental health disorders, we wanted to see if there were any trends among adolescent emergency department visits in Ontario about these," says Dr. William Gardner, Senior Scientist at the CHEO Research Institute, Professor of Epidemiology, uOttawa and Senior Research Chair, Child and Adolescent Psychiatry and lead author of the paper. "What we found is that yes, incidents of self-harm are increasing, and so are ED visits with mental health concerns. But I don't think any of the study team members expected such a dramatic rise from 2009 to 2017."

Data on emergency department visits were obtained from the Canadian National Ambulatory Care Reporting System database from the Canadian Institute for Health Information (CIHI), accessed through health administrative databases at ICES, which provides a secure and accessible array of Ontario's health-related data. There are approximately 840,000 adolescents aged 13 to 17 in Ontario, and their emergency department visits number 140,000 or more in any given year. Of those visits, 5.6% have a mental health diagnostic code and 0.8% are for self-harm.

The authors believe several factors warrant further research to shed light on the increases, including awareness campaigns about self-harm and mental health, such as the Bell Let's Talk campaign, and familial stress as a result of the 2008 economic crisis.

"Our data provide no specific evidence, but there were certain shifts that happened in 2009," says Gardner. "The iPhone was introduced in 2007 and the use of smart phones has increased a lot since then. Engagement with social media could lead to increased rates of self-harm, at least for vulnerable adolescents. This could happen in several ways: by normalizing self-harm, by triggering it, by getting teens to emulate self-harming peers, or by exposing youths to cyber-bullying. However, social media may also benefit some troubled adolescents. It can provide them with a way to escape social isolation or find encouragement to seek treatment."

While emergency departments are often the first contact that many families have with the mental health system, the authors stress that these departments are not an ideal setting for the delivery of mental health care to adolescents who have self-harmed or who are in crisis. Some emergency departments do not have clinicians who are trained to do mental health assessments.

"Community mental health follow-up after self-harm has been associated with reduced likelihood of repeat self-harm but evidence on whether youths can be successfully connected to mental health services from the emergency department is mixed," said Gardner. "Efforts should be made to increase the supply of and access to evidence-based treatments for adolescents who self-harm or have mental health problems. Where possible, it's important to deliver these services to youth before they need to come to an emergency department."

Gardner added, "What this study shows is that many more youths who need mental health care are showing up in Ontario's emergency departments. Unfortunately, the numbers of clinicians who can provide mental health care for adolescents haven't increased to meet the growing numbers of adolescents who need care. Sufficient treatment resources must be supplied to address increased demands for services."

Credit: 
Children's Hospital of Eastern Ontario Research Institute

Nanostructured diamond metalens for compact quantum technologies

image: By finding a certain kind of defect inside a block of diamond and fashioning a pattern of nanoscale pillars on the surface above it, the researchers can control the shape of individual photons emitted by the defect. Because those photons carry information about the spin state of an electron, such a system could be used as the basis for compact quantum technologies.

Image: 
Ann Sizemore Blevins

At the chemical level, diamonds are no more than carbon atoms aligned in a precise, three-dimensional (3D) crystal lattice. However, even a seemingly flawless diamond contains defects: spots in that lattice where a carbon atom is missing or has been replaced by something else. Some of these defects are highly desirable; they trap individual electrons that can absorb or emit light, causing the various colors found in diamond gemstones and, more importantly, creating a platform for diverse quantum technologies for advanced computing, secure communication and precision sensing.

Quantum technologies are based on units of quantum information known as "qubits." Electron spin is a prime candidate to serve as a qubit; unlike binary computing systems where data takes the form of only 0s or 1s, electron spin can represent information as 0, 1, or both simultaneously in a quantum superposition. Qubits from diamonds are of particular interest to quantum scientists because their quantum-mechanical properties, including superposition, exist at room temperature, unlike many other potential quantum resources.
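A generic illustration of superposition, not specific to NV centers: a qubit state is a normalized complex 2-vector, and measuring an equal superposition yields 0 or 1 with equal probability:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit state is a complex 2-vector whose
# squared amplitudes give the probabilities of measuring 0 and 1.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Equal superposition: the spin is "both 0 and 1" until measured.
plus = (zero + one) / np.sqrt(2)

probs = np.abs(plus) ** 2  # measurement probabilities: [0.5, 0.5]
```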

The practical challenge of collecting information from a single atom deep inside a crystal is a daunting one, however. Penn Engineers addressed this problem in a recent study in which they devised a way to pattern the surface of a diamond that makes it easier to collect light from the defects inside. Called a metalens, this surface structure contains nanoscale features that bend and focus the light emitted by the defects, despite being effectively flat.

The research was led by Lee Bassett, Assistant Professor in the Department of Electrical and Systems Engineering, graduate student Tzu-Yung Huang, and postdoctoral researcher Richard Grote from Bassett's lab.

Additional Bassett Lab members David Hopper, Annemarie Exarhos and Garrett Kaighn contributed to the work, as did Gerald Lopez, director of Business Development at the Singh Center for Nanotechnology, and two members of Amsterdam's Center for Nanophotonics, Sander Mann and Erik Garnett.

The study was published in Nature Communications.

The key to harnessing the potential power of quantum systems is being able to create or find structures that allow electron spin to be reliably manipulated and measured, a difficult task considering the fragility of quantum states.

Bassett's lab approaches this challenge from a number of directions. Recently, the lab developed a quantum platform based on a two-dimensional (2D) material called hexagonal boron nitride which, due to its extremely thin dimensions, allows for easier access to electron spins. In the current study, the team returned to a 3D material that contains natural imperfections with great potential for controlling electron spins: diamonds.

Small defects in diamonds, called nitrogen-vacancy (NV) centers, are known to harbor electron spins that can be manipulated at room temperature, unlike many other quantum systems that demand temperatures approaching absolute zero. Each NV center emits light that provides information about the spin's quantum state.

Bassett explains why it is important to consider both 2D and 3D avenues in quantum technology:

"The different material platforms are at different levels of development, and they will ultimately be useful for different applications. Defects in 2D materials are ideally suited for proximity sensing on surfaces, and they might eventually be good for other applications, such as integrated quantum photonic devices," Bassett says. "Right now, however, the diamond NV center is simply the best platform around for room-temperature quantum information processing. It is also a leading candidate for building large-scale quantum communication networks."

So far, it has only been possible to achieve the combination of desirable quantum properties that are required for these demanding applications using NV centers embedded deep within bulk 3D crystals of diamond.

Unfortunately, those deeply embedded NV centers can be difficult to access since they are not right on the surface of the diamond. Collecting light from those hard-to-reach defects usually requires a bulky optical microscope in a highly controlled laboratory environment. Bassett's team wanted to find a better way to collect light from NV centers, a goal they were able to accomplish by designing a specialized metalens that circumvents the need for a large, expensive microscope.

"We used the concept of a metasurface to design and fabricate a structure on the surface of diamond that acts like a lens to collect photons from a single qubit in diamond and direct them into an optical fiber, whereas previously this required a large, free-space optical microscope," Bassett says. "This is a first key step in our larger effort to realize compact quantum devices that do not require a room full of electronics and free-space optical components."

Metasurfaces consist of intricate, nanoscale patterns that can achieve physical phenomena otherwise impossible at the macroscale. The researchers' metalens consists of a field of pillars, each 1 micrometer tall and 100-250 nanometers in diameter, arranged in such a way that they focus light like a traditional curved lens. Etched onto the surface of the diamond and aligned with one of the NV centers inside, the metalens guides the light that represents the electron's spin state directly into an optical fiber, streamlining the data collection process.
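The pillar arrangement plays the role a curved glass surface plays in an ordinary lens. As an illustration only, the textbook hyperbolic phase profile for a flat lens shows the kind of target each pillar position must meet; the focal length and sampling below are invented, not the paper's design (the ~637 nm wavelength is the NV center's characteristic emission line):

```python
import math

def metalens_phase(r_um, focal_um, wavelength_um):
    """Textbook hyperbolic phase profile for a flat metalens: a point at
    radius r must retard the wave so that all paths reach the focus in
    phase: phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2)), wrapped to 2*pi."""
    phi = (2 * math.pi / wavelength_um) * (focal_um - math.hypot(r_um, focal_um))
    return phi % (2 * math.pi)

# Illustrative numbers: a lens sampled out to ~12.5 um radius, focusing the
# NV center's ~0.637-um emission. Each nanopillar's diameter would then be
# chosen to impart the target phase at its radius.
targets = [(r / 10, metalens_phase(r / 10, focal_um=20.0, wavelength_um=0.637))
           for r in range(0, 150, 25)]  # radii 0 to 12.5 um in 2.5-um steps
```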

"The actual metalens is about 30 microns across, which is about the diameter of a piece of hair. If you look at the piece of diamond that we fabricated it on, you can't see it. At most, you could see a dark speckle," says Huang. "We typically think of lenses as focusing or collimating, but, with a metastructure, we have the freedom to design any kind of profile that we want. It affords us the freedom to tailor the emission pattern or the profile of a quantum emitter, like an NV center, which is not possible, or is very difficult, with free-space optics."

To design their metalens, Bassett, Huang and Grote had to assemble a team with a diverse array of knowledge, from quantum mechanics to electrical engineering to nanotechnology. Bassett credits the Singh Center for Nanotechnology as playing a critical role in their ability to physically construct the metalens.

"Nanofabrication was a key component of this project," says Bassett. "We needed to achieve high-resolution lithography and precise etching to fabricate an array of diamond nanopillars on length scales smaller than the wavelength of light. Diamond is a challenging material to process, and it was Richard's dedicated work in the Singh Center that enabled this capability. We were also lucky to benefit from the experienced cleanroom staff. Gerald helped us to develop the electron beam lithography techniques. We also had help from Meredith Metzler, the Thin Film Area Manager at the Singh Center, in developing the diamond etch."

Although nanofabrication comes with its challenges, the flexibility afforded by metasurface engineering provides important advantages for real-world applications of quantum technology:

"We decided to collimate the light from NV centers to go to an optical fiber, as it readily interfaces with other techniques that have been developed for compact fiber-optic technologies over the past decade," Huang says. "The compatibility with other photonic structures is also important. There might be other structures that you want to put on the diamond, and our metalens doesn't preclude those other optical enhancements."

This study is just one of many steps towards the goal of compacting quantum technology into more efficient systems. Bassett's lab plans to continue exploring how to best harness the quantum potential of 2D and 3D materials.

"The field of quantum engineering is advancing quickly now in large part due to the convergence of ideas and expertise from many disciplines including physics, materials science, photonics and electronics," Bassett says. "Penn Engineering excels in all these areas, so we are looking forward to many more advances in the future. Ultimately, we want to transition this technology out of the lab and into the real world where it can have an impact on our everyday lives."

Credit: 
University of Pennsylvania