
Friday, September 11, 2015

Style = Correlation?

I found this paper http://arxiv.org/pdf/1508.06576v2.pdf very clear and well written. People quickly released implementations, such as https://github.com/kaishengtai/neuralart.

The major contribution of the paper seems to be the content/style decomposition of an image (as described in Equations 1, 5 and 7). The style representation (Equation 4) seems to be the key.
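For reference, my reading of that style representation: if \(F^l_{ik}\) is the activation of filter \(i\) at spatial position \(k\) in layer \(l\), then the style of a layer is captured by the Gram matrix of feature correlations,

\[ G^l_{ij} = \sum_k F^l_{ik} F^l_{jk}, \]

and the style loss compares these Gram matrices between the generated image and the painting, layer by layer.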

It seems obvious that the “content” and “style” defined in this paper are different from what we mean by common sense. The “content” is actually a mixture of both content and style: it is defined as the activations at a certain layer depth of an object-recognition ConvNet, and there is no guarantee that those neurons are not activated by “style” features. Likewise, the “style” is defined as correlations among feature activation patterns, which, for the same reason, could encompass both content and style.
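To make the “correlations” concrete, here is a minimal R sketch (my own toy illustration, not the paper's code): flatten one layer's activations into a matrix with one row per filter and one column per spatial position; the style representation is then that matrix times its own transpose.

# toy activations: 5 filters x 100 spatial positions (made-up numbers)
set.seed(1)
feat <- matrix(rnorm(5 * 100), nrow = 5)

# Gram matrix of feature correlations:
# G[i, j] = sum over positions of feat[i, ] * feat[j, ]
G <- feat %*% t(feat)
dim(G)  # 5 x 5, one entry per pair of filters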

One hypothetical (and possibly wrong) example of the “style” in my mind would be “starry (style) correlated with sky (content)”. So when you minimize the “style” loss using gradient descent, you are more likely to make areas starry wherever they look like sky. And the degree of starriness can be adjusted by the weights between content and style.

I interpret the authors' main idea as: transform an image from X (input) to Y (output) by preserving the original features while introducing new features that are highly correlated in an image Z (a famous painting), then balance the weights between original and new features. I know that in practice the authors started with a white-noise image and minimized its distances from both X's and Z's representations, but the two views are equivalent.
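In symbols (paraphrasing the paper's combined objective, with the blog's X and Z rather than the paper's notation), the generated image \(\vec{y}\) minimizes

\[ \mathcal{L}_{total}(\vec{y}) = \alpha \, \mathcal{L}_{content}(X, \vec{y}) + \beta \, \mathcal{L}_{style}(Z, \vec{y}), \]

and the ratio \(\alpha / \beta\) is the knob that balances original features against new ones.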


Potential applications of this paper's idea could be interesting too: generating camouflage from pictures of an environment, adding or removing accents in human voices, rewriting scientific papers as novels, turning novels into sci-fi novels, etc.

Wednesday, May 8, 2013

An intuitive explanation of PCA (Principal Component Analysis)

Many research papers apply PCA (Principal Component Analysis) to their data and present the results to readers without further explanation of the method. When people search the internet for a definition of PCA, they often get confused by terms like "covariance matrix", "eigenvectors" or "eigenvalues". This is not surprising, because most explanatory articles focus on the detailed calculation process rather than the basic idea of PCA. They are mathematically correct, yet often not intuitively readable at first glance.

For a mathematical method, I believe most people only need to understand its logic and limitations and let software packages do the rest (implementation, calculation, etc.). Here I am trying to show that PCA is not an impenetrable soup of acronyms but a quite intuitive approach: it makes large-scale data "smaller" and easier to handle.
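As a taste of how little code the "rest" takes, here is a minimal R example using the built-in prcomp function on R's bundled iris measurements (my choice of data, purely for illustration):

# PCA on the four numeric columns of the iris data set
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)

summary(pca)        # proportion of variance explained by each component
head(pca$x[, 1:2])  # the data projected onto the first two components

Here the first two components already capture most of the variance, which is exactly the "make large-scale data smaller" idea.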

Tuesday, March 19, 2013

How much confidence can we obtain from a piece of evidence?

Everyone knows that most computational predictions about biological systems are not very reliable, even though many methods are built on convincing evidence and report high accuracy. So why do these theoretically sound methods fail to give reliable outputs? Some people believe it is simply because existing methods are not good enough, or because biological systems are too complex to predict. That is probably true. But there is another important factor that is often ignored: the abundance of potential true positives.

I will use a simple disease-diagnosis example to show that even with an excellent prediction method based on strong evidence, we can still get poor predictions as long as the disease is rare in the population.
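As a preview of the arithmetic, here is a minimal R version with made-up numbers (99% sensitivity, 99% specificity, a prevalence of 1 in 1,000; none of these figures come from real data):

sens <- 0.99   # P(positive test | disease)
spec <- 0.99   # P(negative test | healthy)
prev <- 0.001  # disease prevalence in the population

# Bayes' rule: P(disease | positive test)
ppv <- (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
ppv  # about 0.09: over 90% of positive calls are wrong, despite a "99% accurate" test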

Wednesday, December 19, 2012

Simple Enrichment Test -- calculate hypergeometric p-values in R

The hypergeometric test is useful for enrichment analysis. For example, with a gene list in hand, people might want to know which functions (GO terms) are enriched among these genes. The hypergeometric test (or its equivalent, the one-tailed Fisher's exact test) gives you statistical confidence in the form of \(p\)-values.

R provides the functions phyper and fisher.test for the hypergeometric test and Fisher's exact test, respectively. However, it is tricky to get the parameters right. I spent some time making it clear.
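For the record, here is what I converged on, sketched with made-up counts: a universe of 20,000 genes, 100 of them annotated with some GO term, and a list of 200 genes of which 10 carry the term.

q <- 10      # genes in the list with the GO term
m <- 100     # genes in the universe with the GO term
n <- 19900   # genes in the universe without it
k <- 200     # size of the gene list

# P(overlap >= q): note the q - 1, because lower.tail = FALSE gives P(X > q)
phyper(q - 1, m, n, k, lower.tail = FALSE)

# the equivalent one-tailed Fisher's exact test on the 2x2 contingency table
tab <- matrix(c(q, k - q, m - q, n - (k - q)), nrow = 2)
fisher.test(tab, alternative = "greater")$p.value

The q - 1 is the tricky part: with lower.tail = FALSE, phyper returns \(P(X > q)\), so to include the observed overlap itself you have to step the cutoff down by one.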