Refs:

http://www.ce.yildiz.edu.tr/personal/songul/file/1097/principal_components.pdf

https://www.cs.princeton.edu/picasso/mats/PCA-Tutorial-Intuition_jp.pdf

http://horicky.blogspot.pt/2009/11/principal-component-analysis.html

This tutorial shows the steps necessary to perform dimensionality reduction with Principal Component Analysis (PCA).

From Wikipedia:

> Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.

PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. In this sense, PCA computes the most meaningful *basis* to express our data. Remember that a basis is a set of linearly independent vectors, that, in a linear combination, can represent every vector (they form a coordinate system).

One important fact: *PCA returns a new basis which is a linear combination of the original basis*. This limits the set of possible bases PCA can find.

So, if \(X\) is the original dataset, \(Y\) is the transformed dataset (both with size \(m\times n\)), and \(P\) is the linear transformation (\(m\times m\))

\[PX = Y\]

\(P\) can be seen as the matrix that transforms \(X\) in \(Y\), or as the geometrical transformation (rotation + stretch) that transforms \(X\) in \(Y\). The rows of \(P\) are the set of vectors that define the new basis for expressing the columns of \(X\). These row vectors, if properly defined, are the *principal components* of \(X\). For our datasets, a row of \(X\) is the set of measurements of a particular type, while a column of \(X\) are the set of measurements of a single observation.
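As a minimal sketch of \(PX = Y\): here \(P\) is a 45° rotation matrix (a hypothetical choice, just for illustration, not one computed by PCA), applied to a small dataset whose columns are 2D observations:

```
# a hypothetical 45-degree rotation used as the change-of-basis matrix P
theta <- pi/4
P <- matrix(c(cos(theta), -sin(theta),
              sin(theta),  cos(theta)), nrow=2, byrow=TRUE)
X <- matrix(c(1,0, 0,1, 1,1), nrow=2)  # three 2D observations, one per column
Y <- P %*% X                           # the same data expressed in the new basis
Y
```

Since the rows of \(P\) are orthonormal, \(P^T P = I\) and the transformation is invertible: `t(P) %*% Y` recovers `X`.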

Among all the possible new bases, PCA chooses the one that reduces the redundancy of the data, i.e., the one where the covariance between variables is as small as possible. That means a covariance matrix as near to a diagonal matrix as possible (all off-diagonal values as close to zero as possible).

For PCA, the basis vector with the largest variance is the most principal (the one that explains more variance from the dataset). This basis vector will be the first row of \(P\). The resulting ordered rows of \(P\) are the principal components.

The assumptions of PCA:

+ Linearity: the new basis is a linear combination of the original basis
+ Mean and variance are sufficient statistics: PCA assumes that these statistics totally describe the distribution of the data along the axis (i.e., the normal distribution).
+ Large variances have important dynamics: high variance means signal, low variance means noise. This means that PCA implies that the dynamics has a high SNR (signal-to-noise ratio).
+ The components are orthonormal

If any of these assumptions does not hold, PCA might produce poor results.

Algebra details aside, we choose \(P\) to be **the matrix where each row is an eigenvector of \(XX^T\)**.

The covariance matrix of \(X\) (with each variable mean-centered) is given by \(\frac{1}{n-1}XX^T\).
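A quick sanity check of this formula: for row-centered data (rows are variables, columns are observations), \(\frac{1}{n-1}XX^T\) matches R's built-in `cov()` (which expects observations in rows, hence the transpose):

```
# check that (1/(n-1)) X X^T equals cov() for row-centered data
set.seed(101)                       # arbitrary seed, just for reproducibility
X <- matrix(rnorm(3*10), nrow=3)    # 3 variables, 10 observations
X <- X - rowMeans(X)                # center each variable (row)
n <- ncol(X)
manual.cov <- (X %*% t(X)) / (n - 1)
all.equal(manual.cov, cov(t(X)))    # TRUE
```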

Let’s go through the steps necessary to compute the PCA of a given dataset:

```
library(stats) # use: cov()
# get some data:
x <- c(2.5,.5,2.2,1.9,3.1,2.3,2,1,1.5,1.1)
y <- c(2.4,0.7,2.9,2.2,3.0,2.7,1.6,1.1,1.6,.9)
plot(x,y,xlim=c(-1,4),ylim=c(-1,4)); abline(h=0,v=0,lty=3)
```

Each data sample (herein, the pairs \((x,y)\)) is an \(n\)-dimensional vector (herein, \(n=2\)) in an orthonormal basis (so the axes are perpendicular, which happens in this example since we are using the usual x,y cartesian axes).

For PCA to work properly, we must subtract the mean of each dimension; this produces a data set whose mean is zero:

```
x1 <- x - mean(x)
y1 <- y - mean(y)
plot(x1,y1); abline(h=0,v=0,lty=3)
```

The next step is to compute the covariance matrix (aka, dispersion matrix), i.e., a matrix whose element in the (i,j) position is the covariance between the ith and jth elements of a random vector (that is, of a vector of random variables).

```
m <- matrix(c(x1,y1),ncol=2) # make a matrix of the given data
m
```

```
## [,1] [,2]
## [1,] 0.69 0.49
## [2,] -1.31 -1.21
## [3,] 0.39 0.99
## [4,] 0.09 0.29
## [5,] 1.29 1.09
## [6,] 0.49 0.79
## [7,] 0.19 -0.31
## [8,] -0.81 -0.81
## [9,] -0.31 -0.31
## [10,] -0.71 -1.01
```

```
cov.m <- cov(m)
cov.m # notice that the non-diagonal values are both positive, ie, x&y increase together
```

```
## [,1] [,2]
## [1,] 0.6165556 0.6154444
## [2,] 0.6154444 0.7165556
```

Then we find the eigenvectors & eigenvalues of the covariance matrix. These will be the new basis vectors:

```
cov.eig <- eigen(cov.m)
cov.eig
```

```
## $values
## [1] 1.2840277 0.0490834
##
## $vectors
## [,1] [,2]
## [1,] 0.6778734 -0.7351787
## [2,] 0.7351787 0.6778734
```

```
cov.eig$vectors[,1] %*% cov.eig$vectors[,2] # should equal zero, since they are orthogonal
```

```
## [,1]
## [1,] 0
```
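This is no accident: for a symmetric matrix (and every covariance matrix is symmetric), `eigen()` returns an orthonormal set of eigenvectors, so \(V^T V = I\). A small check using the covariance matrix computed above (values copied from the output, so it is self-contained):

```
# eigenvectors of a symmetric matrix form an orthonormal set: V^T V = I
S <- matrix(c(0.6165556, 0.6154444,
              0.6154444, 0.7165556), nrow=2)  # the covariance matrix above
V <- eigen(S)$vectors
round(t(V) %*% V, 10)                         # the 2x2 identity matrix
```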

```
# let's plot these eigenvectors onto the data to present the new basis
plot(x1,y1); abline(h=0,v=0,lty=3)
abline(a=0,b=(cov.eig$vectors[1,1]/cov.eig$vectors[2,1]),col="red")
abline(a=0,b=(cov.eig$vectors[1,2]/cov.eig$vectors[2,2]),col="green")
```

The first eigenvector (the red line) seems like a linear fit, showing us how it is related to the data; the second eigenvector does not seem that related to the data.

If we look at the eigenvalues, the first is much larger than the second: the highest eigenvalue identifies the dataset’s principal component.

Once the eigenvectors are found, we should order them decreasingly by their eigenvalues. This gives us the components in order of significance! We can decide to ignore the components with less significance: we will lose information, but not that much if their eigenvalues are small.

So we start with a dataset of \(n\) dimensions, choose \(p\) components and get a new dataset with \(p\) dimensions representing the original dataset. The feature vector is the matrix of the eigenvectors we choose to keep.

This process of removing the less important axes can help reveal hidden, simplified dynamics in high-dimensional data. This process is called *dimensionality reduction*.
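How much information do we lose? Each eigenvalue is the variance of the data along its eigenvector, so the proportion of variance a component explains is its eigenvalue divided by the sum of all eigenvalues. A self-contained sketch with the dataset above:

```
# proportion of variance explained by each component, from the eigenvalues
x <- c(2.5,.5,2.2,1.9,3.1,2.3,2,1,1.5,1.1)
y <- c(2.4,0.7,2.9,2.2,3.0,2.7,1.6,1.1,1.6,.9)
ev <- eigen(cov(cbind(x, y)))$values  # eigen() already sorts them decreasingly
round(ev / sum(ev), 4)                # the first component explains ~96% of the variance
```

So dropping the second component here loses only about 4% of the total variance.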

In our 2D example we have just two options: (1) keep only the first component, or (2) keep both. That is:

```
f.vector1 <- as.matrix(cov.eig$vectors[,1],ncol=1) # feature vector with just one component
f.vector1
```

```
## [,1]
## [1,] 0.6778734
## [2,] 0.7351787
```

```
f.vector2 <- as.matrix(cov.eig$vectors[,c(1,2)],ncol=2) # feature vector with both components
f.vector2
```

```
## [,1] [,2]
## [1,] 0.6778734 -0.7351787
## [2,] 0.7351787 0.6778734
```

With our feature vector we can derive the new transformed dataset.

If \(M\) is the original dataset and \(F\) is the feature vector, then the transpose of the new dataset is given by \(F^T \times M^T\):

```
final1 <- t(f.vector1) %*% t(m) # new dataset for feature vector 1
final1
```

```
## [,1] [,2] [,3] [,4] [,5] [,6] [,7]
## [1,] 0.8279702 -1.77758 0.9921975 0.2742104 1.675801 0.9129491 -0.09910944
## [,8] [,9] [,10]
## [1,] -1.144572 -0.4380461 -1.223821
```

```
final2 <- t(f.vector2) %*% t(m) # new dataset for feature vector 2
final2
```

```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.8279702 -1.7775803 0.9921975 0.2742104 1.6758014 0.9129491
## [2,] -0.1751153 0.1428572 0.3843750 0.1304172 -0.2094985 0.1752824
## [,7] [,8] [,9] [,10]
## [1,] -0.09910944 -1.14457216 -0.43804614 -1.2238206
## [2,] -0.34982470 0.04641726 0.01776463 -0.1626753
```

```
# After the transformation, the data is decorrelated: the covariance between the variables is zero:
cov(t(final2))
```

```
## [,1] [,2]
## [1,] 1.284028e+00 7.391247e-17
## [2,] 7.391247e-17 4.908340e-02
```

These final datasets are the original data expressed in terms of the vectors we chose, i.e., they are no longer over the x,y axes, but use the chosen eigenvectors as their new axes.

```
# final1 as 1 dimension
t(final1)
```

```
## [,1]
## [1,] 0.82797019
## [2,] -1.77758033
## [3,] 0.99219749
## [4,] 0.27421042
## [5,] 1.67580142
## [6,] 0.91294910
## [7,] -0.09910944
## [8,] -1.14457216
## [9,] -0.43804614
## [10,] -1.22382056
```

```
# final2 as 2 dimensions, we can plot it:
plot(final2[1,],final2[2,],ylim=c(-2,2));abline(h=0,v=0,lty=3)
```

We can optionally recover the original data: exactly, if we have chosen all components, or an approximation otherwise.

To do that, if \(M'\) is the final dataset and \(F\) is the feature vector, then the initial dataset is given by \((F \times M')^T\):

```
# if we keep all eigenvectors, we can recover it by 100% (like in final2)
original.dataset2 <- t(f.vector2 %*% final2)
original.dataset2[,1] <- original.dataset2[,1] + mean(x) # re-add means
original.dataset2[,2] <- original.dataset2[,2] + mean(y)
original.dataset2
```

```
## [,1] [,2]
## [1,] 2.5 2.4
## [2,] 0.5 0.7
## [3,] 2.2 2.9
## [4,] 1.9 2.2
## [5,] 3.1 3.0
## [6,] 2.3 2.7
## [7,] 2.0 1.6
## [8,] 1.0 1.1
## [9,] 1.5 1.6
## [10,] 1.1 0.9
```

```
plot(original.dataset2[,1],original.dataset2[,2],xlim=c(-1,4),ylim=c(-1,4))
abline(h=0,v=0,lty=3)
```

```
# if we keep just some eigenvector (like final1), we do the same but cannot
# expect the original information, just some degraded version:
original.dataset1 <- t(f.vector1 %*% final1)
original.dataset1[,1] <- original.dataset1[,1] + mean(x) # re-add means
original.dataset1[,2] <- original.dataset1[,2] + mean(y)
original.dataset1
```

```
## [,1] [,2]
## [1,] 2.3712590 2.5187060
## [2,] 0.6050256 0.6031609
## [3,] 2.4825843 2.6394424
## [4,] 1.9958799 2.1115936
## [5,] 2.9459812 3.1420134
## [6,] 2.4288639 2.5811807
## [7,] 1.7428163 1.8371369
## [8,] 1.0341250 1.0685350
## [9,] 1.5130602 1.5879578
## [10,] 0.9804046 1.0102732
```

```
plot(original.dataset1[,1],original.dataset1[,2],xlim=c(-1,4),ylim=c(-1,4))
abline(h=0,v=0,lty=3)
```

Notice that in the approximation (`final1`) the variation over the 2nd eigenvector is gone, as expected (since it was previously erased).

Singular Value Decomposition (SVD) also solves PCA. For a matrix \(M = U\times D \times V^T\), the principal components of \(M\) are given by the columns of the right singular vectors \(V\):

```
svd.m <- svd(scale(m))
svd.m$v
```

```
## [,1] [,2]
## [1,] -0.7071068 0.7071068
## [2,] -0.7071068 -0.7071068
```

```
pca.m <- prcomp(m,scale=TRUE)
pca.m$rotation
```

```
## PC1 PC2
## [1,] -0.7071068 0.7071068
## [2,] -0.7071068 -0.7071068
```
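The link is numerical as well: `prcomp()` uses the SVD internally, and the singular values \(d\) of the centered (here also scaled) data relate to the PCA standard deviations by \(\text{sdev} = d/\sqrt{n-1}\). A self-contained check with the same dataset:

```
# singular values vs. prcomp() standard deviations: sdev = d / sqrt(n-1)
x <- c(2.5,.5,2.2,1.9,3.1,2.3,2,1,1.5,1.1)
y <- c(2.4,0.7,2.9,2.2,3.0,2.7,1.6,1.1,1.6,.9)
m <- cbind(x, y)
sv <- svd(scale(m))$d / sqrt(nrow(m) - 1)
pc <- prcomp(m, scale=TRUE)$sdev
all.equal(sv, pc)  # TRUE
```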

Library `stats` includes the function `prcomp()` to perform PCA directly:

```
library(stats) # use: prcomp()
df = data.frame(x=x, y=y)
df
```

```
## x y
## 1 2.5 2.4
## 2 0.5 0.7
## 3 2.2 2.9
## 4 1.9 2.2
## 5 3.1 3.0
## 6 2.3 2.7
## 7 2.0 1.6
## 8 1.0 1.1
## 9 1.5 1.6
## 10 1.1 0.9
```

```
# prcomp() does the mean centering (option center=TRUE)
# it can also scale the variables so that all have unit variance (scale=TRUE).
# Scaling is necessary if the data has different units (it then uses the
# correlation matrix). In this case the units are the same, and we would like
# to get the same results as above (using the covariance matrix):
pca.eg <- prcomp(df, scale=FALSE)
pca.eg # check the rotation attributes are equal to cov.eig above (except for the minus sign which is irrelevant)
```

```
## Standard deviations:
## [1] 1.1331495 0.2215477
##
## Rotation:
## PC1 PC2
## x -0.6778734 0.7351787
## y -0.7351787 -0.6778734
```

```
plot(x1,y1); abline(h=0,v=0,lty=3)
abline(a=0,b=(pca.eg$rotation[1,1]/pca.eg$rotation[2,1]),col="red")
abline(a=0,b=(pca.eg$rotation[1,2]/pca.eg$rotation[2,2]),col="green")
```

```
summary(pca.eg)
```

```
## Importance of components:
## PC1 PC2
## Standard deviation 1.1331 0.22155
## Proportion of Variance 0.9632 0.03682
## Cumulative Proportion 0.9632 1.00000
```

```
par(mfrow=c(1,2))
plot(pca.eg)
biplot(pca.eg) # samples are displayed as points, variables are displayed as vectors
```

```
par(mfrow=c(1,1))
# argument 'tol' receives a value indicating the magnitude below which
# components should be omitted (components are omitted if their standard
# deviations are less than or equal to tol times the standard deviation
# of the first component)
prcomp(df, scale=TRUE, tol=.2)
```

```
## Standard deviations:
## [1] 1.387779
##
## Rotation:
## PC1
## x -0.7071068
## y -0.7071068
```

```
# to install:
# source("https://bioconductor.org/biocLite.R")
# biocLite("EBImage")
library("EBImage")
library("stats")
pic <- Image(flip(readImage("pansy.jpg")))
red.weight <- .2989; green.weight <- .587; blue.weight <- 0.114
m <- red.weight * imageData(pic)[,,1] + green.weight * imageData(pic)[,,2] + blue.weight * imageData(pic)[,,3]
image(m, col = grey(seq(0, 1, length = 256)))
```