python, machine-learning, scikit-learn, pca, dimensionality-reduction

Plot PCA loadings and loading labels in a biplot in sklearn (like R's autoplot)


I saw this tutorial in R w/ autoplot. They plotted the loadings and loading labels:

autoplot(prcomp(df), data = iris, colour = 'Species',
         loadings = TRUE, loadings.colour = 'blue',
         loadings.label = TRUE, loadings.label.size = 3)

(example biplot from the ggfortify vignette: https://cran.r-project.org/web/packages/ggfortify/vignettes/plot_pca.html)

I prefer Python 3 w/ matplotlib, scikit-learn, and pandas for my data analysis, but I don't know how to add the loading vectors and their labels to my plot.

How can you plot these vectors w/ matplotlib?

I've been reading "Recovering features names of explained_variance_ratio_ in PCA with scikit-learn" but haven't figured it out yet.

Here's how I plot it in Python:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn import decomposition
import seaborn as sns; sns.set_style("whitegrid", {'axes.grid' : False})

%matplotlib inline
np.random.seed(0)

# Iris dataset (load once, reuse for both data and targets)
iris = load_iris()
obs_index = ["iris_%d" % i for i in range(iris.data.shape[0])]

DF_data = pd.DataFrame(iris.data, 
                       index = obs_index,
                       columns = iris.feature_names)

Se_targets = pd.Series(iris.target, 
                       index = obs_index, 
                       name = "Species")

# Standardize: mean = 0, var = 1
DF_standard = pd.DataFrame(StandardScaler().fit_transform(DF_data), 
                           index = DF_data.index,
                           columns = DF_data.columns)

# scikit-learn Principal Component Analysis
# Dimensions: m original features, keep the first K components
m = DF_standard.shape[1]
K = 2

# PCA (How I tend to set it up)
Mod_PCA = decomposition.PCA(n_components=m)
DF_PCA = pd.DataFrame(Mod_PCA.fit_transform(DF_standard), 
                      columns=["PC%d" % k for k in range(1,m + 1)]).iloc[:,:K]
# Color classes
color_list = [{0:"r",1:"g",2:"b"}[x] for x in Se_targets]

fig, ax = plt.subplots()
ax.scatter(x=DF_PCA["PC1"], y=DF_PCA["PC2"], color=color_list)

(resulting scatter plot of PC1 vs. PC2, colored by species, with no loading vectors yet)


Solution

  • Try the pca library. It works well with pandas objects (without requiring them).

    First install the package:

    pip install pca
    

    The following will plot the explained variance, a scatter plot, and a biplot.

    from pca import pca
    import pandas as pd
    
    ###########################################################
    # SETUP DATA
    ###########################################################
    # Load sample data, represent the data as a pd.DataFrame
    from sklearn.datasets import load_iris
    iris = load_iris()
    X = pd.DataFrame(data=iris.data, 
                     columns=iris.feature_names)
    # Rename to shorter snake_case column names
    X.columns = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
    y = pd.Categorical.from_codes(iris.target,
                                  iris.target_names)
    
    ###########################################################
    # COMPUTE AND VISUALIZE PCA
    ###########################################################
    # Initialize the PCA, either reduce the data to the number of
    # principal components that explain 95% of the total variance...
    model = pca(n_components=0.95)
    # ... or explicitly specify the number of PCs (this second call
    # replaces the model created above; keep whichever you need)
    model = pca(n_components=2)
    
    # Fit and transform
    results = model.fit_transform(X=X, row_labels=y)
    
    # Plot the explained variance
    fig, ax = model.plot()
    
    # Scatter the first two PCs
    fig, ax = model.scatter()
    
    # Create a biplot
    fig, ax = model.biplot(n_feat=4)
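
    The returned results object is a plain dictionary. As a usage note
    (key names as in recent versions of the pca package; inspect
    results.keys() if yours differ), the projected samples and the
    loading matrix are available directly:

    # Key names may vary between pca versions; check results.keys()
    print(results['PC'].head())     # samples projected onto the PCs
    print(results['loadings'])      # loading matrix relating features to PCs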
    

    The standard biplot shows the samples in the first two PCs with an arrow and label for each feature's loading, much like R's autoplot output above.
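
    If you'd rather stay with plain matplotlib and scikit-learn as in the
    question, the loading vectors are the rows of Mod_PCA.components_ (one
    row per principal component). Below is a minimal sketch that reuses the
    objects from the question's code (Mod_PCA, DF_PCA, DF_standard,
    color_list); the arrow scale factor is an arbitrary stylistic choice,
    not part of any API:

    fig, ax = plt.subplots()
    ax.scatter(DF_PCA["PC1"], DF_PCA["PC2"], color=color_list, alpha=0.5)

    # One arrow per original feature: its x/y components are the feature's
    # weights on PC1 and PC2, stretched so they are visible on the same
    # scale as the scores.
    scale = 3
    for i, feature in enumerate(DF_standard.columns):
        dx = Mod_PCA.components_[0, i] * scale
        dy = Mod_PCA.components_[1, i] * scale
        ax.arrow(0, 0, dx, dy, color="blue", width=0.01, head_width=0.1)
        ax.text(dx * 1.15, dy * 1.15, feature, color="blue",
                ha="center", va="center", fontsize=8)

    ax.set_xlabel("PC1")
    ax.set_ylabel("PC2")
    plt.show()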