I have a data set. The first 10 columns are my features (one, two, ..., ten) and the last column is my target; there are only two classes, MID and HIGH. The data is saved in a text file (data.txt) like this:
200000,400000,5000000,100000,5000000,50000,50000,300000,3333,1333,MID
200000,100000,500000,100000,5000000,5000,50000,300000,2000,1333,MID
100000,400000,5000000,100000,5000000,5000,50000,300000,2000,3333,MID
400000,200000,50000000,100000,5000000,5000,50000,300000,3333,3333,MID
200000,200000,5000000,100000,5000000,5000,50000,300000,3333,1333,HIGH
200000,100000,500000,10000000,5000000,50000,50000,300000,3333,3333,HIGH
100000,200000,500000,100000,5000000,50000,50000,300000,3333,666,HIGH
200000,100000,500000,1000000,5000000,50000,50000,300000,3333,666,HIGH
200000,100000,5000000,1000000,5000000,50000,5000,300000,3333,1333,HIGH
I have tried to implement an LDA analysis based on the available tutorials. I also used StandardScaler for normalization because the units of columns nine and ten differ from those of the first eight columns. Here is what I tried:
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import math
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler
df = pd.read_csv('data.txt', header=None)
df.columns=['one','two','three','four','five','six','seven','eight','nine','ten','class']
X = df.iloc[:, 0:10].values   # first ten columns are the features
y = df.iloc[:, 10].values     # last column is the class label
X_std = StandardScaler().fit_transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit_transform(X_std,y)
with plt.style.context('seaborn-whitegrid'):
    plt.figure(figsize=(8, 6))
    for lab, col in zip(('MID', 'HIGH'),
                        ('blue', 'red')):
        plt.scatter(X_r2[y == lab, 0],
                    X_r2[y == lab, 1],
                    label=lab, s=100,
                    c=col)
    plt.xlabel('LDA 1')
    plt.ylabel('LDA 2')
    plt.legend(loc='lower right')
    plt.tight_layout()
    plt.savefig('Results.png', format='png', dpi=1200)
    plt.show()
I am getting this error:
line 32, in <module>
    X_r2[y==lab, 1],
IndexError: index 1 is out of bounds for axis 1 with size 1
Does anybody know how I can fix this problem? Thanks in advance for your help.
When your target variable has only two unique values, LDA can produce only one component, even if you ask for n_components=2.
From the documentation:
n_components : int, optional
    Number of components (<= min(n_classes - 1, n_features)) for dimensionality reduction.
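With only MID and HIGH in the target, the projection therefore has a single column, so X_r2[y==lab, 1] asks for a second column that does not exist, which is exactly the IndexError you see. A quick way to confirm this (a minimal sketch, assuming the same data.txt and column layout as in your question):
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv('data.txt', header=None)
X_std = StandardScaler().fit_transform(df.iloc[:, 0:10].values)
y = df.iloc[:, 10].values

# With n_components left unset, LDA uses min(n_classes - 1, n_features) components
lda = LinearDiscriminantAnalysis()
X_r = lda.fit_transform(X_std, y)
print(X_r.shape)  # (n_samples, 1): only a single discriminant axis for two classes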
Hence, if you add a row with a third class to your dataset, for example:
200000,400000,5000000,100000,5000000,50000,50000,300000,3333,1333,LOW
and update the code for one more category in y:
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from matplotlib import pyplot as plt
df = pd.read_csv('data.txt', header=None)
df.columns = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten', 'class']
X = df.iloc[:, 0:10].values
y = df.iloc[:, 10].values
X_std = StandardScaler().fit_transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit_transform(X_std,y)
with plt.style.context('seaborn-whitegrid'):
    plt.figure(figsize=(8, 6))
    for lab, col in zip(('MID', 'HIGH', 'LOW'),
                        ('blue', 'red', 'green')):
        plt.scatter(X_r2[y == lab, 0],
                    X_r2[y == lab, 1],
                    label=lab, s=100,
                    c=col)
    plt.xlabel('LDA 1')
    plt.ylabel('LDA 2')
    plt.legend(loc='lower right')
    plt.tight_layout()
    plt.savefig('Results.png', format='png', dpi=1200)
    plt.show()
This would generate the following plot:
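Alternatively, if you want to keep only the two original classes rather than adding a LOW row, set n_components=1 and plot the single discriminant axis in one dimension, for example as a strip plot per class. A sketch under the same assumptions about data.txt:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv('data.txt', header=None)
X_std = StandardScaler().fit_transform(df.iloc[:, 0:10].values)
y = df.iloc[:, 10].values

# Two classes allow at most one discriminant component
lda = LinearDiscriminantAnalysis(n_components=1)
X_r1 = lda.fit_transform(X_std, y)

plt.figure(figsize=(8, 3))
for offset, (lab, col) in enumerate(zip(('MID', 'HIGH'), ('blue', 'red'))):
    mask = (y == lab)
    # x-axis: the single LDA component; y-axis: a fixed row per class for readability
    plt.scatter(X_r1[mask, 0], np.full(mask.sum(), offset), label=lab, s=100, c=col)
plt.yticks([0, 1], ['MID', 'HIGH'])
plt.xlabel('LDA 1')
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()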