I'm using this answer to find correlation coefficients greater than a given threshold, f, in a matrix (ndarray) of shape (29421, 11001) [i.e. 29,421 rows and 11,001 columns].
I've adapted the code as follows (the random bit chooses which of the two columns to remove; the lines taken from the linked answer are marked with "###"):
PROBLEM: I'm getting thousands of correlation coefficients larger than 1, which, as I understand it, shouldn't happen.
import numpy as np
import scipy.stats
from random import random, randint

rand = random()
rows = dataset_normalized.shape[0] ###
print("Rows: " + str(dataset_normalized.shape[0]) + ", Columns: " + str(dataset_normalized.shape[1]))
ms = dataset_normalized.mean(axis=1)[(slice(None, None, None), None)] ###
datam = dataset_normalized - ms ###
datass = np.sqrt(scipy.stats.ss(datam, axis=1)) ###
correlations = {}
percent_rand_one = 0
percent_rand_zero = 0
for i in range(rows): ###
    if(0 in datass[i:] or datass[i] == 0):
        continue
    else:
        temp = np.dot(datam[i:], datam[i].T) ###
        rs = temp / (datass[i:] * datass[i]) ###
        for counter, corr in enumerate(rs):
            if(corr > 1 or corr < -1):
                # ERROR IS HERE: This is printing right now,
                # a lot, so I'm not sure what's happening?
                print("Correlation of " + str(corr) + " on " + str(i) + " and " + str(counter) + ".")
                print("Something went wrong. Correlations calculated were either above 1 or below -1.")
            elif(corr > f or corr < -f):
                rand_int = randint(1, 100)
                if(rand_int > 50):
                    correlations[counter] = corr
                    percent_rand_one += 1
                else:
                    correlations[i] = corr
                    percent_rand_zero += 1
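For reference, here is the kind of sanity check I would expect to pass (a minimal sketch on a small toy array, not my real data), comparing one pair computed this way against np.corrcoef:

import numpy as np

# Toy sanity check: Pearson correlations should always land in [-1, 1],
# and np.corrcoef gives a reference value for any single pair.
rng = np.random.default_rng(0)
toy = rng.standard_normal((5, 100))           # 5 rows, 100 columns

toy_m = toy - toy.mean(axis=1)[:, None]       # mean-center each row
toy_ss = np.sqrt((toy_m ** 2).sum(axis=1))    # root of per-row sum of squares

i, j = 0, 3
manual = np.dot(toy_m[i], toy_m[j]) / (toy_ss[i] * toy_ss[j])
reference = np.corrcoef(toy[i], toy[j])[0, 1]

print(manual, reference)                      # these agree and stay within [-1, 1]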
Any advice or thoughts?
Figured it out...and this is the weirdest thing: I just needed to swap the axes. Swapping axes 0 and 1 turns each original column into a row, which is the orientation the row-wise correlation code from the linked answer expects.
import warnings

import numpy as np
import scipy.stats

# Create correlations.
dataset_normalized_switched = np.swapaxes(dataset_normalized, 0, 1)
columns = dataset_normalized_switched.shape[0] ### This is the major change...
ms = dataset_normalized_switched.mean(axis=1)[(slice(None, None, None), None)]
datam = dataset_normalized_switched - ms
datass = np.sqrt(scipy.stats.ss(datam, axis=1))
correlations = {}
for i in range(columns):
    temp = np.dot(datam[i:], datam[i].T)
    with warnings.catch_warnings():
        warnings.filterwarnings('ignore')
        rs = temp / (datass[i:] * datass[i])
    correlations[i] = [(index + i) for index, value in enumerate(rs) if (index != 0 and abs(value) < 1.1 and abs(value) > f)]
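To see why swapping the axes matters, here is a small illustration (my own sketch on toy data, not part of the code above): after np.swapaxes, the row-wise formula from the linked answer reproduces the column-wise correlation matrix, i.e. what np.corrcoef(data, rowvar=False) returns.

import numpy as np

# Toy illustration: after swapping axes, the row-wise correlation formula
# yields the correlations between the *columns* of the original array.
rng = np.random.default_rng(1)
data = rng.standard_normal((200, 4))            # rows = observations, columns = variables

switched = np.swapaxes(data, 0, 1)              # shape (4, 200): each row is one original column
ms = switched.mean(axis=1)[:, None]
datam = switched - ms
datass = np.sqrt((datam ** 2).sum(axis=1))

rs_manual = np.dot(datam, datam.T) / np.outer(datass, datass)
rs_reference = np.corrcoef(data, rowvar=False)  # column-wise correlation matrix

print(np.allclose(rs_manual, rs_reference))     # True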