Has anyone here tried to visualize a multidimensional tensor in NumPy? If so, could you share how I might go about doing this? I was thinking of reducing it to a 2D visualization.
I've included some sample output below. It's oddly structured: there are ellipses ("...") and it has a 4D tensor layout ([[[[ content here ]]]]).
Sample Data:
[[[[ -9.37186633e-05 -9.89684777e-05 -8.97786958e-05 ...,
-1.08984910e-04 -1.07056971e-04 -8.68257193e-05]
[[ -9.61350961e-05 -8.75062251e-05 -9.39425736e-05 ...,
-1.17737654e-04 -9.66376538e-05 -8.78447026e-05]
[ -1.06558400e-04 -9.04031331e-05 -1.04479543e-04 ...,
-1.02786013e-04 -1.07974607e-04 -1.07524407e-04]]
[[[ -1.09648725e-04 -1.01073667e-04 -9.39013553e-05 ...,
-8.94383265e-05 -9.06078858e-05 -9.83356076e-05]
[ -9.76310257e-05 -1.04029998e-04 -1.01905476e-04 ...,
-9.50643880e-05 -8.29156561e-05 -9.75912480e-05]]]
[ -1.12038200e-04 -1.00154917e-04 -9.00980813e-05 ...,
-1.10244124e-04 -1.16597665e-04 -1.10604939e-04]]]]
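If you just want a quick 2D picture of the raw values, one option (a minimal sketch, assuming your data is a NumPy ndarray; the name `a` and its shape are illustrative) is to collapse the 4D array into a 2D matrix and render it with imshow:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative 4D tensor, e.g. shape (batch, channels, height, width).
a = np.random.rand(2, 3, 4, 5)

# Collapse the first two axes into rows and the last two into columns,
# giving a (2*3) x (4*5) matrix of the raw values.
flat = a.reshape(a.shape[0] * a.shape[1], a.shape[2] * a.shape[3])

plt.imshow(flat, cmap='viridis', aspect='auto')
plt.colorbar()
plt.show()
```

This shows every value as a colored cell, which works for small tensors; for large, high-dimensional data a dimensionality-reduction technique such as t-SNE or PCA is usually more informative.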
For plotting high-dimensional data there is a technique called t-SNE.
t-SNE is provided by TensorFlow as a TensorBoard feature (the Embedding Projector).
You can just provide the tensor as an embedding and run TensorBoard.
You can visualize high-dimensional data in either 3D or 2D.
Here is a link for data visualization using TensorBoard: https://github.com/jayshah19949596/Tensorboard-Visualization-Freezing-Graph
Your code should be something like this (assuming TensorFlow 1.x, with the MNIST dataset loaded as `mnist` and `metadata` and `logs_path` being paths you define):

import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

tensor_x = tf.Variable(mnist.test.images, name='images')

config = projector.ProjectorConfig()
# One can add multiple embeddings.
embedding = config.embeddings.add()
embedding.tensor_name = tensor_x.name
# Link this tensor to its metadata file (e.g. labels).
embedding.metadata_path = metadata
# Saves a config file that TensorBoard will read during startup.
projector.visualize_embeddings(tf.summary.FileWriter(logs_path), config)
You can also use scikit-learn's TSNE to plot high-dimensional data.
Below is sample code using scikit-learn's TSNE:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.manifold import TSNE

# x is my data, which is an nd-array
# You have to convert your tensor to an nd-array before using scikit-learn's TSNE
# Convert your tensor to x =====> x = tf.Session().run(tensor_x)
standard = StandardScaler()
x_std = standard.fit_transform(x)

# y holds the label for each sample
label_encoder = LabelEncoder()
y = label_encoder.fit_transform(y)

tsne = TSNE(n_components=2, random_state=0)  # n_components=2 means project the data down to 2D
x_test_2d = tsne.fit_transform(x_std)

markers = ('s', 'd', 'o', '^', 'v', '8', 's', 'p', '_', '2')
color_map = {0: 'red', 1: 'blue', 2: 'lightgreen', 3: 'purple', 4: 'cyan',
             5: 'black', 6: 'yellow', 7: 'magenta', 8: 'plum', 9: 'yellowgreen'}

plt.figure()
for idx, cl in enumerate(np.unique(y)):
    plt.scatter(x=x_test_2d[y == cl, 0], y=x_test_2d[y == cl, 1],
                c=color_map[idx], marker=markers[idx], label=cl)
plt.xlabel('X in t-SNE')
plt.ylabel('Y in t-SNE')
plt.legend(loc='upper left')
plt.title('t-SNE visualization of test data')
plt.show()
You can also use PCA for plotting high-dimensional data in 2D.
Here is scikit-learn's PCA documentation: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
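A minimal PCA sketch (assuming, as above, that your data is an nd-array `x` with integer labels `y`; the random data here is purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Illustrative data: 200 samples with 64 features each, three classes.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)

# Project the 64-dimensional data down to 2 principal components.
pca = PCA(n_components=2)
x_2d = pca.fit_transform(x)

plt.figure()
for cl in np.unique(y):
    plt.scatter(x_2d[y == cl, 0], x_2d[y == cl, 1], label=cl)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.legend(loc='upper left')
plt.title('PCA visualization')
plt.show()
```

PCA is much faster than t-SNE on large data, so it's a good first pass before trying t-SNE.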