Tags: python, gensim, word2vec, tsne

How to plot a t-SNE projection of a word2vec model (created with gensim) for the 20 most_similar words?


I am using TSNE to plot a trained word2vec model (created from gensim):

from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

labels = []
tokens = []

for word in model.wv.vocab:
    tokens.append(model[word])
    labels.append(word)

tsne_model = TSNE(perplexity=40, n_components=2, init='pca', n_iter=2500, random_state=23)
new_values = tsne_model.fit_transform(tokens)

x = []
y = []
for value in new_values:
    x.append(value[0])
    y.append(value[1])
    
plt.figure(figsize=(50, 50)) 
for i in range(len(x)):
    plt.scatter(x[i],y[i])
    plt.annotate(labels[i],
                 xy=(x[i], y[i]),
                 xytext=(5, 2),
                 textcoords='offset points',
                 ha='right',
                 va='bottom')
plt.show()

Just as the built-in gensim method most_similar, e.g.

w2v_model.wv.most_similar(positive=['word'], topn=20)

outputs the 20 words most similar to 'word', I would like to plot only the most similar words (n=20) of a given word. Any advice on how to modify the plot to do that?


Solution

  • Using an example from the package:

    from gensim.test.utils import common_texts
    from gensim.models import Word2Vec
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt
    
    model = Word2Vec(sentences=common_texts, window=5, min_count=1)
    
    # pre-4.0 gensim API: the vocabulary is in model.wv.vocab and vectors are looked up via model[...]
    labels = list(model.wv.vocab.keys())
    tokens = model[labels]
    
    # common_texts has only 12 distinct words, so keep perplexity below the number of points
    tsne_model = TSNE(n_components=2, perplexity=5, init='pca', learning_rate='auto')
    new_values = tsne_model.fit_transform(tokens)
    x, y = new_values[:, 0], new_values[:, 1]
    

    Plotting the full vocabulary in the t-SNE projection will look something like this:

    plt.figure(figsize=(7, 5))
    for i in range(new_values.shape[0]):
        plt.scatter(x[i], y[i])
        plt.annotate(labels[i],
                     xy=(x[i], y[i]),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.show()
    

    [scatter plot of all vocabulary words in the t-SNE projection]

    Extract the words most similar to 'trees' (topn=5 in my case):

    most_sim_words = [i[0] for i in model.wv.most_similar(positive='trees', topn=5)]
    most_sim_words
    ['human', 'graph', 'time', 'interface', 'system']
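
    If you also want the query word 'trees' itself to appear on the plot (an optional tweak, not part of the original answer), you could prepend it and use words_to_plot in place of most_sim_words in the loop below:

    words_to_plot = ['trees'] + most_sim_words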
    

    You can use the code you already have, just iterating over the most similar words and using index() to look up each word's position in labels:

    plt.figure(figsize=(7, 5))
    for word in most_sim_words:
        i = labels.index(word)
        plt.scatter(x[i], y[i])
        plt.annotate(labels[i],
                     xy=(x[i], y[i]),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.show()
    

    [scatter plot showing only the 5 most similar words]
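
  • If you are on gensim >= 4.0, model.wv.vocab and model[word] no longer exist (the vocabulary moved to wv.index_to_key / wv.key_to_index and vectors are looked up via wv[...]). A minimal sketch of the same workflow with the 4.x API, assuming a scikit-learn version that supports learning_rate='auto':

    from gensim.test.utils import common_texts
    from gensim.models import Word2Vec
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt
    
    model = Word2Vec(sentences=common_texts, vector_size=100, window=5, min_count=1)
    
    # gensim >= 4.0: vocabulary order is wv.index_to_key, vectors are indexed via wv[...]
    labels = list(model.wv.index_to_key)
    tokens = model.wv[labels]
    
    # keep perplexity below the 12-word vocabulary of common_texts
    tsne_model = TSNE(n_components=2, perplexity=5, init='pca', learning_rate='auto', random_state=23)
    new_values = tsne_model.fit_transform(tokens)
    x, y = new_values[:, 0], new_values[:, 1]
    
    # pick the 5 words most similar to 'trees' and plot only those
    most_sim_words = [w for w, _ in model.wv.most_similar(positive='trees', topn=5)]
    
    plt.figure(figsize=(7, 5))
    for word in most_sim_words:
        i = labels.index(word)
        plt.scatter(x[i], y[i])
        plt.annotate(word, xy=(x[i], y[i]), xytext=(5, 2),
                     textcoords='offset points', ha='right', va='bottom')
    plt.show()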