Tags: python, encoding, nlp, word-embedding, glove

Encoding problem while training my own Glove model


I am training a GloVe model on my own corpus and I am having trouble saving/loading it in UTF-8 format.

Here is what I tried:

from glove import Corpus, Glove

# training data
lines = [['woman', 'umbrella', 'silhouetted'], ['person', 'black', 'umbrella']]

# GloVe training
corpus = Corpus()
corpus.fit(lines, window=4)
glove = Glove(no_components=4, learning_rate=0.1)
glove.fit(corpus.matrix, epochs=10, no_threads=8, verbose=True)
glove.add_dictionary(corpus.dictionary)
glove.save('glove.model.txt')

The saved file glove.model.txt is unreadable, and I haven't managed to save it with UTF-8 encoding.

When I try to read it, for example by converting it to word2vec format:

from gensim.models.keyedvectors import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
glove2word2vec(glove_input_file="glove.model.txt",
               word2vec_output_file="gensim_glove_vectors.txt")

model = KeyedVectors.load_word2vec_format("gensim_glove_vectors.txt", binary=False)

I get the following error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte

Any idea how I could use my own GloVe model?


Solution

  • I just found a way to save the data in UTF-8 format; I'm sharing it here in case someone faces the same problem.

    Instead of using the GloVe saving method glove.save('glove.model.txt'), which pickles the model to a binary file (hence the 0x80 byte at position 0 that the UTF-8 codec rejects), write the GloVe record yourself:

    with open("results_glove.txt", "w") as f:
        for word in glove.dictionary:
            f.write(word)
            f.write(" ")
            for i in range(0, vector_size):
                f.write(str(glove.word_vectors[glove.dictionary[word]][i]))
                f.write(" ")
            f.write("\n")
    

    Then you will be able to read it back, for example as sketched below.
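
    Here is a minimal sketch of reading the file back, reusing the same gensim conversion as in the question (the file names are the ones from the snippets above):

    from gensim.models.keyedvectors import KeyedVectors
    from gensim.scripts.glove2word2vec import glove2word2vec

    # convert the plain GloVe text file to word2vec format (prepends a header line)
    glove2word2vec(glove_input_file="results_glove.txt",
                   word2vec_output_file="gensim_glove_vectors.txt")

    # the converted file is plain text, so binary=False
    model = KeyedVectors.load_word2vec_format("gensim_glove_vectors.txt", binary=False)

    # sanity check: look up one of the trained words
    print(model["umbrella"])

    As an aside, the binary file written by glove.save() is a pickle of the model, so it can still be loaded back inside glove-python itself with Glove.load('glove.model.txt'); it just can't be decoded as UTF-8 text.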