Tags: r, nlp, word-embedding, text2vec

How to resolve R Error using text2vec glove function: unused argument (grain_size = 100000)?


I'm trying to work through the text2vec vignette in the documentation to create word embeddings for some tweets:

head(twtdf$Tweet.content)
[1] "$NFLX $GS $INTC $YHOO $LVS\n$MSFT $HOG $QCOM $LUV $UAL\n$MLNX $UA $BIIB $GOOGL $GM $V\n$SKX $GE $CAT $MCD $AAL $SBUX"            
[2] "Good news frequent fliers. @AmericanAir says lower fares will be here for awhile"                   
[3] "Wall St. closing out the week with more earnings. What to watch:\n▶︎ $MCD\n▶︎ $AAL\n▶︎ $CAT\n"
[4] "Barrons loves $AAL at low multiple bc it's \"insanely profitable\". Someone tell them how cycles+ multiples work."               
[5] "These airlines are now offering in-flight Wi-Fi $DAL $AAL"          

I pretty much followed the guide as given:

library(text2vec)

twtdf <- read.csv("tweets.csv",header=T, stringsAsFactors = F)
twtdf$ID <- seq.int(nrow(twtdf))

tokens = twtdf$Tweet.content %>% tolower %>%  word_tokenizer
length(tokens)
it = itoken(tokens)
# create vocabulary
v = create_vocabulary(it) %>% 
  prune_vocabulary(term_count_min = 5)

# create co-occurrence vectorizer
vectorizer = vocab_vectorizer(v, grow_dtm = F, skip_grams_window = 5L)

#dtm <- create_dtm(it, vectorizer, grow_dtm = R)

it = itoken(tokens)
tcm = create_tcm(it, vectorizer)
glove_model = glove(tcm, word_vectors_size = 50, vocabulary = v, x_max = 10, learning_rate = .2)

fit(tcm, glove_model, n_iter = 15)

#when this was executed, R couldn't find the function
#fit <- GloVe(tcm = tcm, word_vectors_size = 50, x_max = 10, learning_rate = 0.2, num_iters = 15)

However, whenever I execute the glove() call that creates glove_model, I get the following error:

Error in .subset2(public_bind_env, "initialize")(...) : 
  unused argument (grain_size = 100000)
In addition: Warning message:
'glove' is deprecated.
Use 'GloVe' instead.

I did try using GloVe instead, but I get an error that R can't find the function, despite reinstalling the text2vec package and loading it with require().

To rule out a formatting issue with my data, I ran the same code on the bundled movie_review data and hit the same problem. To be thorough, I also tried specifying the grain_size argument explicitly, but got the same error. I checked the issues on the GitHub repository and found nothing there, nor anything on this site or via a web search.

Has anyone else encountered this, or is it a beginner mistake on my part?


Solution

  • Use the correct constructor for the model: glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = vocab, x_max = 10)

    glove() is the old interface from a very old version of the package.
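For completeness, a minimal end-to-end sketch using the R6 constructor. Note that the argument names have changed across text2vec releases: older versions take word_vectors_size and vocabulary (as in the answer above), while recent versions use rank and drop the vocabulary argument, training via fit_transform instead of a standalone fit(). This assumes twtdf is already loaded as in the question.

```r
library(text2vec)

# tokenize the tweets
tokens <- word_tokenizer(tolower(twtdf$Tweet.content))
it <- itoken(tokens)

# build and prune the vocabulary
vocab <- prune_vocabulary(create_vocabulary(it), term_count_min = 5)
vectorizer <- vocab_vectorizer(vocab)

# term co-occurrence matrix; the skip-gram window now goes here,
# not in vocab_vectorizer()
tcm <- create_tcm(it, vectorizer, skip_grams_window = 5L)

# recent API; on older versions use:
# GlobalVectors$new(word_vectors_size = 50, vocabulary = vocab, x_max = 10)
glove <- GlobalVectors$new(rank = 50, x_max = 10)
wv_main <- glove$fit_transform(tcm, n_iter = 15)

# final word vectors = main + context components
word_vectors <- wv_main + t(glove$components)
```

The key change is that training is now a method on the model object (fit_transform) rather than a free function applied to the TCM, which is why the standalone fit(tcm, glove_model, ...) call in the question no longer works.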