python · scikit-learn · nltk · text-classification · nltk-trainer

How to classify new data using a pre-trained model - Python Text Classification (NLTK and Scikit)


I am very new to text classification, and I am trying to classify each line of a dataset composed of Twitter comments according to some pre-defined topics.

I have used the code below in a Jupyter Notebook to build and train a model on a training dataset. I chose a supervised approach in Python with NLTK and scikit-learn, as unsupervised ones (like LDA) were not giving me good results.

I followed these steps so far:

  1. Manually categorised the topics of a training dataset;
  2. Applied the training dataset to the code below and trained the model, resulting in an accuracy of approx. 82%.

Now I want to use this model to automatically categorise the topics of another dataset (i.e., my test dataset). Most posts only cover the training part, so it's quite frustrating for a newcomer to figure out how to take the trained model and actually use it.

Hence, the question is: with the code below, how can I now use the trained model to classify a new dataset?

I appreciate your help.

This tutorial is very good, and I used it as a reference for the code below: https://medium.com/@ishan16.d/text-classification-in-python-with-scikit-learn-and-nltk-891aa2d0ac4b

My model building and training code:

#Do library and methods import

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from nltk.tokenize import RegexpTokenizer
from nltk import WordNetLemmatizer
from nltk.stem import PorterStemmer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import nltk
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import regex as re
import requests


# Import dataset

df = pd.read_csv(r'C:\Users\user_name\Downloads\Train_data.csv', delimiter=';')


# Tokenize

def tokenize(x):
    tokenizer = RegexpTokenizer(r'\w+')
    return tokenizer.tokenize(x)

df['tokens'] = df['Tweet'].map(tokenize)


# Stem and Lemmatize

nltk.download('wordnet')
nltk.download('omw-1.4')

def stemmer(x):
    stemmer = PorterStemmer()
    return ' '.join([stemmer.stem(word) for word in x])

def lemmatize(x):
    lemmatizer = WordNetLemmatizer()
    return ' '.join([lemmatizer.lemmatize(word) for word in x])

df['lemma'] = df['tokens'].map(lemmatize)
df['stems'] = df['tokens'].map(stemmer)


# set up feature matrix and target column

X = df['lemma']
y = df['Topic']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 13)


# Create our pipeline with a vectorizer and our naive Bayes classifier

pipe_mnnb = Pipeline(steps = [('tf', TfidfVectorizer()), ('mnnb', MultinomialNB())])


# Create parameter grid

pgrid_mnnb = {
 'tf__max_features' : [1000, 2000, 3000],
 'tf__stop_words' : ['english', None],
 'tf__ngram_range' : [(1,1),(1,2)],
 'tf__use_idf' : [True, False],
 'mnnb__alpha' : [0.1, 0.5, 1]
}


# Set up the grid search and fit the model

gs_mnnb = GridSearchCV(pipe_mnnb, pgrid_mnnb, cv=5, n_jobs=-1)
gs_mnnb.fit(X_train, y_train)


# Check the score

# print both scores; in a notebook only the last expression's value is displayed
print(gs_mnnb.score(X_train, y_train))
print(gs_mnnb.score(X_test, y_test))


# Check the parameters

gs_mnnb.best_params_


# Get predictions

preds_mnnb = gs_mnnb.predict(X)
df['preds'] = preds_mnnb


# Print resulting dataset

print(df.shape)
df.head()

Solution

  • It seems that after training you just have to do as in your validation step, using the grid searcher directly: in scikit-learn, a fitted GridSearchCV itself acts as a model with the best hyperparameters found (by default it is refit on the whole training set). So take the X you want to classify and run

    preds_mnnb = gs_mnnb.predict(X)
    

    preds_mnnb should contain what you expect.
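
    To make this concrete for a new dataset, here is a minimal sketch. It assumes the new file has the same 'Tweet' column and delimiter as the training data, and it reuses the tokenize and lemmatize functions defined in your question (the file name New_data.csv is hypothetical):

    # Hypothetical new file; adjust the path and delimiter to your data
    new_df = pd.read_csv(r'C:\Users\user_name\Downloads\New_data.csv', delimiter=';')

    # Apply the same preprocessing used for training: tokenize, then lemmatize
    new_df['tokens'] = new_df['Tweet'].map(tokenize)
    new_df['lemma'] = new_df['tokens'].map(lemmatize)

    # The fitted grid searcher predicts with its best pipeline, so its
    # internal TfidfVectorizer handles the vectorization for you
    new_df['preds'] = gs_mnnb.predict(new_df['lemma'])
    new_df.head()

    If you want to reuse the model in a later session without retraining, you can persist the fitted grid searcher with joblib and load it back:

    import joblib

    joblib.dump(gs_mnnb, 'gs_mnnb.joblib')   # save after training
    gs_mnnb = joblib.load('gs_mnnb.joblib')  # load in a new session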