machine-learning, classification, weka, document-classification

Text classification with Weka


I'm building a text classifier in Java with the Weka library.

First I remove stopwords, then I apply a stemmer (e.g. converting cars to car). Right now I have 6 predefined categories. I train the classifier on 5 documents per category, and the lengths of the documents are similar.

The results are OK when the text to be classified is short, but when the text is longer than 100 words the results get stranger and stranger.

I return the probabilities for each category as follows:

Probability:

[0.0015560238056109177, 0.1808919321002592, 0.6657404531908249, 0.004793498469427115, 
0.13253647895234325, 0.014481613481534815] 

which is a pretty reliable classification.

But when I use texts longer than around 100 words I get results like:

Probability: [1.2863123678314889E-5, 4.3728547754744305E-5, 0.9964710903856974, 
5.539960514402068E-5, 0.002993481218084141, 4.234371196414616E-4]

which is too good to be true.

Right now I'm using Naive Bayes Multinomial to classify the documents. I have read that it can act strangely on longer texts. Might that be my problem?
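The overconfident probabilities are expected from how multinomial Naive Bayes combines evidence: the posterior for each class is proportional to a product of per-word likelihoods, so every additional word multiplies in more evidence, and the normalized probabilities are pushed toward 0 and 1 as the document grows. A minimal sketch of that effect (the likelihoods 0.012 and 0.010 are made-up numbers for illustration, not from any real model, and equal class priors are assumed):

```java
// Demonstrates why Naive Bayes posteriors saturate on long documents.
// Two classes; a word is slightly more likely under class A (0.012)
// than under class B (0.010). The posterior for A is computed for a
// document containing n occurrences of that word.
public class NaiveBayesSaturation {

    // P(A | doc of n word occurrences), equal priors, log space for stability.
    static double posteriorA(int n, double pWordGivenA, double pWordGivenB) {
        double logA = n * Math.log(pWordGivenA); // log-likelihood under A
        double logB = n * Math.log(pWordGivenB); // log-likelihood under B
        // Normalize with the log-sum-exp trick to avoid underflow.
        double max = Math.max(logA, logB);
        double za = Math.exp(logA - max);
        double zb = Math.exp(logB - max);
        return za / (za + zb);
    }

    public static void main(String[] args) {
        // A short document: posterior is confident but not extreme.
        System.out.printf("n=10:  P(A|doc) = %.4f%n",
                posteriorA(10, 0.012, 0.010));
        // A long document: the same tiny per-word edge compounds
        // until the posterior is essentially 1.
        System.out.printf("n=100: P(A|doc) = %.4f%n",
                posteriorA(100, 0.012, 0.010));
    }
}
```

With 10 occurrences the posterior is around 0.86; with 100 it is indistinguishable from 1.0, which matches the near-zero and near-one probabilities you see for longer texts. This is a property of the model's independence assumption, not necessarily a bug in your code.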

Why is this happening?


Solution

  • There can be multiple factors behind this behavior. If your training and test texts are not from the same domain, this can happen. Also, I believe adding more documents to every category should help; 5 documents per category seems very few. If you do not have more training documents, or it is difficult to obtain more, you can synthetically add positive and negative instances to your training set (see the SMOTE algorithm for details). Keep us posted.