Tags: machine-learning, scikit-learn, naive-bayes

How to calculate feature_log_prob_ in the naive_bayes MultinomialNB


Here's my code:

# Load libraries
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
# Create text
text_data = np.array(['Tim is smart!',
                      'Joy is the best',
                      'Lisa is dumb',
                      'Fred is lazy',
                      'Lisa is lazy'])
# Create target vector
y = np.array([1,1,0,0,0])
# Create bag of words
count = CountVectorizer()
bag_of_words = count.fit_transform(text_data)

# Create feature matrix
X = bag_of_words.toarray()

mnb = MultinomialNB(alpha = 1, fit_prior = True, class_prior = None)
mnb.fit(X,y)

print(count.get_feature_names())
# output:['best', 'dumb', 'fred', 'is', 'joy', 'lazy', 'lisa', 'smart', 'the', 'tim']


print(mnb.feature_log_prob_) 
# output 
[[-2.94443898 -2.2512918  -2.2512918  -1.55814462 -2.94443898 -1.84582669
  -1.84582669 -2.94443898 -2.94443898 -2.94443898]
 [-2.14006616 -2.83321334 -2.83321334 -1.73460106 -2.14006616 -2.83321334
  -2.83321334 -2.14006616 -2.14006616 -2.14006616]]

My question is:
For the word "best", the log probability for class 1 is -2.14006616.
What is the formula used to calculate this score?

I tried log(P(best | y=1)) = log(1/2), but that does not give -2.14006616.


Solution

  • From the documentation we can infer that feature_log_prob_ corresponds to the empirical log probability of features given a class. Take the feature "best" as an example: its log probability for class 1 is -2.14006616 (as you pointed out). Converting it back to an actual probability gives np.exp(-2.14006616) = 0.11764. Let's step back and see how and why the probability of "best" in class 1 is 0.11764. As per the documentation of Multinomial Naive Bayes, these probabilities are computed using the formula below:

    theta_yi = (N_yi + alpha) / (N_y + alpha * n)

    Here the numerator N_yi corresponds to the number of times the feature "best" appears in class 1 (the class of interest in this example) in the training set, and the denominator N_y is the total count of all features for class 1. We also add a small smoothing value alpha to prevent the probabilities from going to zero, and n is the total number of features, i.e. the size of the vocabulary. Computing these numbers for our example:

    N_yi = 1   # "best" appears once in the class `1` documents
    N_y = 7    # total count of all words in the class `1` documents
    alpha = 1  # default smoothing value in sklearn
    n = 10     # size of the vocabulary

    Required_probability = (1 + 1) / (7 + 1 * 10) = 0.11764
    

    You can do the math in a similar fashion for any given feature and class; the short sketch below recomputes the whole matrix at once.
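
    For completeness, here is a minimal sketch that recomputes the entire feature_log_prob_ matrix with this formula and compares it to sklearn's output. It reuses the X, y, and mnb objects from the question; the helper variable names (class_feature_counts, class_totals, manual_log_prob) are mine for illustration, not part of sklearn's API.

    import numpy as np

    # N_yi: per-class feature counts, one row per class (rows ordered like mnb.classes_)
    class_feature_counts = np.array([X[y == c].sum(axis=0) for c in mnb.classes_])

    # N_y: total count of all features per class
    class_totals = class_feature_counts.sum(axis=1, keepdims=True)

    alpha = 1          # smoothing value passed to MultinomialNB
    n = X.shape[1]     # vocabulary size (10 here)

    # Smoothed empirical log probability of each feature given each class
    manual_log_prob = np.log((class_feature_counts + alpha) / (class_totals + alpha * n))

    print(np.allclose(manual_log_prob, mnb.feature_log_prob_))  # True

    # Convert the log score for "best" in class 1 back to a probability
    print(np.exp(mnb.feature_log_prob_[1, 0]))  # ~0.11765

    The comparison should print True, which confirms the worked example above for every word and class at once.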