I am using LIME to trace why the model makes its decision when predicting whether a sentence is NEG, POS, or NEUTRAL. In most cases LIME explains the prediction correctly, but in a case like this one: I entered a NEG sentence, the model predicted it as NEUTRAL, yet LIME's visualization shows NEG with the highest percentage. Why do I get a logical contradiction like this?
You are not providing a lot of details, so my answer will be similarly general: your original model is making a wrong prediction. LIME then fits a linear approximation of that model locally around your instance. Because of the approximate nature of this linear surrogate, it is not exactly the original model and deviates from it. In your case the original model gives a wrong prediction (NEUTRAL), and the deviation of the linear approximation happens, by chance, to point in the direction of the right answer (NEG), so you get the right answer from the approximation even though the original model was wrong.
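
If you want to check this on your own instance, here is a minimal sketch of how to compare the model's output with the local surrogate's output (the toy pipeline, the class order, and attribute names such as `exp.score` and `exp.local_pred` are my assumptions based on recent versions of the `lime` package; replace the toy model with your own classifier):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Tiny toy pipeline just to make the sketch runnable; use your real model here.
train_texts = ["terrible awful bad", "great awesome good", "okay fine average",
               "bad horrible", "good lovely", "fine okay"]
train_labels = [0, 2, 1, 0, 2, 1]                # 0=NEG, 1=NEUTRAL, 2=POS (assumed order)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

class_names = ["NEG", "NEUTRAL", "POS"]          # must match the probability columns
explainer = LimeTextExplainer(class_names=class_names)

text = "the negative sentence you are debugging"
exp = explainer.explain_instance(text, model.predict_proba,
                                 num_features=10, labels=[0, 1, 2])

# 1. What the original model actually predicts for this instance
print("model probabilities:", model.predict_proba([text])[0])

# 2. How faithful the local linear surrogate is and what it predicts itself.
#    Attribute names (score = R^2 of the weighted local fit, local_pred = the
#    surrogate's own prediction) are as in recent lime versions and may differ in yours.
print("local fit R^2:", exp.score)
print("surrogate prediction:", exp.local_pred)

# 3. Per-class word weights that the LIME visualization is built from
for label in exp.available_labels():
    print(class_names[label], exp.as_list(label=label))
```

If the surrogate's local prediction disagrees with the model's probabilities, or the local R² is low, the visualization is simply not a faithful picture of the model at that point, which is exactly the situation described above.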