python, machine-learning, statsmodels, non-linear-regression, poisson

How can we calculate mean absolute error (MAE) for zero-inflated Poisson regression and zero-inflated negative binomial regression?


I am trying to use Python to calculate the mean absolute error (MAE) for zero-inflated Poisson regression and zero-inflated negative binomial regression. I split the data into training and testing sets. I am using the code below, but it does not work:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
import statsmodels.api as sm
import statsmodels.formula.api as smf
import tensorflow as tf
df = pd.read_excel('....', sheet_name='Sheet1')
print(df.head())
X = df[['a', 'b', 'c', 'd', 'e', 'f']]
y = df['g']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

from statsmodels.discrete.count_model import ZeroInflatedPoisson
y_zip = y_train.values

y_zip_test = y_test.values

X_count =  X_train.values # Predictors for count part
X_zero = X_train.values  # Predictors for zero-inflation part

X_count_test = X_test.values
X_zero_test = X_test.values

# Add a constant for the intercept
X_count = sm.add_constant(X_count)
X_zero = sm.add_constant(X_zero)

# Fit the ZIP model
zip_model = ZeroInflatedPoisson(endog=y_zip, exog=X_count, exog_infl=X_zero, inflation='logit')
zip_model_fit = zip_model.fit()
print(zip_model_fit.summary())


# Make predictions
y_pred = zip_model_fit.predict(X_count_test)

# Calculate MAE
mae = np.mean(np.abs(y_zip_test - y_pred))
print(f'Mean Absolute Error: {mae}')

The result is below:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[3], line 33
     29 print(zip_model_fit.summary())
     32 # Make predictions
---> 33 y_pred = zip_model_fit.predict(X_count_test)
     35 # Calculate MAE of test
     36 mae = np.mean(np.abs(y_zip_test - y_pred))

File ~\anaconda3\envs\tf\lib\site-packages\statsmodels\base\model.py:1174, in Results.predict(self, exog, transform, *args, **kwargs)
   1127 """
   1128 Call self.model.predict with self.params as the first argument.
   1129 
   (...)
   1169 returned prediction.
   1170 """
   1171 exog, exog_index = self._transform_predict_exog(exog,
   1172                                                 transform=transform)
-> 1174 predict_results = self.model.predict(self.params, exog, *args,
   1175                                      **kwargs)
   1177 if exog_index is not None and not hasattr(predict_results,
   1178                                           'predicted_values'):
   1179     if predict_results.ndim == 1:

File ~\anaconda3\envs\tf\lib\site-packages\statsmodels\discrete\count_model.py:453, in GenericZeroInflated.predict(self, params, exog, exog_infl, exposure, offset, which, y_values)
    449 params_main = params[self.k_inflate:]
    451 prob_main = 1 - self.model_infl.predict(params_infl, exog_infl)
--> 453 lin_pred = np.dot(exog, params_main[:self.exog.shape[1]]) + exposure + offset
    455 # Refactor: This is pretty hacky,
    456 # there should be an appropriate predict method in model_main
    457 # this is just prob(y=0 | model_main)
    458 tmp_exog = self.model_main.exog

ValueError: shapes (21,6) and (7,) not aligned: 6 (dim 1) != 7 (dim 0)

The error can be solved by using DataFrames, e.g. y_train, X_train = dmatrices(expr, recreated_train_data, return_type='dataframe'), as sketched below.
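
A rough sketch of what that workaround looks like (the formula string and the recreated train/test DataFrames are placeholders for my actual data):

from patsy import dmatrices

# Placeholder formula; patsy adds the intercept column automatically
expr = 'g ~ a + b + c + d + e + f'

# recreated_train_data / recreated_test_data are placeholder DataFrames
# holding the train/test rows together with the target column g
y_train_dm, X_train_dm = dmatrices(expr, recreated_train_data, return_type='dataframe')
y_test_dm, X_test_dm = dmatrices(expr, recreated_test_data, return_type='dataframe')

zip_model = ZeroInflatedPoisson(endog=y_train_dm, exog=X_train_dm,
                                exog_infl=X_train_dm, inflation='logit')
zip_model_fit = zip_model.fit()
y_pred = zip_model_fit.predict(X_test_dm, exog_infl=X_test_dm)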

However, with that approach I run into a problem where the model does not converge, with the warnings below:

C:\Users\Admin\anaconda3\envs\tf\lib\site-packages\scipy\optimize\_optimize.py:1291: OptimizeWarning: Maximum number of iterations has been exceeded.
  res = _minimize_bfgs(f, x0, args, fprime, callback=callback, **opts)
C:\Users\Admin\anaconda3\envs\tf\lib\site-packages\statsmodels\base\model.py:607: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  warnings.warn("Maximum Likelihood optimization failed to "

How can I resolve this error?


Solution

  • You apply the following step to your training data:

    # Add a constant for the intercept
    X_count = sm.add_constant(X_count)
    X_zero = sm.add_constant(X_zero)
    

    However, you do not apply it to your testing data. I believe that is the problem, as the dimensions are off by one according to your error (6 test columns vs. 7 fitted parameters). A sketch of the fix follows below.
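
    A minimal sketch of the fix, assuming the variable names from your script; exog_infl is passed explicitly so the zero-inflation part also receives the intercept column:

    # Add the same constant to the test design matrices
    X_count_test = sm.add_constant(X_count_test)
    X_zero_test = sm.add_constant(X_zero_test)

    # Predict expected counts on the test set (both model parts now
    # see matrices with the intercept column)
    y_pred = zip_model_fit.predict(X_count_test, exog_infl=X_zero_test)

    # Calculate MAE on the test set
    mae = np.mean(np.abs(y_zip_test - y_pred))
    print(f'Mean Absolute Error: {mae}')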