python-3.x, scikit-learn, k-means, grid-search, gridsearchcv

K-Means GridSearchCV hyperparameter tuning


I am trying to perform hyperparameter tuning for Spatio-Temporal K-Means clustering by using it in a pipeline with a Decision Tree classifier. The idea is to use the K-Means clustering algorithm to generate a cluster-distance space matrix and cluster labels, which are then passed to a Decision Tree classifier. For the hyperparameter tuning, I only use parameters of the K-Means algorithm.

I am using Python 3.8 and sklearn 0.22.

The data I am interested in has 3 columns/attributes: 'time', 'x' and 'y' (x and y are spatial coordinates).
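
A minimal synthetic stand-in with the same structure (the real dataset is not shown here; this frame exists only so the snippets below are reproducible) could look like:

import numpy as np
import pandas as pd

# Hypothetical toy data: a float 'time' column followed by 'x' and 'y'
# spatial coordinates in [0, 1]; enough rows so data.loc[:500] works below.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    'time': np.repeat(np.arange(101, dtype=float), 5),
    'x': rng.random(505),
    'y': rng.random(505),
})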

The code is:

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import check_array


class ST_KMeans(BaseEstimator, TransformerMixin):
# class ST_KMeans():
    """
    Note that the K-Means clustering algorithm is designed for Euclidean distances.
    It may stop converging with other distances, when the mean is no longer the
    best estimate of the cluster 'center'.

    The 'mean' minimizes squared differences (or, squared Euclidean distance).
    If you want a different distance function, you need to replace the mean with
    an appropriate center estimation.


    Parameters:

    k : int
        The number of clusters.
    
    eps1 : float, default=0.5
        The spatial density threshold (maximum spatial distance) between 
        two points to be considered related.

    eps2 : float, default=10
        The temporal threshold (maximum temporal distance) between two 
        points to be considered related.

    metric : string, default='euclidean'
        The used distance metric - more options are
        'braycurtis', 'canberra', 'chebyshev', 'cityblock', 'correlation',
        'cosine', 'dice', 'euclidean', 'hamming', 'jaccard', 'jensenshannon',
        'kulsinski', 'mahalanobis', 'matching', 'rogerstanimoto', 'sqeuclidean',
        'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath', 'yule'.
    
    n_jobs : int or None, default=-1
        The number of processes to start; -1 means use all processors (BE AWARE)


    Attributes:
    
    labels : array, shape = [n_samples]
        Cluster labels for the data
    """

    def __init__(self, k, eps1 = 0.5, eps2 = 10, metric = 'euclidean', n_jobs = 1):
        self.k = k
        self.eps1 = eps1
        self.eps2 = eps2
        # self.min_samples = min_samples
        self.metric = metric
        self.n_jobs = n_jobs


    def fit(self, X, Y = None):
        """
        Apply the ST K-Means algorithm 
        
        X : 2D numpy array. The first attribute of the array should be time attribute
            as float. The following positions in the array are treated as spatial
            coordinates.
            The structure should look like this [[time_step1, x, y], [time_step2, x, y]..]
            
            For example 2D dataset:
            array([[0,0.45,0.43],
            [0,0.54,0.34],...])


        Returns:

        self
        """
        
        # check if input is correct
        X = check_array(X)

        # type(X)
        # numpy.ndarray

        # Check arguments-
        if not self.eps1 > 0.0 or not self.eps2 > 0.0:
            raise ValueError('eps1 and eps2 must be positive')

        # Get dimensions of 'X'-
        # n - number of rows
        # m - number of attributes/columns-
        n, m = X.shape


        # Compute square-form Euclidean distance matrices for the 'time' and spatial attributes-
        time_dist = squareform(pdist(X[:, 0].reshape(n, 1), metric = self.metric))
        euc_dist = squareform(pdist(X[:, 1:], metric = self.metric))

        '''
        Filter the Euclidean distance matrix using the time distance matrix. The snippet keeps the
        Euclidean distances for all index pairs whose time distance is at most 'eps2'. For every other
        pair, the entry is replaced with 2 * 'eps1', so those points are not considered related during
        clustering - their distance is larger than 'eps1'.
        '''
        # filter 'euc_dist' matrix using 'time_dist' matrix-
        dist = np.where(time_dist <= self.eps2, euc_dist, 2 * self.eps1)
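
        # Toy illustration (values assumed for this comment only): with
        # eps2 = 10 and eps1 = 0.5,
        #     time_dist = [[ 0., 12.],      euc_dist = [[0. , 0.3],
        #                  [12.,  0.]]                  [0.3, 0. ]]
        # np.where keeps euc_dist entries where time_dist <= eps2 and writes
        # 2 * eps1 = 1.0 everywhere else, so temporally distant pairs end up
        # farther apart than eps1 and are effectively treated as unrelated.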


        # Initialize K-Means clustering model-
        self.kmeans_clust_model = KMeans(
            n_clusters = self.k, init = 'k-means++',
            n_init = 10, max_iter = 300,
            precompute_distances = 'auto', algorithm = 'auto')

        # Train model-
        self.kmeans_clust_model.fit(dist)


        self.labels = self.kmeans_clust_model.labels_
        self.X_transformed = self.kmeans_clust_model.fit_transform(X)

        return self


    def transform(self, X):
        if not isinstance(X, np.ndarray):
            # Convert to numpy array-
            X = X.values

        # Get dimensions of 'X'-
        # n - number of rows
        # m - number of attributes/columns-
        n, m = X.shape


        # Compute square-form Euclidean distance matrices for the 'time' and spatial attributes-
        time_dist = squareform(pdist(X[:, 0].reshape(n, 1), metric = self.metric))
        euc_dist = squareform(pdist(X[:, 1:], metric = self.metric))

        # filter 'euc_dist' matrix using 'time_dist' matrix-
        dist = np.where(time_dist <= self.eps2, euc_dist, 2 * self.eps1)

        # return self.kmeans_clust_model.transform(X)
        return self.kmeans_clust_model.transform(dist)


# Initialize ST-K-Means object-
st_kmeans_algo = ST_KMeans(
    k = 5, eps1=0.6,
    eps2=9, metric='euclidean',
    n_jobs=1
    )

Y = np.zeros(shape = (501,))

# Train on a chunk of dataset-
st_kmeans_algo.fit(data.loc[:500, ['time', 'x', 'y']], Y)

# Get clustered data points labels-
kmeans_labels = st_kmeans_algo.labels

kmeans_labels.shape
# (501,)


# Get labels for points clustered using trained model-
# kmeans_transformed = st_kmeans_algo.X_transformed
kmeans_transformed = st_kmeans_algo.transform(data.loc[:500, ['time', 'x', 'y']])

kmeans_transformed.shape
# (501, 5)

dtc = DecisionTreeClassifier()

dtc.fit(kmeans_transformed, kmeans_labels)

y_pred = dtc.predict(kmeans_transformed)

# Get model performance metrics-
accuracy = accuracy_score(kmeans_labels, y_pred)
precision = precision_score(kmeans_labels, y_pred, average='macro')
recall = recall_score(kmeans_labels, y_pred, average='macro')

print("\nDT model metrics are:")
print("accuracy = {0:.4f}, precision = {1:.4f} & recall = {2:.4f}\n".format(
    accuracy, precision, recall
    ))

# DT model metrics are:
# accuracy = 1.0000, precision = 1.0000 & recall = 1.0000




# Hyper-parameter Tuning:

# Define steps of pipeline-
pipeline_steps = [
    ('st_kmeans_algo' ,ST_KMeans(k = 5, eps1=0.6, eps2=9, metric='euclidean', n_jobs=1)),
    ('dtc', DecisionTreeClassifier())
    ]

# Instantiate a pipeline-
pipeline = Pipeline(pipeline_steps)

kmeans_transformed.shape, kmeans_labels.shape
# ((501, 5), (501,))

# Train pipeline-
pipeline.fit(kmeans_transformed, kmeans_labels)




# Specify parameters to be hyper-parameter tuned-
params = [
    {
        'st_kmeans_algo__k': [3, 5, 7]
    }
    ]
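
# Note (possible extension, not used in the run above): other ST_KMeans
# __init__ arguments can be tuned with the same '<step>__<parameter>'
# naming convention that Pipeline exposes, e.g.:
# params = [{
#     'st_kmeans_algo__k': [3, 5, 7],
#     'st_kmeans_algo__eps1': [0.5, 0.6],
#     'st_kmeans_algo__eps2': [9, 10],
# }]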

# Initialize GridSearchCV object-
grid_cv = GridSearchCV(estimator=pipeline, param_grid=params, cv = 2)

# Train GridSearch on computed data from above-
grid_cv.fit(kmeans_transformed, kmeans_labels)

The 'grid_cv.fit()' call gives the following error:

ValueError                                Traceback (most recent call last) <ipython-input-38-e62a8e6db82e> in <module>
      5 
      6 # Train GridSearch on computed data from above-
----> 7 grid_cv.fit(kmeans_transformed, kmeans_labels)

~/.local/lib/python3.8/site-packages/sklearn/model_selection/_search.py in fit(self, X, y, groups, **fit_params)
    708                 return results
    709 
--> 710             self._run_search(evaluate_candidates)
    711 
    712         # For multi-metric evaluation, store the best_index_, best_params_ and

~/.local/lib/python3.8/site-packages/sklearn/model_selection/_search.py in _run_search(self, evaluate_candidates)
   1149     def _run_search(self, evaluate_candidates):
   1150         """Search all candidates in param_grid"""
-> 1151         evaluate_candidates(ParameterGrid(self.param_grid))
   1152 
   1153 

~/.local/lib/python3.8/site-packages/sklearn/model_selection/_search.py in evaluate_candidates(candidate_params)
    680                               n_splits, n_candidates, n_candidates * n_splits))
    681 
--> 682                 out = parallel(delayed(_fit_and_score)(clone(base_estimator),
    683                                                        X, y,
    684                                                        train=train, test=test,

~/.local/lib/python3.8/site-packages/joblib/parallel.py in __call__(self, iterable)
   1002             # remaining jobs.
   1003             self._iterating = False
-> 1004             if self.dispatch_one_batch(iterator):
   1005                 self._iterating = self._original_iterator is not None
   1006 

~/.local/lib/python3.8/site-packages/joblib/parallel.py in dispatch_one_batch(self, iterator)
    833                 return False
    834             else:
--> 835                 self._dispatch(tasks)
    836                 return True
    837 

~/.local/lib/python3.8/site-packages/joblib/parallel.py in _dispatch(self, batch)
    752         with self._lock:
    753             job_idx = len(self._jobs)
--> 754             job = self._backend.apply_async(batch, callback=cb)
    755             # A job can complete so quickly than its callback is
    756             # called before we get here, causing self._jobs to

~/.local/lib/python3.8/site-packages/joblib/_parallel_backends.py in apply_async(self, func, callback)
    207     def apply_async(self, func, callback=None):
    208         """Schedule a func to be run"""
--> 209         result = ImmediateResult(func)
    210         if callback:
    211             callback(result)

~/.local/lib/python3.8/site-packages/joblib/_parallel_backends.py in __init__(self, batch)
    588         # Don't delay the application, to avoid keeping the input
    589         # arguments in memory
--> 590         self.results = batch()
    591 
    592     def get(self):

~/.local/lib/python3.8/site-packages/joblib/parallel.py in __call__(self)
    253         # change the default number of processes to -1
    254         with parallel_backend(self._backend, n_jobs=self._n_jobs):
--> 255             return [func(*args, **kwargs)
    256                     for func, args, kwargs in self.items]
    257 

~/.local/lib/python3.8/site-packages/joblib/parallel.py in <listcomp>(.0)
    253         # change the default number of processes to -1
    254         with parallel_backend(self._backend, n_jobs=self._n_jobs):
--> 255             return [func(*args, **kwargs)
    256                     for func, args, kwargs in self.items]
    257 

~/.local/lib/python3.8/site-packages/sklearn/model_selection/_validation.py in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, return_n_test_samples, return_times, return_estimator, error_score)
    542     else:
    543         fit_time = time.time() - start_time
--> 544         test_scores = _score(estimator, X_test, y_test, scorer)
    545         score_time = time.time() - start_time - fit_time
    546         if return_train_score:

~/.local/lib/python3.8/site-packages/sklearn/model_selection/_validation.py in _score(estimator, X_test, y_test, scorer)
    589         scores = scorer(estimator, X_test)
    590     else:
--> 591         scores = scorer(estimator, X_test, y_test)
    592 
    593     error_msg = ("scoring must return a number, got %s (%s) "

~/.local/lib/python3.8/site-packages/sklearn/metrics/_scorer.py in __call__(self, estimator, *args, **kwargs)
     87                                       *args, **kwargs)
     88             else:
---> 89                 score = scorer(estimator, *args, **kwargs)
     90             scores[name] = score
     91         return scores

~/.local/lib/python3.8/site-packages/sklearn/metrics/_scorer.py in _passthrough_scorer(estimator, *args, **kwargs)
    369 def _passthrough_scorer(estimator, *args, **kwargs):
    370     """Function that wraps estimator.score"""
--> 371     return estimator.score(*args, **kwargs)
    372 
    373 

~/.local/lib/python3.8/site-packages/sklearn/utils/metaestimators.py in <lambda>(*args, **kwargs)
    114 
    115         # lambda, but not partial, allows help() to work with update_wrapper
--> 116         out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
    117         # update the docstring of the returned function
    118         update_wrapper(out, self.fn)

~/.local/lib/python3.8/site-packages/sklearn/pipeline.py in score(self, X, y, sample_weight)
    617         if sample_weight is not None:
    618             score_params['sample_weight'] = sample_weight
--> 619         return self.steps[-1][-1].score(Xt, y, **score_params)
    620 
    621     @property

~/.local/lib/python3.8/site-packages/sklearn/base.py in score(self, X, y, sample_weight)
    367         """
    368         from .metrics import accuracy_score
--> 369         return accuracy_score(y, self.predict(X), sample_weight=sample_weight)
    370 
    371 

~/.local/lib/python3.8/site-packages/sklearn/metrics/_classification.py in accuracy_score(y_true, y_pred, normalize, sample_weight)
    183 
    184     # Compute accuracy for each possible representation
--> 185     y_type, y_true, y_pred = _check_targets(y_true, y_pred)
    186     check_consistent_length(y_true, y_pred, sample_weight)
    187     if y_type.startswith('multilabel'):

~/.local/lib/python3.8/site-packages/sklearn/metrics/_classification.py in _check_targets(y_true, y_pred)
     78     y_pred : array or indicator matrix
     79     """
---> 80     check_consistent_length(y_true, y_pred)
     81     type_true = type_of_target(y_true)
     82     type_pred = type_of_target(y_pred)

~/.local/lib/python3.8/site-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
    209     uniques = np.unique(lengths)
    210     if len(uniques) > 1:
--> 211         raise ValueError("Found input variables with inconsistent numbers of"
    212                          " samples: %r" % [int(l) for l in lengths])
    213 

ValueError: Found input variables with inconsistent numbers of samples: [251, 250]

The different dimensions/shapes are:

kmeans_transformed.shape, kmeans_labels.shape, data.loc[:500, ['time', 'x', 'y']].shape                                       
# ((501, 5), (501,), (501, 3))

I don't get how the error arrives at "samples: [251, 250]"?

What's going wrong?


Solution

  • 250 and 251 are, respectively, the sizes of your train and validation splits in GridSearchCV.

    Look at your custom estimator...

    def transform(self, X):
    
        return self.X_transformed
    

    The original transform method doesn't apply any sort of operation; it simply returns the train data. We need an estimator that is able to transform new data (in your case, the validation fold inside GridSearchCV) in a flexible way. Change the transform method in this way:

    def transform(self, X):
    
        return self.kmeans_clust_model.transform(X)
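
    With cv=2 and 501 samples, the two folds contain 251 and 250 rows, which is exactly the pair of sample counts reported in the error. A rough usage sketch (assuming the rest of the class stays as in the question and 'kmeans_transformed' / 'kmeans_labels' are built the same way) of how the fixed estimator then behaves inside the grid search:

    # Sketch only - ST_KMeans here is assumed to use the flexible transform() above.
    pipeline = Pipeline([
        ('st_kmeans_algo', ST_KMeans(k=5, eps1=0.6, eps2=9, metric='euclidean', n_jobs=1)),
        ('dtc', DecisionTreeClassifier())
    ])

    grid_cv = GridSearchCV(estimator=pipeline,
                           param_grid=[{'st_kmeans_algo__k': [3, 5, 7]}],
                           cv=2)
    grid_cv.fit(kmeans_transformed, kmeans_labels)

    # Each validation fold is now transformed to one row per input sample, so the
    # scorer compares y_true and y_pred of equal length on every split.
    print(grid_cv.best_params_, grid_cv.best_score_)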