machine-learning, neural-network, k-means, biological-neural-network

A more accurate approach than k-means clustering


In a Radial Basis Function network (RBF network), the prototypes (the center vectors of the RBF functions) in the hidden layer must be chosen. This step can be performed in several ways:

One approach to making an intelligent selection of prototypes is to perform k-means clustering on the training set and use the cluster centers as the prototypes. As we all know, k-means clustering is characterized by its simplicity (it is fast), but it is not very accurate.
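To make the setup concrete, here is a minimal sketch of that prototype-selection step: a tiny NumPy implementation of Lloyd's k-means followed by Gaussian RBF hidden-layer activations. The data, the width parameter `sigma`, and the function names are illustrative assumptions, not part of any particular RBF library:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: returns the k cluster centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers

def rbf_hidden_layer(X, prototypes, sigma=1.0):
    """Gaussian RBF activations: exp(-||x - c||^2 / (2 * sigma^2))."""
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

# two well-separated blobs -> 2 prototypes, one per blob
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
prototypes = kmeans(X, k=2)
H = rbf_hidden_layer(X, prototypes)
print(H.shape)  # -> (100, 2): one activation per point per prototype
```

The hidden-layer matrix `H` would then feed a linear output layer trained by least squares; only the prototype-selection part is relevant to the question.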

That is why I would like to know: what other approach can be more accurate than k-means clustering?

Any help will be greatly appreciated.


Solution

  • Several k-means variations exist: k-medians, Partitioning Around Medoids (PAM), Fuzzy C-Means clustering, Gaussian mixture models trained with the expectation-maximization (EM) algorithm, k-means++, etc.

    I use PAM (Partitioning Around Medoids) to be more accurate when my dataset contains some "outliers" (noisy points whose values are very different from the rest) and I do not want the centers to be influenced by such data. In PAM, a center is called a medoid.
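    To illustrate the robustness that medoids buy, here is a small k-medoids sketch in NumPy. It uses the simple alternating ("Voronoi iteration") variant rather than PAM's full swap search, and the toy data with a single extreme outlier is an illustrative assumption:

    ```python
    import numpy as np

    def kmedoids(X, k, iters=100, seed=0):
        """Alternating k-medoids: centers are always actual data points,
        so a single extreme outlier cannot drag a center off the cluster."""
        rng = np.random.default_rng(seed)
        medoid_idx = rng.choice(len(X), size=k, replace=False)
        # precompute all pairwise distances
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        for _ in range(iters):
            # assign each point to its nearest medoid
            labels = D[:, medoid_idx].argmin(axis=1)
            new_idx = medoid_idx.copy()
            for j in range(k):
                members = np.where(labels == j)[0]
                if len(members) == 0:
                    continue
                # new medoid = member minimizing total distance to the others
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_idx[j] = members[costs.argmin()]
            if np.array_equal(new_idx, medoid_idx):
                break
            medoid_idx = new_idx
        return X[medoid_idx]

    # one tight cluster near 0 plus one extreme outlier at 100:
    # the medoid stays inside the cluster (0.2), while the mean (20.12)
    # would be pulled far toward the outlier
    X = np.array([[0.0], [0.1], [0.2], [0.3], [100.0]])
    print(kmedoids(X, k=1))  # -> [[0.2]]
    ```

    This is exactly the property described above: with a mean-based center the outlier at 100 dominates, whereas the medoid is constrained to be one of the observed points.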