I understand that scaling means centering the data (mean = 0) and scaling it to unit variance (variance = 1).
But what is the difference between preprocessing.scale(x) and preprocessing.StandardScaler() in scikit-learn?
They do exactly the same thing, but:

- preprocessing.scale(x) is just a function that transforms some data.
- preprocessing.StandardScaler() is a class supporting the Transformer API.

I would always use the latter, even if I did not need inverse_transform and co., which are supported by StandardScaler().
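A minimal sketch of the practical difference, using made-up toy arrays: the function is a one-shot operation, while the class remembers the training statistics and can reapply (or invert) them later.

```python
import numpy as np
from sklearn import preprocessing

X_train = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
X_test = np.array([[2.0, 3.0]])

# Function: one-shot transformation, nothing is stored.
scaled_once = preprocessing.scale(X_train)

# Class: fit on the training set, then reuse the same mean/std elsewhere.
scaler = preprocessing.StandardScaler()
scaled_train = scaler.fit_transform(X_train)       # same values as scaled_once
scaled_test = scaler.transform(X_test)             # uses the *training* statistics
restored = scaler.inverse_transform(scaled_train)  # recovers the original data
```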
Excerpt from the docs:

The function scale provides a quick and easy way to perform this operation on a single array-like dataset.

The preprocessing module further provides a utility class StandardScaler that implements the Transformer API to compute the mean and standard deviation on a training set so as to be able to later reapply the same transformation on the testing set. This class is hence suitable for use in the early steps of a sklearn.pipeline.Pipeline.
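To illustrate the Pipeline use case the docs mention, here is a rough sketch with a synthetic dataset (the classifier choice is arbitrary): the scaler is fit only on the training data, and the same transformation is applied automatically when scoring on the test set.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipe.fit(X_train, y_train)         # scaler statistics come from X_train only
print(pipe.score(X_test, y_test))  # X_test is scaled with the training stats
```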