My focus is on the caveat mentioned in the documentation of OneHotEncoder's categories parameter:
list : categories[i] holds the categories expected in the ith column. The passed categories should not mix strings and numeric values within a single feature, and should be sorted in case of numeric values.
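For reference, explicit categories are passed as one list per column, e.g. (a minimal sketch; the category lists here are just illustrative):
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(categories=[['London', 'NewYork', 'Paris'], ['FR', 'UK', 'US']])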
I'm trying two different approaches. The first one uses a hard-coded data frame.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
X = pd.DataFrame(
{'city': ['London', 'London', 'Paris', 'NewYork'],
'country': ['UK', 0.2, 'FR', 'US'],
'user_rating': [4, 5, 4, 3]}
)
categorical_features = ['city', 'country']
one_hot = OneHotEncoder()
transformer = ColumnTransformer([("one_hot", one_hot, categorical_features)], remainder="passthrough")
transformed_X = transformer.fit_transform(X)
This code throws a TypeError at transformed_X = transformer.fit_transform(X).
I wanted to try the same, but this time reading data from a CSV file instead of hard-coding it.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
X = pd.read_csv('data.csv', header=0)
categorical_features = ['city', 'country']
one_hot = OneHotEncoder()
transformer = ColumnTransformer([("one_hot", one_hot, categorical_features)], remainder="passthrough")
transformed_X = transformer.fit_transform(X)
The CSV file (data.csv) looks like this:
city,country,user_rating
London,UK,4
London,0.2,5
Paris,FR,4
NewYork,US,3
However, this code does not throw any errors. I can see four different encodings for country when I print transformed_X. It seems like scikit-learn treated 0.2 as a string instead of a float.
Can the mixed data type error be reproduced when reading from a CSV file? Or is that impossible because pandas infers the column type when reading the data, so the entire column gets the type object, unlike with the hard-coded data frame?
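A quick check seems to confirm this suspicion: after pd.read_csv, every value in the country column comes back as a string (a quick sketch against the frame just read):
print(X['country'].dtype)               # object
print(X['country'].map(type).tolist())  # all four values are <class 'str'>, including '0.2'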
Option 1 (pd.to_numeric)
After reading the data with pd.read_csv, use pd.to_numeric:
import pandas as pd
from io import StringIO
s = """city,country,user_rating
London,UK,4
London,0.2,5
Paris,FR,4
NewYork,US,3
"""
X = pd.read_csv(StringIO(s), header=0)
categorical_features = ['city', 'country']
X[categorical_features] = X[categorical_features].apply(
lambda x: pd.to_numeric(x, errors='coerce').fillna(x)
)
Output:
X['country'].tolist()
['UK', 0.2, 'FR', 'US']
Here, we attempt to convert all values to numeric data types; where that fails, we get NaN values, which we then fill with the original values.
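To double-check that the mixed types survived, inspect the value types (a quick sketch):
X['country'].map(type).tolist()
# [<class 'str'>, <class 'float'>, <class 'str'>, <class 'str'>]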
Option 2 (converters parameter)
With pd.read_csv, you can pass a custom function to the converters parameter:
def convert(val):
    try:
        return float(val)
    except ValueError:
        return val
categorical_features = ['city', 'country']
converters = {feature: convert for feature in categorical_features}
X = pd.read_csv(StringIO(s), header=0, converters=converters)
# same result
Here, we define a function convert that attempts to convert each value in a column to float; when it fails, it returns the original value. We use a dictionary comprehension to map the categorical columns to this function:
converters = {feature: convert for feature in categorical_features}
converters
{'city': <function __main__.convert(val)>,
'country': <function __main__.convert(val)>}
Via converters=converters, the function gets applied to the applicable columns as the file is read.
Note that option 1 will be much faster on a sizeable dataset, as pd.to_numeric is vectorized, meaning its logic is applied to an entire column at once, while converters will apply a function (like convert) to each value in a column individually.
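As a rough illustration of that gap (a sketch; exact numbers vary by machine and pandas version, and Series.map stands in here for the per-value converters path):
import timeit
big = pd.Series(['UK', '0.2', 'FR', 'US'] * 250_000)
t_vectorized = timeit.timeit(lambda: pd.to_numeric(big, errors='coerce').fillna(big), number=3)
t_per_value = timeit.timeit(lambda: big.map(convert), number=3)
print(t_vectorized, t_per_value)  # the vectorized version is typically much faster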
Reproducing the error:
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
one_hot = OneHotEncoder()
transformer = ColumnTransformer([("one_hot", one_hot, categorical_features)],
remainder="passthrough")
transformed_X = transformer.fit_transform(X)
Result:
TypeError: Encoders require their input to be uniformly strings or numbers. Got ['float', 'str']
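As a final sanity check (a sketch, going one step beyond the question): casting the column back to a single type makes the same transformer run without error, confirming the mixed types were the cause.
X['country'] = X['country'].astype(str)
transformed_X = transformer.fit_transform(X)  # works again; 0.2 is now the string '0.2'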