The problem of large categorical variables in machine learning

Recently, I was writing an article about dealing with categorical variables using techniques like one-hot encoding or dummy coding. I wondered what the correct approach is when a categorical variable has many unique values. After all, any such encoding would create a vast number of new features.

The first approach is not very sophisticated: we can replace the individual categories with broader groups. For example, if the feature contains the names of products in a grocery store, we can replace the product names with generic categories such as vegetables, cheese, bread, and so on.
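A quick sketch of this idea in Pandas (the product names and the mapping below are made up for illustration):

import pandas as pd

# Hypothetical product names and a hand-made mapping to broader groups
products = pd.DataFrame({
    'product': ['carrot', 'gouda', 'baguette', 'cheddar', 'onion']
})

product_groups = {
    'carrot': 'vegetable',
    'onion': 'vegetable',
    'gouda': 'cheese',
    'cheddar': 'cheese',
    'baguette': 'bread'
}

# Replace every product name with its group; unmapped products become NaN
products['product_group'] = products['product'].map(product_groups)
print(products)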

Feature hashing

What if there is no hierarchy? What if it is not possible to group categories in any meaningful way? I started looking for a solution, and I found a technique called “feature hashing.”

In short, we define a hashing function that reduces the dimensionality of the categorical variable by mapping every category to one of a fixed number of buckets, so many categories may end up with the same hash. Fortunately, if we use Scikit-learn, we don't need to write such a function ourselves because it already exists.
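Before we look at the Scikit-learn class, here is a minimal sketch of the idea in plain Python. It only illustrates the concept; it is not what Scikit-learn does internally (Scikit-learn uses a signed variant of MurmurHash3):

import hashlib

def hash_bucket(category, n_buckets=3):
    # Map a category name to one of n_buckets columns.
    # md5 is used only because it is deterministic across runs
    # (Python's built-in hash() is salted per process).
    digest = hashlib.md5(category.encode('utf-8')).hexdigest()
    return int(digest, 16) % n_buckets

for value in ['value_1', 'value_2', 'value_3', 'value_4', 'value_5']:
    print(value, 'goes to column', hash_bucket(value))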

As an input, we must give it the number of features. This value denotes the number of columns in the output, that is, the number of columns the hasher can use to encode the categories. It is not the number of groups we want to get!

from sklearn.feature_extraction import FeatureHasher
import pandas as pd

data = pd.DataFrame([
    ['value_1', 23],
    ['value_2', 13],
    ['value_3', 42],
    ['value_4', 13],
    ['value_2', 46],
    ['value_1', 28],
    ['value_2', 32],
    ['value_3', 87],
    ['value_4', 98],
    ['value_5', 86],
    ['value_3', 45],
    ['value_2', 73],
    ['value_1', 36],
    ['value_3', 93]
], columns=['feature1', 'feature2'])

# n_features is the number of output columns, not the number of groups
feature_hasher = FeatureHasher(n_features=3, input_type='string')

# With input_type='string', every sample must be an iterable of strings,
# so each category is wrapped in a one-element list before hashing.
hashed_features = feature_hasher.fit_transform(
    data['feature1'].apply(lambda value: [value])
)

encoded = pd.concat([
    pd.DataFrame(hashed_features.toarray()),
    data['feature2']
], axis=1)
print(encoded)
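In the resulting DataFrame, every category lands in exactly one of the three hashed columns, with a value of +1 or -1 (the sign comes from the hasher's default alternate_sign setting). Since there are five distinct categories and only three columns, at least two categories inevitably share a column. That collision is the price we pay for the smaller number of features.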

There is one problem with the FeatureHasher class in Scikit-learn: I could not get it to run inside a ColumnTransformer pipeline because it raises an error.

I have reported the issue. If you want it fixed too, please upvote it ;)
