A scikit-learn style transformer that scales numeric variables to normal distributions.
Input normalization is important for neural networks. Gauss Rank is an effective algorithm for converting numeric variable distributions to normal distributions. It is based on rank transformation: first, assign equally spaced values between -1 and 1 to the sorted features, then apply the inverse error function (erfinv) to shape them into a Gaussian.
This generally works much better than StandardScaler or MinMaxScaler.
- Interview with the Kaggle competition winner (Michael Jahrer)
- Blog post introducing the concept of Gauss Rank with a simple implementation (Zygmunt Zając)
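The two steps described above — spacing the sorted ranks evenly over (-1, 1) and applying erfinv — can be sketched in a few lines. This is a minimal numpy/scipy illustration of the idea, not the library's actual implementation:

```python
import numpy as np
from scipy.special import erfinv

def gauss_rank_transform(x, epsilon=1e-9):
    """Map a 1-D numeric array to an approximately normal distribution
    via rank transformation (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    # rank each value: 0 for the smallest, len(x) - 1 for the largest
    ranks = np.argsort(np.argsort(x))
    # rescale ranks to equally spaced points in [-1, 1]
    spaced = ranks / (len(x) - 1) * 2 - 1
    # pull the endpoints slightly inward so erfinv stays finite
    spaced = np.clip(spaced, -1 + epsilon, 1 - epsilon)
    return erfinv(spaced)

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)   # heavily skewed input
z = gauss_rank_transform(x)      # roughly bell-shaped output
```

Because the transform only depends on ranks, it is monotonic: the order of the input values is preserved exactly.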
Gauss Rank Scaler is a fully sklearn-compatible transformer that can be used in pipelines or existing scripts. Supported input formats include numpy arrays and pandas dataframes. All columns passed to the transformer are scaled.
```python
from gauss_rank_scaler import GaussRankScaler
import pandas as pd
from sklearn.datasets import load_boston  # note: removed in scikit-learn >= 1.2
%matplotlib inline

# prepare some data
bunch = load_boston()
df_X_train = pd.DataFrame(bunch.data[:250], columns=bunch.feature_names)
df_X_test = pd.DataFrame(bunch.data[250:], columns=bunch.feature_names)

# plot histograms of two numeric variables
_ = df_X_train[['CRIM', 'DIS']].hist()

# scale the numeric variables with Gauss Rank Scaler
scaler = GaussRankScaler()
df_X_new_train = scaler.fit_transform(df_X_train[['CRIM', 'DIS']])

# plot histograms of the scaled variables
_ = pd.DataFrame(df_X_new_train, columns=['CRIM', 'DIS']).hist()

# scale the test dataset with the already fitted scaler
df_X_new_test = scaler.transform(df_X_test[['CRIM', 'DIS']])
```
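Because the scaler follows the sklearn transformer API, it also drops into a `Pipeline` like any other preprocessing step. The sketch below illustrates that pattern with a self-contained toy transformer (`SimpleGaussRank` here is a hypothetical stand-in written for this example, not the actual GaussRankScaler):

```python
import numpy as np
from scipy.special import erfinv
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Ridge

class SimpleGaussRank(BaseEstimator, TransformerMixin):
    """Toy rank-based scaler used to illustrate pipeline integration."""

    def fit(self, X, y=None):
        # remember the sorted training values of each column
        self.sorted_ = [np.sort(col) for col in np.asarray(X, float).T]
        return self

    def transform(self, X):
        X = np.asarray(X, float)
        out = np.empty_like(X)
        for j, ref in enumerate(self.sorted_):
            # relative position of each value within the training distribution
            rank = np.searchsorted(ref, X[:, j]) / len(ref)
            out[:, j] = erfinv(np.clip(rank * 2 - 1, -1 + 1e-9, 1 - 1e-9))
        return out

# scaling happens inside the pipeline, so fit/predict stay one-liners
pipe = Pipeline([('scale', SimpleGaussRank()), ('model', Ridge())])
rng = np.random.default_rng(0)
X = rng.exponential(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])
pipe.fit(X, y)
pred = pipe.predict(X)
```

Fitting the scaler on the training split and reusing it for the test split, as in the demo above, is exactly what the pipeline does for you automatically during cross-validation.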