Tags: python, pandas, dataframe, optimization, standardization

Faster method of standardizing DF


I have a df with roughly 3000 columns (variables) and 14000 rows (datapoints).

I need to standardize each column both within its id group and across the whole df, creating 6000 columns in total.

My current implementation is below:

col_names = df.columns.to_list()
col_names.remove('id')
for col in col_names:
    df[col + '_id'] = df.groupby('id')[col].transform(lambda x: (x - x.mean()) / x.std())
    df[col] = (df[col] - df[col].mean()) / df[col].std()

The above code takes forever to run.

Timing both operations separately shows that the groupby-transform is significantly slower.

Here is a simple example df and the desired output.

import pandas as pd

dic = {'id': [1,1,1, 2,2,2, 3,3,3,3,3, 4,4,4,4,4, 5,5,5,5],
       'a': [3,4,2,5,6,7,5,4,3,5,7,5,2,4,8,6,2,3,4,6],
       'b': [12,32,21,14,52,62,12,34,52,74,2,34,54,12,45,75,54,23,12,32]}
df = pd.DataFrame(dic)

col_names = df.columns.to_list()
col_names.remove('id')

for col in col_names:
    df[col + '_id'] = df.groupby('id')[col].transform(lambda x: (x - x.mean()) / x.std())
    df[col] = (df[col] - df[col].mean()) / df[col].std()

    id         a         b      a_id      b_id
0    1 -0.879967 -1.060367  0.000000 -0.965060
1    1 -0.312247 -0.154070  1.000000  1.031615
2    1 -1.447688 -0.652533 -1.000000 -0.066556
3    2  0.255474 -0.969737 -1.000000 -1.131971
4    2  0.823195  0.752226  0.000000  0.368549
5    2  1.390916  1.205374  1.000000  0.763422
6    3  0.255474 -1.060367  0.134840 -0.778742
7    3 -0.312247 -0.063441 -0.539360 -0.027324
8    3 -0.879967  0.752226 -1.213560  0.587472
9    3  0.255474  1.749152  0.134840  1.338890
10   3  1.390916 -1.513515  1.483240 -1.120296
11   4  0.255474 -0.063441  0.000000 -0.427765
12   4 -1.447688  0.842856 -1.341641  0.427765
13   4 -0.312247 -1.060367 -0.447214 -1.368847
14   4  1.958637  0.435022  1.341641  0.042776
15   4  0.823195  1.794467  0.447214  1.326070
16   5 -1.447688  0.842856 -1.024695  1.332707
17   5 -0.879967 -0.561904 -0.439155 -0.406826
18   5 -0.312247 -1.060367  0.146385 -1.024080
19   5  0.823195 -0.154070  1.317465  0.098199
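One way to sanity-check a result like the table above: after standardization, each original column should have (near-)zero mean and unit sample standard deviation across the whole df, and each `_id` column should have zero mean and unit std within every group. A minimal self-contained sketch using the example data:

```python
import pandas as pd

dic = {'id': [1,1,1, 2,2,2, 3,3,3,3,3, 4,4,4,4,4, 5,5,5,5],
       'a': [3,4,2,5,6,7,5,4,3,5,7,5,2,4,8,6,2,3,4,6],
       'b': [12,32,21,14,52,62,12,34,52,74,2,34,54,12,45,75,54,23,12,32]}
df = pd.DataFrame(dic)
col_names = [c for c in df.columns if c != 'id']

for col in col_names:
    df[col + '_id'] = df.groupby('id')[col].transform(lambda x: (x - x.mean()) / x.std())
    df[col] = (df[col] - df[col].mean()) / df[col].std()

for col in col_names:
    # whole-frame standardization: mean ~0, sample std ~1
    assert abs(df[col].mean()) < 1e-12
    assert abs(df[col].std() - 1) < 1e-12
    # within-group standardization: each group's mean ~0 and std ~1
    assert df.groupby('id')[col + '_id'].mean().abs().max() < 1e-12
    assert (df.groupby('id')[col + '_id'].std() - 1).abs().max() < 1e-12
```

Note that `Series.std()` uses the sample standard deviation (ddof=1) by default, which is why the checks compare against that rather than the population std.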


Solution

  • Try it without the for loop, transforming all the columns in one groupby call:

    df[[x + '_id' for x in col_names]] = df.groupby('id')[col_names].transform(lambda x: (x - x.mean()) / x.std())

    df[col_names] = (df[col_names] - df[col_names].mean()) / df[col_names].std()
    

    Output of df:

        id         a         b      a_id      b_id
    0    1 -0.879967 -1.060367  0.000000 -0.965060
    1    1 -0.312247 -0.154070  1.000000  1.031615
    2    1 -1.447688 -0.652533 -1.000000 -0.066556
    3    2  0.255474 -0.969737 -1.000000 -1.131971
    4    2  0.823195  0.752226  0.000000  0.368549
    5    2  1.390916  1.205374  1.000000  0.763422
    6    3  0.255474 -1.060367  0.134840 -0.778742
    7    3 -0.312247 -0.063441 -0.539360 -0.027324
    8    3 -0.879967  0.752226 -1.213560  0.587472
    9    3  0.255474  1.749152  0.134840  1.338890
    10   3  1.390916 -1.513515  1.483240 -1.120296
    11   4  0.255474 -0.063441  0.000000 -0.427765
    12   4 -1.447688  0.842856 -1.341641  0.427765
    13   4 -0.312247 -1.060367 -0.447214 -1.368847
    14   4  1.958637  0.435022  1.341641  0.042776
    15   4  0.823195  1.794467  0.447214  1.326070
    16   5 -1.447688  0.842856 -1.024695  1.332707
    17   5 -0.879967 -0.561904 -0.439155 -0.406826
    18   5 -0.312247 -1.060367  0.146385 -1.024080
    19   5  0.823195 -0.154070  1.317465  0.098199
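If the lambda itself is still the bottleneck, it can be avoided entirely: passing the string names `'mean'` and `'std'` to `GroupBy.transform` dispatches to pandas' built-in (cythonized) per-group reductions instead of calling a Python function once per group, which is typically much faster on many groups. A sketch of the same computation on the example data (the `add_suffix`/`join` step is one way to attach the new columns; `GroupBy.std` also defaults to ddof=1, matching `x.std()` in the lambda):

```python
import pandas as pd

dic = {'id': [1,1,1, 2,2,2, 3,3,3,3,3, 4,4,4,4,4, 5,5,5,5],
       'a': [3,4,2,5,6,7,5,4,3,5,7,5,2,4,8,6,2,3,4,6],
       'b': [12,32,21,14,52,62,12,34,52,74,2,34,54,12,45,75,54,23,12,32]}
df = pd.DataFrame(dic)
col_names = [c for c in df.columns if c != 'id']

g = df.groupby('id')[col_names]
# built-in per-group mean/std, broadcast back to the original shape
within_group = (df[col_names] - g.transform('mean')) / g.transform('std')
df = df.join(within_group.add_suffix('_id'))

# whole-frame standardization is already vectorized
df[col_names] = (df[col_names] - df[col_names].mean()) / df[col_names].std()
```

This produces the same `a_id`/`b_id` columns as the lambda version while keeping the per-group arithmetic in compiled code.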