python, pandas

Find average rate per group in specific years using groupby transform


I'm trying to find a better/faster way to do this. I have a rather large dataset (~200M rows) with individual dates per row. I want to find the average yearly rate (yearly 'foo' total divided by yearly 'bar' total) per group for 2018 and 2019. I know I could create a small df with the results and merge it back in, but I was trying to find a way to use transform; I'm not sure if merging would just be faster. Extra points for one-liners.

Sample data

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=123)
df = pd.DataFrame({'group':rng.choice(list('ABCD'), 100),
                   'date':[(pd.to_datetime('2018')+pd.Timedelta(days=x)).normalize() for x in rng.integers(0, 365*5, 100)],
                   'foo':rng.integers(1, 100, 100),
                   'bar':rng.integers(50, 200, 100)})
df['year'] = df['date'].dt.year

This works

#find per-group 2018 and 2019 'foo' and 'bar' totals
for col in ['foo', 'bar']:
    for y in [2018, 2019]:
        df[col+'_'+str(y)+'_total'] = df.groupby('group')['year'].transform(lambda x: df.loc[x.where(x==y).dropna().index, col].sum())

#find 2018 and 2019 rates
for y in [2018, 2019]:
    df['rate_'+str(y)] = df['foo_'+str(y)+'_total'].div(df['bar_'+str(y)+'_total'])

#find average rate
df['2018_2019_avg_rate'] = df[['rate_2018', 'rate_2019']].mean(axis=1)

Things I've tried that don't quite work (I'm using apply to test whether the logic works before switching to transform)

#gives yearly totals for each year and each column, but further apply-ing to find rates and then averaging doesn't work after I switch to transform
df.groupby(['group', 'year'])['year'].apply(lambda x: df.loc[x.where(x.between(2018, 2019)).dropna().index, ['foo', 'bar']].sum())

#close but is averaging too early
df.groupby(['group', 'year'])['year'].apply(lambda x: df.loc[i, 'foo'].sum()/denom if (denom:=df.loc[i:=x.where(x.between(2018, 2019)).dropna().index, 'bar'].sum())>0 else np.nan)

Solution

  • You can't perform multiple filterings/aggregations efficiently with a single groupby.transform; you will have to loop (see the sketch just below).
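
    If you do stick with transform, a sketch of a faster version of that loop (one transform call per column/year pair is still needed, but masking the off-year rows with where lets the built-in 'sum' replace the Python-level lambda):

    for col in ['foo', 'bar']:
        for y in [2018, 2019]:
            # NaN-out rows outside year y, then broadcast each group's sum
            df[f'{col}_{y}_total'] = (df[col].where(df['year'].eq(y))
                                             .groupby(df['group'])
                                             .transform('sum'))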

    A more efficient approach would be to combine a pivot_table + merge:

    cols = ['foo', 'bar']
    years = [2018, 2019]
    
    # per-group yearly sums of 'foo' and 'bar' (one column per (col, year) pair),
    # with per-year 'rate' columns computed as foo/bar
    tmp = (df[df['year'].isin(years)]
           .pivot_table(index='group', columns='year',
                        values=cols, aggfunc='sum')
           [cols]  # restore ['foo', 'bar'] order (pivot_table sorts columns)
           .pipe(lambda x: x.join(pd.concat({'rate': x['foo'].div(x['bar'])}, axis=1)))
          )
    
    # mean of the per-year rates (rates first, then average)
    avg_rate = tmp['rate'].mean(axis=1)
    
    # flatten the MultiIndex columns to 'foo_2018_total', 'rate_2019_total', etc.
    tmp.columns = tmp.columns.map(lambda x: f'{x[0]}_{x[1]}_total')
    
    tmp[f'{"_".join(map(str, years))}_avg_rate'] = avg_rate
    
    out = df.merge(tmp, left_on='group', right_index=True)
    

    Output:

       group       date  foo  bar  year  foo_2018_total  foo_2019_total  bar_2018_total  bar_2019_total  rate_2018_total  rate_2019_total  2018_2019_avg_rate
    0      A 2022-03-11   59   91  2022             343             270             972             875         0.352881         0.308571            0.330726
    1      C 2018-08-22   56   52  2018             175             325             331             902         0.528701         0.360310            0.444506
    2      C 2019-04-24   47   89  2019             175             325             331             902         0.528701         0.360310            0.444506
    3      A 2019-04-16   43  102  2019             343             270             972             875         0.352881         0.308571            0.330726
    4      D 2019-11-25    3   56  2019             126             222             224             696         0.562500         0.318966            0.440733
    5      A 2018-01-06   86  148  2018             343             270             972             875         0.352881         0.308571            0.330726
    ...
    99     B 2018-02-25   32   90  2018             253             204             703             400         0.359886         0.510000            0.434943
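
    As a quick arithmetic spot-check, group A's figures from the table above confirm the averaging order (per-year rates first, then their mean):

    # values taken from the printed output for group 'A'
    r18 = 343 / 972          # foo_2018_total / bar_2018_total ≈ 0.352881
    r19 = 270 / 875          # foo_2019_total / bar_2019_total ≈ 0.308571
    print((r18 + r19) / 2)   # ≈ 0.330726, matching 2018_2019_avg_rate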