python, pandas, dataframe, graphlab, sframe

Transforming grouped data by converting grouping categories into columns (using a GraphLab SFrame or pandas DataFrame)


I have the following records grouped by the user_id and action columns.

user_id | action | count
1       | read   | 15
1       | write  | 5
1       | delete | 7
2       | write  | 2
3       | read   | 9
3       | write  | 1
3       | delete | 2

I want to convert this table into the following format, where each action is now a column and the rows hold the count values.

user_id | read | write | delete
1       | 15   | 5     | 7
2       | 0    | 2     | 0
3       | 9    | 1     | 2

I know how to do this using loops, but I am curious whether there is a more efficient way of doing this with a GraphLab Create SFrame or a pandas DataFrame.

I appreciate any help!


Solution

  • You can use pivot with fillna and finally cast the floats to int with astype:

    df = df.pivot(index='user_id', columns='action', values='count').fillna(0).astype(int)
    print (df)
    action   delete  read  write
    user_id
    1             7    15      5
    2             0     0      2
    3             2     9      1
    

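    To reproduce the exact layout from the question (user_id as an ordinary column
    and the read/write/delete column order), one possible follow-up on the pivoted
    df above is this small sketch; reset_index and rename_axis are standard pandas
    calls, and the column order is taken from the question:

    df = df.reset_index().rename_axis(None, axis=1)
    df = df[['user_id', 'read', 'write', 'delete']]
    print (df)
       user_id  read  write  delete
    0        1    15      5       7
    1        2     0      2       0
    2        3     9      1       2
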
    Another solution with set_index and unstack:

    df = df.set_index(['user_id','action'])['count'].unstack(fill_value=0)
    print (df)
    action   delete  read  write
    user_id
    1             7    15      5
    2             0     0      2
    3             2     9      1
    

    If there are duplicate user_id and action pairs, pivot or unstack cannot be used directly; instead, use groupby with an aggregation such as mean or sum and reshape with unstack:

    df = df.groupby(['user_id','action'])['count'].mean().unstack(fill_value=0)
    print (df)
    action   delete  read  write
    user_id
    1             7    15      5
    2             0     0      2
    3             2     9      1
    

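    pivot_table (timed below) handles duplicates in the same way through its
    aggregation function; assuming the default mean aggregation, a roughly
    equivalent call to the groupby solution above is:

    df = df.pivot_table(values='count', index='user_id', columns='action', fill_value=0)
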
    Timings:

    # random dataframe for timing
    import numpy as np
    import pandas as pd

    np.random.seed(100)
    N = 10000
    df = pd.DataFrame(np.random.randint(100, size=(N,3)), columns=['user_id','action', 'count'])
    # [10000 rows x 3 columns]
    print (df)
    
    In [124]: %timeit (df.groupby(['user_id','action'])['count'].mean().unstack(fill_value=0))
    100 loops, best of 3: 5.5 ms per loop
    
    In [125]: %timeit (df.pivot_table('count', 'user_id', 'action', fill_value=0))
    10 loops, best of 3: 35.9 ms per loop
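
    For completeness, a minimal self-contained sketch that rebuilds the sample
    table from the question and applies the set_index/unstack approach:

    import pandas as pd

    # sample records copied from the question
    df = pd.DataFrame({
        'user_id': [1, 1, 1, 2, 3, 3, 3],
        'action':  ['read', 'write', 'delete', 'write', 'read', 'write', 'delete'],
        'count':   [15, 5, 7, 2, 9, 1, 2],
    })

    print (df.set_index(['user_id', 'action'])['count'].unstack(fill_value=0))
    action   delete  read  write
    user_id
    1             7    15      5
    2             0     0      2
    3             2     9      1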