python pandas dictionary swifter

Flatten pandas dataframe column containing list of dictionaries


I am flattening a dataframe in which one column contains a list of dictionaries. I have written code for it, but it takes around 25 seconds to process only 5,000 rows, which is far too slow.

Here is the sample dataset:

event_date  timestamp   event_name      user_properties
20191117    1.57401E+15 user_engagement [{'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'first_open_time', 'value': {'string_value': None, 'int_value': 1573974000000, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1573971590380000}}]
20191117    1.57401E+15 screen_view     [{'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'first_open_time', 'value': {'string_value': None, 'int_value': 1573974000000, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1573971590380000}}]
20191117    1.57401E+15 user_engagement [{'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'first_open_time', 'value': {'string_value': None, 'int_value': 1573974000000, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1573971590380000}}]
20191117    1.57401E+15 user_engagement [{'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'first_open_time', 'value': {'string_value': None, 'int_value': 1573974000000, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1573971590380000}}]
20191117    1.57401E+15 user_engagement [{'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1574005142713000}}, {'key': 'first_open_time', 'value': {'string_value': None, 'int_value': 1573974000000, 'float_value': None, 'double_value': None, 'set_timestamp_micros': 1573971590380000}}]

Here is the parsed dataframe:

[image: flattened dataframe]

The result uses each 'key' as a column name; however, when the value dictionary contains a 'set_timestamp_micros' entry, that timestamp goes into an additional column named {key}.set_timestamp_micros.
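
For example, the first sample row should flatten to roughly the following columns and values (taken from the data above, alongside the original event_date, timestamp and event_name columns):

ga_session_id  ga_session_id.set_timestamp_micros  ga_session_number  ga_session_number.set_timestamp_micros  first_open_time  first_open_time.set_timestamp_micros
1574005142     1574005142713000                    5                  1574005142713000                        1573974000000    1573971590380000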

Here is the code to flatten the dataframe:

def normalize_complex_column_v2(df, df_copy, column):
    for index, row in df.iterrows():
        for element in row[column]:
            # column names: the 'key' itself plus '{key}.set_timestamp_micros'
            cols = [element['key']]
            cols += ["%s.%s" % (element['key'], key) for key in element['value'].keys() if 'timestamp' in key]
            # add any new columns, keeping the existing order and dropping duplicates
            df_copy = df_copy.reindex(columns=list(dict.fromkeys(df_copy.columns.tolist() + cols)))
            # write the non-null values of this element into the current row
            df_copy.loc[index, cols] = [value for key, value in element['value'].items() if value is not None]
    df_copy.drop([column], axis=1, inplace=True)
    return df_copy
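
For reference, I call it roughly like this (column name taken from the sample above; df_copy starts out as a copy of df):

df_flat = normalize_complex_column_v2(df, df.copy(), 'user_properties')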

How do I optimize this code?

UPDATE: Is there any way I can use swifter to optimize my function?
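
This is roughly what I had in mind (untested sketch; flatten_row is a helper I would write, and it assumes every 'value' dict has a 'set_timestamp_micros' entry plus exactly one other non-null field; df is the dataframe shown above):

import pandas as pd
import swifter  # importing swifter registers the .swifter accessor on Series/DataFrame

def flatten_row(elements):
    # Flatten one list of {'key': ..., 'value': {...}} dicts into a single flat dict.
    flat = {}
    for element in elements:
        key, value = element['key'], element['value']
        flat[key + '.set_timestamp_micros'] = value['set_timestamp_micros']
        # the first non-null, non-timestamp field becomes the value for this key
        flat[key] = next(v for k, v in value.items() if v is not None and 'timestamp' not in k)
    return flat

flattened = df['user_properties'].swifter.apply(flatten_row)
flat_df = pd.DataFrame(flattened.tolist(), index=df.index)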

Issue with Numba:

<ipython-input-101-15265d3af7fb>:1: NumbaWarning: 
Compilation is falling back to object mode WITH looplifting enabled because Function "flatten_dataframe_column" failed type inference due to: Untyped global name 'defaultdict': cannot determine Numba type of <class 'type'>

File "<ipython-input-101-15265d3af7fb>", line 4:
def flatten_dataframe_column(df,column,fetch_timestamp=True):
    <source elided>
    temp_dict = df[column].to_dict()
    new_dict = defaultdict(dict)

LoweringError: Failed in object mode pipeline (step: object mode backend)
$22.3.182

File "<ipython-input-101-15265d3af7fb>", line 16:
def flatten_dataframe_column(df,column,fetch_timestamp=True):
    <source elided>
                        elements['key'] : [value for key,value in elements['value'].items() \
                                                    if (value is not None and 'timestamp' not in key)][0]
                                                    ^

[1] During: lowering "$22.3.182 = unary(fn=<built-in function not_>, value=$22.3.182)" at <ipython-input-101-15265d3af7fb> (16)

-------------------------------------------------------------------------------
This should not have happened, a problem has occurred in Numba's internals.
You are currently using Numba version 0.47.0.

Please report the error message and traceback, along with a minimal reproducer
at: https://github.com/numba/numba/issues/new

If more help is needed please feel free to speak to the Numba core developers
directly at: https://gitter.im/numba/numba

Thanks in advance for your help in improving Numba!

)

Solution

  • I converted the dataframe column to a dictionary, processed the data there, converted the processed dictionary back to a dataframe, and joined it with the original dataframe on the index. It took around 8 seconds to process 500K records (a usage sketch follows after the code).

    from collections import defaultdict
    import pandas as pd

    def flatten_dataframe_column(df, column):
        # {row_index: list_of_element_dicts}
        temp_dict = df[column].to_dict()
        new_dict = defaultdict(dict)
        for item in temp_dict.items():
            for elements in item[1]:
                # '{key}.set_timestamp_micros' column
                new_dict[item[0]].update(
                    {
                        (elements['key'] + '.set_timestamp_micros'): elements['value']['set_timestamp_micros']
                    }
                )
                # '{key}' column: the first non-null, non-timestamp field
                new_dict[item[0]].update(
                    {
                        elements['key']: [value for key, value in elements['value'].items()
                                          if (value is not None and 'timestamp' not in key)][0]
                    }
                )
        return pd.DataFrame.from_dict(new_dict, orient='index')
    

    If anyone can think of a more optimal solution, please post it.
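
    For completeness, here is roughly how I call it and join the flattened result back to the original dataframe on the index (column name taken from the sample above):

    flat = flatten_dataframe_column(df, 'user_properties')
    result = df.drop(columns=['user_properties']).join(flat)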