Tags: python, pandas, dataframe, google-bigquery

Erroneous pandas rolling results with a time window on a grouped dataframe imported from BigQuery


I would like to preface this by apologizing for the lack of reproducibility in my question: if I convert my dataframe to a dictionary and turn that back into a dataframe, the issue disappears.

Nevertheless, this is the query I am using on BigQuery:

sql = """
SELECT
  published_at,
  from_author_id,
  text
FROM
  `project.message.message`
"""

I then turn it into a dataframe using

from google.cloud import bigquery

client = bigquery.Client(location="europe-west1", project="project")
df = client.query(sql).to_dataframe()
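
As a quick sanity check (knowing the answer further down, this is where the problem is already visible), printing the dtypes right after the load shows the resolution of the timestamp column; in my case the BigQuery client returned a microsecond-resolution timestamp:

# Inspect the dtypes of the loaded frame; published_at comes back as
# datetime64[us, UTC] rather than the usual datetime64[ns, UTC]
print(df.dtypes)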

Now running the following gives me an erroneous output:

import pandas as pd
#df['published_at'] = pd.to_datetime(df['published_at'])
df = df.sort_values(by=['from_author_id', 'published_at'])
df.groupby('from_author_id').rolling('3s', on='published_at')['text'].count()

Uncommenting the .to_datetime() line has no impact on the result of the rolling function.

from_author_id                        published_at                    
0001fcf4-94f5-4e42-8444-0cb6c2870bdc  2024-08-19 18:28:50.197000+00:00    1.0
                                      2024-08-19 18:33:26.837000+00:00    2.0
                                      2024-08-19 18:33:42.960000+00:00    3.0
                                      2024-08-19 18:33:57.083000+00:00    4.0
                                      2024-08-19 18:34:18.863000+00:00    5.0
                                                                         ... 
fff7a574-a2fe-4eac-b7c6-d5de8dc5ff0c  2024-08-19 16:26:24.252000+00:00    6.0
                                      2024-08-19 16:32:40.697000+00:00    7.0
                                      2024-08-19 16:32:42.013000+00:00    8.0
                                      2024-08-19 18:09:03.469000+00:00    1.0
                                      2024-08-19 18:09:04.979000+00:00    2.0

As you can see, there are more than 3 seconds between each of the first author's messages, so the rolling count should be 1 for every row.
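
For anyone trying to reproduce this without BigQuery access, here is a minimal sketch of what I believe is going on (the author ID, timestamps, and text are made up, and the behaviour depends on the pandas version):

import pandas as pd

# The same three timestamps, minutes apart, in both resolutions
times = pd.to_datetime(
    ["2024-08-19 18:28:50", "2024-08-19 18:33:26", "2024-08-19 18:40:00"],
    utc=True,
)
df_us = pd.DataFrame({
    "published_at": times.astype("datetime64[us, UTC]"),
    "from_author_id": "a",
    "text": "x",
})
df_ns = df_us.assign(published_at=df_us["published_at"].astype("datetime64[ns, UTC]"))

# On an affected pandas version, the microsecond frame counts messages that
# are minutes apart inside the '3s' window, while the nanosecond frame
# returns 1.0 for every row as expected
print(df_us.groupby("from_author_id").rolling("3s", on="published_at")["text"].count())
print(df_ns.groupby("from_author_id").rolling("3s", on="published_at")["text"].count())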

Interestingly, this function does produce the desired output:

def compute_correct_rolling_count(df, window_seconds=3):
    """Count, for each message, the same author's messages in the preceding window."""
    msg_counts = []
    for _, group_df in df.groupby('from_author_id'):
        count_list = []
        for i in range(len(group_df)):
            # Count this author's messages with published_at in (t - window, t]
            start_time = group_df.iloc[i]['published_at'] - pd.Timedelta(seconds=window_seconds)
            count = group_df[(group_df['published_at'] > start_time) & (group_df['published_at'] <= group_df.iloc[i]['published_at'])].shape[0]
            count_list.append(count)
        # Works because df was sorted by from_author_id, published_at above,
        # so the concatenated group results line up with df's row order
        msg_counts.extend(count_list)
    return msg_counts

# Compute the rolling count within a 3-second window for each author
df['msg_count_last_3secs'] = compute_correct_rolling_count(df, window_seconds=3)
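
As an aside, the Python-level double loop gets slow on large frames; the same count can be computed per author with searchsorted. This is only a sketch under the same assumption as above (df sorted by from_author_id, then published_at), and the helper name is mine:

def rolling_count_searchsorted(df, window_seconds=3):
    # Same semantics as compute_correct_rolling_count: for each row, count
    # messages by the same author with published_at in (t - window, t]
    window = pd.Timedelta(seconds=window_seconds)
    counts = []
    for _, g in df.groupby('from_author_id'):
        ts = g['published_at'].reset_index(drop=True)
        starts = ts.searchsorted(ts - window, side='right')  # rows with published_at <= t - window
        ends = ts.searchsorted(ts, side='right')             # rows with published_at <= t
        counts.extend(ends - starts)
    return counts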

Schema for table project.message.message:

  • published_at (TIMESTAMP)
  • from_author_id (STRING)
  • text (STRING)
  • other fields

Additionally, the table uses the default rounding mode (ROUNDING_MODE_UNSPECIFIED) and is partitioned by DAY on the field published_at.

I am using google-cloud-bigquery 3.25.0.


Solution

  • I believe the issue was that published_at had dtype datetime64[us, UTC] instead of datetime64[ns, UTC], as described in Confusing output with pandas rolling window with datetime64[us] dtype
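
If that is the cause, a minimal workaround (a sketch; I have only verified it on my own setup) is to cast the column to nanosecond resolution before applying the rolling window:

# Cast the microsecond-resolution column to nanoseconds before rolling;
# with datetime64[ns, UTC] the '3s' window behaves as expected
df['published_at'] = df['published_at'].astype('datetime64[ns, UTC]')

df = df.sort_values(by=['from_author_id', 'published_at'])
df.groupby('from_author_id').rolling('3s', on='published_at')['text'].count()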