I have a rather wide dataset (700k rows, 100+ columns) with multiple entity_id values and multiple datetime intervals per entity. There are many attr columns carrying different values. I am trying to cut those intervals so that each date in specific_dt becomes an interval boundary for every entity_id. When an interval is split, the newly created intervals inherit their parent's attr values.
Below is a small reproducible example:

import pandas as pd

have = {'entity_id': [1, 1, 2, 2],
        'start_date': ['2014-12-01 00:00:00', '2015-03-01 00:00:00', '2018-02-12 00:00:00', '2019-02-01 00:00:00'],
        'end_date': ['2015-02-28 23:59:59', '2015-05-31 23:59:59', '2019-01-31 23:59:59', '2023-05-28 23:59:59'],
        'attr1': ['A', 'B', 'D', 'J']}
have = pd.DataFrame(data=have)
have
entity_id start_date end_date attr1
0 1 2014-12-01 00:00:00 2015-02-28 23:59:59 A
1 1 2015-03-01 00:00:00 2015-05-31 23:59:59 B
2 2 2018-02-12 00:00:00 2019-01-31 23:59:59 D
3 2 2019-02-01 00:00:00 2023-05-28 23:59:59 J
# Specific dates to integrate
specific_dt = ['2015-01-01 00:00:00', '2015-03-31 00:00:00']
The expected output is the following:
want
entity_id start_date end_date attr1
0 1 2014-12-01 2014-12-31 23:59:59 A
0 1 2015-01-01 2015-02-28 23:59:59 A
1 1 2015-03-01 2015-03-30 23:59:59 B
1 1 2015-03-31 2015-05-31 23:59:59 B
2 2 2018-02-12 2019-01-31 23:59:59 D
3 2 2019-02-01 2023-05-28 23:59:59 J
I have been able to achieve the desired output with the following code:

# Create a list to store the new rows
new_rows = []

# Iterate through each row in the initial DataFrame
for index, row in have.iterrows():
    id_val = row['entity_id']
    start_date = pd.to_datetime(row['start_date'])
    end_date = pd.to_datetime(row['end_date'], errors='coerce')

    # Iterate through specific dates (assumed sorted ascending) and create new rows
    for date in specific_dt:
        specific_date = pd.to_datetime(date)
        # Check if the specific date is within the interval
        if start_date < specific_date < end_date:
            # Create a new row with all columns and append it to the list
            new_row = row.copy()
            new_row['start_date'] = start_date
            new_row['end_date'] = specific_date - pd.Timedelta(seconds=1)
            new_rows.append(new_row)
            # Update the start_date for the next iteration
            start_date = specific_date

    # Add the last part of the original interval as a new row
    new_row = row.copy()
    new_row['start_date'] = start_date
    new_row['end_date'] = end_date
    new_rows.append(new_row)

# Create a new DataFrame from the list of new rows
want = pd.DataFrame(data=new_rows)
However, it is extremely slow (10+ minutes) on my working dataset. Is it possible to optimize it (perhaps by getting rid of the for loops)?
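(I suspect part of the cost is calling pd.to_datetime on every row inside the loop; converting the columns once up front, as sketched below, should shave some of that off, though the row-wise iterrows loop itself remains.)

# Hypothetical tweak: do the datetime conversion once, vectorized,
# instead of once per row inside the loop
have["start_date"] = pd.to_datetime(have["start_date"])
have["end_date"] = pd.to_datetime(have["end_date"], errors="coerce")
specific_dates = [pd.to_datetime(d) for d in specific_dt]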
For reference, I am able to perform this in SAS in a matter of seconds using a simple data step (the example below handles one of the two specific dates to integrate).
data want;
    set have;
    by entity_id start_date end_date;
    if start_date < "31MAR2015"d < end_date then
        do;
            retain _start _end;
            _start = start_date;
            _end = end_date;
            end_date = "30MAR2015"d;
            output;
            start_date = "31MAR2015"d;
            end_date = _end;
            output;
        end;
    else output;
    drop _start _end;
run;
You can try this:
have["start_date"] = pd.to_datetime(have["start_date"])
have["end_date"] = pd.to_datetime(have["end_date"])
specific_dt = [
pd.to_datetime("2015-01-01 00:00:00"),
pd.to_datetime("2015-03-31 00:00:00"),
]
l = [have]
for dt in specific_dt:
mask = (have["start_date"] < dt) & (have["end_date"] > dt)
new_df = have.loc[mask]
have.loc[mask, "end_date"] = dt - pd.Timedelta(seconds=1)
new_df.loc[:, "start_date"] = dt
l.append(new_df)
want = pd.concat(l).sort_values(["entity_id", "attr1"])
entity_id start_date end_date attr1
0 1 2014-12-01 2014-12-31 23:59:59 A
0 1 2015-01-01 2015-02-28 23:59:59 A
1 1 2015-03-01 2015-03-30 23:59:59 B
1 1 2015-03-31 2015-05-31 23:59:59 B
2 2 2018-02-12 2019-01-31 23:59:59 D
3 2 2019-02-01 2023-05-28 23:59:59 J
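One caveat: the loop above never re-examines the pieces it appends to l, so if two of your cut dates can fall inside the same original interval, the second cut will miss the piece created by the first. If that case matters, a variant that re-splits the accumulated frame on every pass (starting again from the original have) handles it. This is just a sketch reusing the same column names; the repeated pd.concat copies the frame once per cut date, which is still only a handful of vectorized passes:

want = have.copy()
for dt in specific_dt:
    # Every current piece is re-tested, including pieces created by
    # earlier cuts, so the order of the dates does not matter
    mask = (want["start_date"] < dt) & (want["end_date"] > dt)
    new_df = want.loc[mask].copy()
    want.loc[mask, "end_date"] = dt - pd.Timedelta(seconds=1)
    new_df["start_date"] = dt
    want = pd.concat([want, new_df])
want = want.sort_values(["entity_id", "start_date"])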