I've got a table that looks something like this:
create table room_server_metrics (
    namespace varchar(36) not null,
    session_id varchar(36) not null,
    primary key (namespace, session_id),
    created_at timestamptz not null default now(),
    updated_at timestamptz not null default now()
);
When I try to sum the length of all sessions, I get this:
database=> select sum(updated_at - created_at) as total_time from room_server_metrics;
total_time
----------------------------
94 days 60951:01:56.381483
94 days + 60951 hours is ~2633 days.
I wasn't sure whether the output was just showing 94 days to make it easier to read, but if I print the delta in seconds instead, I get this:
database=> select extract(epoch from sum(updated_at - created_at)) as total_time from room_server_metrics;
total_time
----------------------------
227546617.181704
227546617.181704 seconds is also ~2633 days.
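Quick sanity check, just arithmetic on the two outputs above:
select 94 + 60951 / 24.0          as days_from_interval_output,  -- ~2633.6
       227546617.181704 / 86400.0 as days_from_epoch_seconds;    -- ~2633.6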
Why doesn't the first query roll those overflowing hours into days past the 94? I can't for the life of me figure this one out haha.
Max
Thanks to Adrian's comment and some research in the Postgres docs, I found this:
Internally interval values are stored as months, days, and seconds. This is done because the number of days in a month varies, and a day can have 23 or 25 hours if a daylight savings time adjustment is involved. The months and days fields are integers while the seconds field can store fractions. Because intervals are usually created from constant strings or timestamp subtraction, this storage method works well in most cases. Functions justify_days and justify_hours are available for adjusting days and hours that overflow their normal ranges.
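To see those separate fields in action, here are a couple of standalone interval expressions (not tied to my table, just illustrating the storage behaviour):
select interval '1 day' + interval '25 hours';   -- 1 day 25:00:00, the hours field is not rolled into days
select justify_hours(interval '1 day 25:00:00'); -- 2 days 01:00:00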
It seems Postgres takes each row's interval and sums the days and hours/minutes/seconds fields separately, without rolling overflowing hours into days. In this case, the day fields across all the rows added up to 94 days, while everything else piled up in the hours field. Using the justify_hours function does indeed solve the problem:
database=> select justify_hours(sum(updated_at - created_at)) as total_time from room_server_metrics;
total_time
---------------------------
2637 days 08:33:22.773599
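And here's a tiny self-contained reproduction for anyone else who hits this (toy intervals, not my real data), showing the field-by-field sum and the justify_hours fixup side by side:
select sum(d)                as raw_sum,    -- 2 days 40:00:00
       justify_hours(sum(d)) as justified   -- 3 days 16:00:00
from (values (interval '1 day 20:00:00'),
             (interval '1 day 20:00:00')) as t(d);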