postgresql · prometheus · timescaledb · pg-stat-statements · prometheus-postgres-exporter

How to get pg_stat_user_tables n_tup_ins for timescale's compressed table?


We have the Prometheus Postgres Exporter set up and expect to be able to get stats on rows inserted into a table:

pg_stat_user_tables:
      query: |
       SELECT
         current_database() datname,
         schemaname,
         relname,
         seq_scan,
         seq_tup_read,
         idx_scan,
         idx_tup_fetch,
         n_tup_ins,
         n_tup_upd,
         n_tup_del,
         n_tup_hot_upd,
         n_live_tup,
         n_dead_tup,
         n_mod_since_analyze,
         COALESCE(last_vacuum, '1970-01-01Z') as last_vacuum,
         COALESCE(last_autovacuum, '1970-01-01Z') as last_autovacuum,
         COALESCE(last_analyze, '1970-01-01Z') as last_analyze,
         COALESCE(last_autoanalyze, '1970-01-01Z') as last_autoanalyze,
         vacuum_count,
         autovacuum_count,
         analyze_count,
         autoanalyze_count
       FROM
         pg_stat_user_tables
      metrics:
        - datname:
            usage: "LABEL"
            description: "Name of current database"
        - schemaname:
            usage: "LABEL"
            description: "Name of the schema that this table is in"
        - relname:
            usage: "LABEL"
            description: "Name of this table"
        - seq_scan:
            usage: "COUNTER"
            description: "Number of sequential scans initiated on this table"
        - seq_tup_read:
            usage: "COUNTER"
            description: "Number of live rows fetched by sequential scans"
        - idx_scan:
            usage: "COUNTER"
            description: "Number of index scans initiated on this table"
        - idx_tup_fetch:
            usage: "COUNTER"
            description: "Number of live rows fetched by index scans"
        - n_tup_ins:
            usage: "COUNTER"
            description: "Number of rows inserted"
        - n_tup_upd:
            usage: "COUNTER"
            description: "Number of rows updated"
        - n_tup_del:
            usage: "COUNTER"
            description: "Number of rows deleted"
        - n_tup_hot_upd:
            usage: "COUNTER"
            description: "Number of rows HOT updated (i.e., with no separate index update required)"
        - n_live_tup:
            usage: "GAUGE"
            description: "Estimated number of live rows"
        - n_dead_tup:
            usage: "GAUGE"
            description: "Estimated number of dead rows"
        - n_mod_since_analyze:
            usage: "GAUGE"
            description: "Estimated number of rows changed since last analyze"
        - last_vacuum:
            usage: "GAUGE"
            description: "Last time at which this table was manually vacuumed (not counting VACUUM FULL)"
        - last_autovacuum:
            usage: "GAUGE"
            description: "Last time at which this table was vacuumed by the autovacuum daemon"
        - last_analyze:
            usage: "GAUGE"
            description: "Last time at which this table was manually analyzed"
        - last_autoanalyze:
            usage: "GAUGE"
            description: "Last time at which this table was analyzed by the autovacuum daemon"
        - vacuum_count:
            usage: "COUNTER"
            description: "Number of times this table has been manually vacuumed (not counting VACUUM FULL)"
        - autovacuum_count:
            usage: "COUNTER"
            description: "Number of times this table has been vacuumed by the autovacuum daemon"
        - analyze_count:
            usage: "COUNTER"
            description: "Number of times this table has been manually analyzed"
        - autoanalyze_count:
            usage: "COUNTER"
            description: "Number of times this table has been analyzed by the autovacuum daemon"

But for the tables that are compressed we are not getting any stats, as if they aren't calculated by Timescale.

The table is created as below:

-- Create hypertable
SELECT create_hypertable('our_table', 'timestamp', 'unit_id', 1,
if_not_exists => true, chunk_time_interval => INTERVAL '1 day'
);

-- Add compression policy
SELECT add_compression_policy('our_table', INTERVAL '1 day');

-- Enable compression
ALTER TABLE our_table SET (
 timescaledb.compress,
 timescaledb.compress_segmentby = 'unit_id'
);

-- Compress chunks older than 1 day
SELECT compress_chunk(i)
FROM show_chunks('our_table', older_than => INTERVAL '1 day') i;

Which pg_stat view can we use to get the rows inserted for this kind of table? Is it even possible?

It is clearly seen that the stats are calculated for the individual chunks, and we do see the chunk insert rate. Is there a way to query something in Timescale to see inserts into the logical table (i.e. stats for our_table, not its chunks)?

pg_stat_user_tables_n_tup_ins{datname="postgres",relname="_hyper_1_489_chunk",schemaname="_timescaledb_internal",server="timescaledb2:5432"} 92106
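One possible workaround is to roll the per-chunk counters up to the hypertable yourself. A minimal sketch, assuming TimescaleDB 2.x where the timescaledb_information.chunks view exposes chunk_schema and chunk_name alongside the hypertable name (note that chunks dropped by a retention policy take their counters with them):

-- Sketch: sum chunk-level insert counters per hypertable.
-- Assumes TimescaleDB 2.x column names in timescaledb_information.chunks.
SELECT
  c.hypertable_schema AS schemaname,
  c.hypertable_name   AS relname,
  sum(s.n_tup_ins)    AS n_tup_ins
FROM timescaledb_information.chunks c
JOIN pg_stat_user_tables s
  ON s.schemaname = c.chunk_schema
 AND s.relname    = c.chunk_name
GROUP BY c.hypertable_schema, c.hypertable_name;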

Solution

  • I'm not sure I understood what you mean by "all affected tables", but to get all hypertables in a single query, you can cast the hypertable name to ::regclass. Example from a playground database with a few random hypertables:

    playground=# select hypertable_name, approximate_row_count(hypertable_name::regclass) from timescaledb_information.hypertables ;
    ┌──────────────────┬───────────────────────┐
    │ hypertable_name  │ approximate_row_count │
    ├──────────────────┼───────────────────────┤
    │ batteries        │                     0 │
    │ conditions       │               1000000 │
    │ default_forecast │                     0 │
    │ forecast         │               1321500 │
    │ metrics          │               3162240 │
    │ ticks            │                  7202 │
    └──────────────────┴───────────────────────┘
    (6 rows)
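
    If the goal is to expose this per-hypertable count through the exporter, the same cast can back a custom query. A rough sketch (the column aliases are only illustrative, chosen to mirror the labels used in the question's config, and format('%I.%I', ...) is used so the cast does not depend on search_path):

    -- Sketch of a query a custom exporter metric could run:
    -- approximate_row_count() is the documented TimescaleDB helper used above.
    SELECT
      current_database() AS datname,
      hypertable_schema  AS schemaname,
      hypertable_name    AS relname,
      approximate_row_count(
        format('%I.%I', hypertable_schema, hypertable_name)::regclass
      ) AS approx_row_count
    FROM timescaledb_information.hypertables;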