I am running QuestDB on a container, with this definition
questdb:
  image: questdb/questdb:8.2.1
  container_name: questdb_trades
  restart: always
  ports:
    - "${QUESTDB_HTTP_PORT}:9000" # Web Console, ILP over HTTP and REST API
    - "${QUESTDB_PG_PORT}:8812"   # PG port
  environment:
    QDB_HTTP_PORT: 9000
    QDB_PG_PORT: 8812
  volumes:
    - questdb:/root/.questdb
I would like to create a table on server startup
-- Table for 'kline'
CREATE TABLE kline (
  timestamp TIMESTAMP,                 -- K-line start time
  symbol SYMBOL INDEX CAPACITY 100000, -- Contract name (indexed for fast lookups)
  interval SYMBOL,                     -- K-line interval (e.g., Min1, Min5, Day1)
  open_price DOUBLE,                   -- Opening price
  close_price DOUBLE,                  -- Closing price
  high_price DOUBLE,                   -- Highest price
  low_price DOUBLE,                    -- Lowest price
  total_amount DOUBLE,                 -- Total transaction amount (a)
  total_volume DOUBLE,                 -- Total transaction volume (q)
  window_start TIMESTAMP,              -- Window start time (t)
  source_ts TIMESTAMP                  -- Source timestamp (ts)
) TIMESTAMP(timestamp) PARTITION BY MONTH;
What's the best and easiest way to automate this table migration in the database?
Since you are already using docker compose, the easiest option is to start a second service that depends on the questdb service and calls QuestDB's REST API.
It would be tempting to just change the entrypoint of the QuestDB service, but I prefer a separate service for cleanliness, and also because the questdb image ships only the minimum libraries: curl, wget, and similar tools are not included. I once worked around this using perl, which is available on the image, but it was messy, so I prefer a lightweight one-shot service on startup.
This is a minimum working docker-compose.yml
services:
  questdb:
    image: questdb/questdb:8.2.1
    container_name: questdb_trades
    restart: always
    ports:
      - "${QUESTDB_HTTP_PORT}:9000"    # Web Console, ILP over HTTP and REST API
      - "${QUESTDB_PG_PORT}:8812"      # PG port
      - "${QUESTDB_METRICS_PORT}:9003" # Metrics and health check
    environment:
      QDB_HTTP_PORT: 9000
      QDB_PG_PORT: 8812
      QDB_METRICS_PORT: 9003

  table_creator:
    image: curlimages/curl
    depends_on:
      - questdb
    command: |
      /bin/sh -c "
      # wait for the questdb server to be fully operational
      while ! curl -s questdb:9003; do
        sleep 1;
      done;
      # issue the desired API call
      curl -G http://questdb:9000/exec --data-urlencode 'query=SHOW TABLES;'
      "
The changes I made were opening port 9003 on the questdb container (so the health check is reachable) and adding the curl container, which waits for the health endpoint and then issues whichever command we need.
We could also mount a volume into the curl container, so the startup query lives on the local filesystem and curl can read the query body from that file on startup.
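A minimal sketch of that call, assuming the query file is mounted at the hypothetical path /migrations/001_kline.sql inside the curl container; curl's `name@filename` form of --data-urlencode reads the file and URL-encodes its contents:

```shell
# Send the contents of a mounted SQL file as the 'query' parameter.
curl -G http://questdb:9000/exec \
  --data-urlencode 'query@/migrations/001_kline.sql'
```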
Note that the QuestDB API executes only one SQL statement per call, so we need to issue as many REST calls as there are tables to create. We could keep one file per statement on the local filesystem and loop over them from a shell script, sending the statements one by one. If order is important, we can use prefixes such as 000_, 111_, 222_, 333_ so the script can easily sort them.
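Such a loop could look like the sketch below (the /migrations mount point is an assumption); a shell glob expands in lexicographic order, so numeric prefixes are enough to guarantee ordering:

```shell
#!/bin/sh
# Apply every migration file in sorted (prefix) order,
# one REST call per SQL statement.
for f in /migrations/*.sql; do
  echo "applying $f"
  curl -G http://questdb:9000/exec --data-urlencode "query@$f"
done
```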
Since this will run on every start of the stack, we should use CREATE TABLE IF NOT EXISTS, so the statement is a no-op, rather than an error, when the table is already there.
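Applied to the kline table from the question, the idempotent call could look like this sketch (host and port as in the compose file above):

```shell
# Create the kline table idempotently through the REST API.
# Safe to run on every stack start thanks to IF NOT EXISTS.
curl -G http://questdb:9000/exec --data-urlencode 'query=
CREATE TABLE IF NOT EXISTS kline (
  timestamp TIMESTAMP,
  symbol SYMBOL INDEX CAPACITY 100000,
  interval SYMBOL,
  open_price DOUBLE,
  close_price DOUBLE,
  high_price DOUBLE,
  low_price DOUBLE,
  total_amount DOUBLE,
  total_volume DOUBLE,
  window_start TIMESTAMP,
  source_ts TIMESTAMP
) TIMESTAMP(timestamp) PARTITION BY MONTH;'
```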
Note also that I omitted environment variables on the second container; in practice we probably want to pass the host and ports as environment variables there too.
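A sketch of what that could look like (variable names here are illustrative, not QuestDB conventions). Note the `$$` escaping, which compose needs so the `$` reaches the shell instead of being interpolated on the host, and that these are the container-internal ports, not the host mappings:

```yaml
  table_creator:
    image: curlimages/curl
    depends_on:
      - questdb
    environment:
      QDB_HOST: questdb
      QDB_HTTP_PORT: 9000
      QDB_METRICS_PORT: 9003
    command: |
      /bin/sh -c "
      while ! curl -s $$QDB_HOST:$$QDB_METRICS_PORT; do sleep 1; done;
      curl -G http://$$QDB_HOST:$$QDB_HTTP_PORT/exec --data-urlencode 'query=SHOW TABLES;'
      "
```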