I am trying to import a CSV file into an existing table. The timestamps in the CSV file are in milliseconds. If I try to import using the REST API or the web interface, I get an error on all timestamp columns.
This is what my files look like:
id,exchange,symbol,date,price,amount,sell
829895171,bw,suiusdt_perp,1741219201694,2.6313,0.9,true
And this is the schema of my table:
CREATE TABLE crypto_trades_binance_btcusd_perp (
id LONG,
exchange SYMBOL,
symbol SYMBOL,
date TIMESTAMP,
price DOUBLE,
amount DOUBLE,
sell BOOLEAN
) TIMESTAMP(date) PARTITION BY MONTH
I know that when inserting via SQL statements I can cast values, e.g.
INSERT INTO crypto_trades_binance_btcusd_perp
VALUES (829895171, 'bw', 'suiusdt_perp', cast(1741219201694 * 1000 AS TIMESTAMP), 2.6313, 0.9, true)
Is it possible to cast the timestamp values when importing from a CSV with the REST API or web interface as well?
As of today, QuestDB does not support any flag for this kind of cast as part of CSV import.
Here are some options we have:
Pre-process the data set to convert the timestamps from milliseconds to microseconds. If you intend to import lots of data, one option is to create Parquet files, copy them to the QuestDB import folder, and read them with read_parquet('file.parquet'). You can then use an INSERT INTO ... SELECT to copy the rows into the target table.
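Assuming the Parquet file keeps the millisecond epoch values as LONGs and is named trades.parquet (both names here are placeholders), the copy step might look roughly like this:

```sql
-- Read the staged Parquet file and cast epoch millis (LONG) to
-- QuestDB TIMESTAMP, which is epoch micros, by multiplying by 1000.
INSERT INTO crypto_trades_binance_btcusd_perp
SELECT id, exchange, symbol,
       cast(date * 1000 AS TIMESTAMP) AS date,
       price, amount, sell
FROM read_parquet('trades.parquet');
```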
Import into a non-partitioned staging table with the column typed as DATE (which has millisecond precision), then INSERT INTO the partitioned table casting it to TIMESTAMP. You would be using twice the storage for a while, but you can drop the staging table afterwards.
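A sketch of the staging approach, assuming a hypothetical staging table named trades_staging; QuestDB converts DATE (millis) to TIMESTAMP (micros) on cast:

```sql
-- Staging table: no designated timestamp, 'date' kept as DATE
CREATE TABLE trades_staging (
  id LONG, exchange SYMBOL, symbol SYMBOL,
  date DATE, price DOUBLE, amount DOUBLE, sell BOOLEAN
);

-- After importing the CSV into trades_staging via REST/web interface:
INSERT INTO crypto_trades_binance_btcusd_perp
SELECT id, exchange, symbol, cast(date AS TIMESTAMP), price, amount, sell
FROM trades_staging;

DROP TABLE trades_staging;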
Read the CSV line by line, convert the timestamps, and send the rows via an ILP client.
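A minimal sketch of the ILP route in Python, assuming the official questdb client (pip install questdb) and a server reachable at localhost:9000; the file path and connection string are placeholders:

```python
# Stream a CSV and send rows over ILP, converting millisecond
# timestamps along the way.
import csv

def parse_row(row):
    """Convert one CSV row (all strings) into typed fields."""
    return {
        "id": int(row["id"]),
        "exchange": row["exchange"],
        "symbol": row["symbol"],
        "date_ms": int(row["date"]),  # epoch milliseconds
        "price": float(row["price"]),
        "amount": float(row["amount"]),
        "sell": row["sell"] == "true",
    }

def send_csv(path, conf="http::addr=localhost:9000;"):
    from questdb.ingress import Sender, TimestampNanos
    with Sender.from_conf(conf) as sender, open(path, newline="") as f:
        for raw in csv.DictReader(f):
            r = parse_row(raw)
            sender.row(
                "crypto_trades_binance_btcusd_perp",
                symbols={"exchange": r["exchange"], "symbol": r["symbol"]},
                columns={"id": r["id"], "price": r["price"],
                         "amount": r["amount"], "sell": r["sell"]},
                # designated timestamp: millis -> nanos for the client API
                at=TimestampNanos(r["date_ms"] * 1_000_000),
            )
```

The client batches rows internally and flushes on exit of the with block, so this also works for large files without loading them into memory.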