I want to be able to add a timestamp to the filename I'm writing to S3. So far I've been able to write files to AWS S3 using the example below. Can someone guide me on how to put a datetime stamp in the file name?
copy into @s3bucket/something.csv.gz
from (select * from mytable)
file_format = (type=csv FIELD_OPTIONALLY_ENCLOSED_BY = '"' compression='gzip' )
single=true
header=TRUE;
Thanks in advance.
The stage or location path portion of a COPY INTO statement cannot be defined dynamically with SQL functions.
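For instance, an attempt along these lines (concatenating a timestamp expression into the target path) is rejected with a syntax error:
-- Does NOT work: the target path must be a literal, not an expression
copy into @s3bucket/ || to_varchar(current_timestamp(), 'YYYYMMDD') || '_something.csv.gz'
from (select * from mytable)
file_format = (type=csv);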
However, you can use a stored procedure to build the query dynamically, using the JavaScript Date API and some string formatting.
Here's a very trivial example for your use case, with some code adapted from another question:
CREATE OR REPLACE PROCEDURE COPY_INTO_PROCEDURE_EXAMPLE()
RETURNS VARIANT
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
var rows = [];
var n = new Date();
// Zero-pad months, days, hours, minutes, and seconds to two digits
function pad(v) { return (v < 10 ? '0' : '') + v; }
var datetime = `${n.getFullYear()}-${pad(n.getMonth() + 1)}-${pad(n.getDate())}-${pad(n.getHours())}-${pad(n.getMinutes())}-${pad(n.getSeconds())}`;
var st = snowflake.createStatement({
  sqlText: `COPY INTO '@s3bucket/${datetime}_something.csv.gz' FROM (SELECT * FROM mytable) FILE_FORMAT=(TYPE=CSV FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION='gzip') SINGLE=TRUE HEADER=TRUE;`
});
var result = st.execute();
result.next();
rows.push(result.getColumnValue(1));
return rows;
$$;
To execute, run:
CALL COPY_INTO_PROCEDURE_EXAMPLE();
The above is still missing error handling (if the COPY INTO fails), parameterisation of the input query, etc., but it should give a general idea of how to achieve this.
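As a rough, untested sketch of the parameterisation point, here's a variant that accepts the stage path and source table as arguments (the names COPY_INTO_TIMESTAMPED, STAGE_PATH, and TABLE_NAME are made up for this example; note that interpolating arguments into SQL like this is open to injection, so validate inputs in real code):
CREATE OR REPLACE PROCEDURE COPY_INTO_TIMESTAMPED(STAGE_PATH STRING, TABLE_NAME STRING)
RETURNS VARIANT
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
function pad(v) { return (v < 10 ? '0' : '') + v; }
var n = new Date();
var datetime = `${n.getFullYear()}-${pad(n.getMonth() + 1)}-${pad(n.getDate())}-${pad(n.getHours())}-${pad(n.getMinutes())}-${pad(n.getSeconds())}`;
var rows = [];
try {
  // STAGE_PATH and TABLE_NAME are interpolated directly into the SQL text;
  // as written this is vulnerable to SQL injection, so validate upstream.
  var st = snowflake.createStatement({
    sqlText: `COPY INTO '${STAGE_PATH}/${datetime}_something.csv.gz' FROM (SELECT * FROM ${TABLE_NAME}) FILE_FORMAT=(TYPE=CSV FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION='gzip') SINGLE=TRUE HEADER=TRUE;`
  });
  var result = st.execute();
  result.next();
  rows.push(result.getColumnValue(1));
} catch (err) {
  // Surface the COPY INTO failure instead of swallowing it
  return "Failed: " + err.message;
}
return rows;
$$;
You would then call it with something like:
CALL COPY_INTO_TIMESTAMPED('@s3bucket', 'mytable');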