We have an application that migrates about a year's worth of existing application logs into Google Cloud Logging. I created a custom logging bucket with a retention period of 400 days and a custom sink that routes all entries for a given logName into this bucket. I also excluded this logName from the _Default bucket.
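For reference, here is a sketch of roughly what that setup looks like with the google-cloud-logging Python client; "my-project", "my-app-archive", "my-app-archive-sink", and the logName "my-app" are placeholders for our real values:

```python
# Sketch of the setup described above, using the google-cloud-logging Python
# client. All project, bucket, sink, and log names are placeholders.
from google.protobuf import field_mask_pb2

from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogBucket, LogExclusion, LogSink

config = ConfigServiceV2Client()

# Custom logging bucket with a 400-day retention period.
config.create_bucket(request={
    "parent": "projects/my-project/locations/global",
    "bucket_id": "my-app-archive",
    "bucket": LogBucket(retention_days=400),
})

# Custom sink routing all entries for the given logName into that bucket.
config.create_sink(request={
    "parent": "projects/my-project",
    "sink": LogSink(
        name="my-app-archive-sink",
        destination="logging.googleapis.com/projects/my-project/locations/global/buckets/my-app-archive",
        filter='logName="projects/my-project/logs/my-app"',
    ),
})

# Exclude the logName from the _Default bucket by adding an exclusion
# to the _Default sink.
default_sink = config.get_sink(request={"sink_name": "projects/my-project/sinks/_Default"})
default_sink.exclusions.append(
    LogExclusion(name="exclude-my-app", filter='logName="projects/my-project/logs/my-app"')
)
config.update_sink(request={
    "sink_name": "projects/my-project/sinks/_Default",
    "sink": default_sink,
    "update_mask": field_mask_pb2.FieldMask(paths=["exclusions"]),
})
```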
Now, when I write log entries for this logName with timestamps spread over the last year, all entries older than 30 days are silently discarded. I even increased the retention period of the _Default bucket to e.g. 60 days, but logs older than 30 days still cannot be written. The write call succeeds for every entry, yet entries older than 30 days never show up in the Logs Explorer.
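A minimal reproduction sketch, using the same placeholder names as above; every write call returns without error, but only the 1- and 29-day-old entries become visible:

```python
# Minimal reproduction, same placeholders as above. Every write call returns
# without error, but entries older than 30 days never show up.
import datetime

from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")
logger = client.logger("my-app")  # logName: projects/my-project/logs/my-app

now = datetime.datetime.now(datetime.timezone.utc)
for days_ago in (1, 29, 31, 200):
    ts = now - datetime.timedelta(days=days_ago)
    # Accepted by the API regardless of how old the timestamp is.
    logger.log_text(f"migrated entry from {days_ago} days ago", timestamp=ts)
```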
According to Routing - Log Retention, the log bucket should define the retention period:
Cloud Logging retains logs according to retention rules applying to the log bucket type where the logs are held.
Also, there doesn't seem to be any quota that would limit this.
Does anybody know why entries with timestamps older than 30 days are silently discarded despite a properly configured logging bucket and sink?
Or are there better solutions for importing logs into Cloud Logging without having to write a custom app?
Cloud Logging currently enforces time bounds on the timestamps of the LogEntries it can ingest into its storage: an entry is only stored if its timestamp falls within the last 30 days or up to 1 day in the future. This applies regardless of the bucket's retention period, even if it is set to 60 days or more.
This is a current limitation and may change in the future.
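If you still want to drive the migration yourself, one option is to check timestamps client-side before writing, since out-of-bounds entries are accepted by the write API but silently dropped. A minimal sketch of such a guard in Python (the function name is my own):

```python
# Client-side guard derived from the bounds above: entries with timestamps
# older than 30 days or more than 1 day in the future are accepted by the
# write API but silently dropped, so skip them before writing.
import datetime

PAST_LIMIT = datetime.timedelta(days=30)
FUTURE_LIMIT = datetime.timedelta(days=1)

def is_ingestible(ts: datetime.datetime) -> bool:
    now = datetime.datetime.now(datetime.timezone.utc)
    return now - PAST_LIMIT <= ts <= now + FUTURE_LIMIT
```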
Disclaimer: I work in Cloud Logging