I currently have an API project that acts as a kind of middleware: it receives requests, forwards them to other APIs, and caches some things in Redis for faster access. Because of that I don't really need a database and didn't set one up.

Since the app also sends out e-mails, I would like the queue to use Redis as well. As far as I understand, that is as simple as changing the queue driver to redis. What I haven't figured out is whether I need a database for the failed jobs (Laravel usually creates a failed_jobs table out of the box for that).

My further plan is to use Horizon as a dashboard for my queues. As far as I understand, Horizon also stores the failed jobs. I found a comment, written seven years ago, saying that they are flushed from time to time, so I am not sure whether something has changed since then.

So my question is: do I need to set up a database (e.g. MySQL) to make sure no failed jobs get lost, or can I solve this with Horizon?
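For context, this is the change I mean — a minimal sketch of the stock Laravel queue config (these are the default option names from a fresh install, not anything project-specific):

```php
// config/queue.php — minimal sketch using the stock Laravel option names.
// Setting QUEUE_CONNECTION=redis in .env selects the redis connection below.
'default' => env('QUEUE_CONNECTION', 'redis'),

'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',               // Redis connection from config/database.php
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90,                     // seconds before a stuck job is retried
        'block_for' => null,
    ],
],
```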
The FailedJobsController in Horizon uses the JobRepository, which is implemented by the RedisJobRepository.
So far I haven't set up the whole project because I couldn't figure out whether Horizon really flushes the failed jobs.
Yes, by default Horizon clears failed jobs older than 7 days. The implementation is found here. The listener is triggered on every loop of the master worker. You can increase this limit by setting horizon.trim.failed in the config.
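For reference, the trim windows live in config/horizon.php and are expressed in minutes; a sketch with the stock defaults (10080 minutes = 7 days):

```php
// config/horizon.php — trim windows, in minutes (stock defaults).
'trim' => [
    'recent' => 60,
    'pending' => 60,
    'completed' => 60,
    'recent_failed' => 10080,
    'failed' => 10080,        // raise this to keep failed jobs longer than 7 days
    'monitored' => 10080,
],
```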
That said, the trimming is there to avoid Redis running out of memory. Save yourself some headache and configure a database; SQLite is straightforward enough to set up.
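A minimal sketch of that setup, assuming a stock Laravel app: create the database file (`touch database/database.sqlite`), set `DB_CONNECTION=sqlite` in .env, and run `php artisan migrate` (recent Laravel versions ship the failed_jobs migration; on older ones, generate it first with `php artisan queue:failed-table`). The failed-job storage is then configured like this:

```php
// config/queue.php — failed job storage, using the stock option names.
// 'database-uuids' is the default driver on Laravel 8+; older versions
// use 'database' instead.
'failed' => [
    'driver' => env('QUEUE_FAILED_DRIVER', 'database-uuids'),
    'database' => env('DB_CONNECTION', 'sqlite'),
    'table' => 'failed_jobs',
],
```

With this in place the queue itself stays on Redis; only the failed jobs are persisted to SQLite, so nothing is lost when Horizon trims its own Redis copies.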