I'm currently developing an application that relies heavily on Heroku workers (executing a Node.js script; switching to Ruby/Rails is not an option) to handle long-running (1–168 hour) background jobs. The problem is that one job may finish in 1 hour while another takes 168, and I don't want to wait for every worker to finish before scaling down, since Heroku bills me for each worker's idle time in the meantime.
I have no issue with the dynos restarting once a day, but I'd like to know whether it's possible (and if so, how) to scale down a specific Heroku worker through the Heroku API or by any other means. Perhaps from within the worker process itself? Terminating the process from within only seems to make the worker restart, not scale itself down.
If this is not possible, I'd like to know whether anyone knows how to capture a "scale-down event" (i.e., is some signal, such as SIGTERM or SIGKILL, sent to the worker that's about to be scaled down?).
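For reference, Heroku does send SIGTERM to a dyno's processes when it is being shut down, and follows up with SIGKILL if the process hasn't exited within the grace period. A minimal Node.js sketch of catching it (the checkpointing step is a placeholder for whatever persistence your jobs need):

```javascript
// Track shutdown state so a repeated SIGTERM doesn't re-run cleanup.
let shuttingDown = false;

process.on('SIGTERM', () => {
  if (shuttingDown) return;
  shuttingDown = true;
  // Checkpoint in-progress job state here so another worker can resume it.
  console.log('SIGTERM received: checkpointing and shutting down');
  // Setting exitCode (rather than calling process.exit()) lets pending
  // I/O, such as the checkpoint write, finish before the process ends.
  process.exitCode = 0;
});
```

Note that SIGKILL cannot be caught, so all cleanup has to happen in the SIGTERM handler and finish within the grace period.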
Any advice at all is appreciated.
One way to do this would be to use one-off dynos.
The Heroku Platform API allows you to create a one-off dyno. Dynos created this way are not managed by Heroku's process supervisor, meaning that if they stop, they are not restarted automatically.
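A sketch of creating such a dyno via the Platform API's dyno-create endpoint (`POST /apps/{app}/dynos`). The app name, worker command, and dyno size below are placeholder assumptions; the request shape follows the v3 API:

```javascript
// Build the request for creating a one-off dyno. Kept as a pure function
// so the payload can be inspected separately from the network call.
function buildDynoCreateRequest(appName, command, apiKey) {
  return {
    url: `https://api.heroku.com/apps/${appName}/dynos`,
    options: {
      method: 'POST',
      headers: {
        // The v3 Platform API requires this Accept header.
        'Accept': 'application/vnd.heroku+json; version=3',
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      // `size` is optional; "standard-1x" here is just an example value.
      body: JSON.stringify({ command, size: 'standard-1x' }),
    },
  };
}

// Usage (requires a real API key, e.g. from `heroku auth:token`):
// const { url, options } = buildDynoCreateRequest(
//   'my-app', 'node worker.js --job 42', process.env.HEROKU_API_KEY);
// const res = await fetch(url, options);
```

When the command finishes and the process exits, the one-off dyno terminates and stops accruing dyno hours, which is exactly the behavior the question is after.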
So you could run a monitor process that checks how many dynos your app currently has (again via the Heroku API, which can list an app's dynos) and starts new ones as required.
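The monitor's check might look like the sketch below. It assumes the dyno objects returned by `GET /apps/{app}/dynos` carry `type` and `state` fields (one-off dynos have type `"run"`); `scaleCheck` and `desired` are hypothetical names:

```javascript
// Count one-off dynos that are currently executing.
function countRunningOneOffs(dynos) {
  return dynos.filter((d) => d.type === 'run' && d.state === 'up').length;
}

// Compare the running count against the desired worker count and
// return how many new one-off dynos the monitor should start.
async function scaleCheck(appName, apiKey, desired) {
  const res = await fetch(`https://api.heroku.com/apps/${appName}/dynos`, {
    headers: {
      'Accept': 'application/vnd.heroku+json; version=3',
      'Authorization': `Bearer ${apiKey}`,
    },
  });
  const running = countRunningOneOffs(await res.json());
  return Math.max(0, desired - running);
}
```

Running this on an interval (or triggered off your job queue's depth) keeps the fleet sized to the pending work without any manual `ps:scale` calls.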
Each dyno would perform the work you need it to perform and then shut itself down. The only dyno scaled up and running permanently would be the one hosting your monitor process.