One of my Sidekiq jobs is responsible for triggering a batch of other jobs. On the staging environment, the sheer volume of jobs being created is causing problems: one of the jobs calls an external API, and I'm getting a lot of rate limit errors.
I'm trying to find a way to mitigate this problem.
The only thing I've tried so far is letting the jobs retry, but some of them take a long time to succeed because they keep hitting the same rate limit error on each attempt.
I think the best option here is to spread your jobs over a time frame. That makes your Sidekiq workload a lot smoother while respecting the API's rate limits.
It also benefits the other jobs your application is running, since Sidekiq's threads won't all be tied up processing the jobs you enqueued in one burst.
Try something like this:

class GenericScheduleWorker
  include Sidekiq::Worker

  def perform
    User.all.find_each do |user|
      # Pick a random delay of up to one hour so the jobs are
      # spread across that window instead of enqueued all at once.
      seconds = rand(1..3600).seconds
      GenericWorker.perform_in(seconds, user.id)
    end
  end
end
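If a random spread still lets too many jobs land close together, a more deterministic option is to derive each job's delay from its position in the batch and the API's allowed rate. A minimal sketch (the `throttled_delay` helper and the 100-per-minute rate are made up for illustration; substitute your API's real limit):

```ruby
# Hypothetical helper: returns a delay in seconds for the i-th job so that
# at most `per_minute` jobs are scheduled into any one-minute window.
def throttled_delay(index, per_minute)
  (index * 60.0 / per_minute).ceil
end

# In the scheduler, use the batch index instead of rand:
#
#   User.all.find_each.with_index do |user, i|
#     GenericWorker.perform_in(throttled_delay(i, 100), user.id)
#   end
```

With this approach the 101st job is scheduled one minute out, the 201st two minutes out, and so on, so the batch can never exceed the limit no matter how unlucky the random draw would have been.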