redis · scheduled-tasks · python-rq

Python rq-scheduler: enqueue a failed job after some interval


I am using Python RQ to execute a job in the background. The job calls a third-party REST API and stores the response in the database (refer to the code below).

@classmethod
def fetch_resource(cls, resource_id):
    import requests

    clsmgr = cls(resource_id)

    # attach the signed auth headers required by the third-party API
    clsmgr.__sign_headers()

    res = requests.get(url=f'http://api.demo-resource.com/{resource_id}', headers=clsmgr._headers)

    if not res.ok:
        raise MyThirdPartyAPIException(res)

    ....
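
For context, a job like this is typically pushed onto an RQ queue along the lines of the snippet below; the ResourceManager class name, the queue name, and the example resource_id are assumptions for illustration, not from my actual code.

from redis import Redis
from rq import Queue

q = Queue('default', connection=Redis())

# Enqueue the classmethod as a background job; RQ stores the callable together
# with its args and kwargs on the Job object.
job = q.enqueue(ResourceManager.fetch_resource, 42)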

The third-party API enforces a rate limit of about 7 requests per minute. I have created a retry handler to gracefully handle the 429 Too Many Requests HTTP status code and re-queue the job after a minute (the interval changes based on the rate limit). To re-queue the job after some interval I am using rq-scheduler. Please find the handler code attached below:

from redis import Redis
from rq_scheduler import Scheduler


def retry_failed_job(job, exc_type, exc_value, traceback):

    if isinstance(exc_value, MyThirdPartyAPIException) and exc_value.status_code == 429:

        import datetime as dt

        sch = Scheduler(connection=Redis())

        # sch.enqueue_in(dt.timedelta(seconds=60), job.func_name, *job.args, **job.kwargs)

I am facing issues re-queueing the failed job back into the task queue, since I cannot directly call sch.enqueue_in(dt.timedelta(seconds=60), job) in the handler code (as per the docs, the second argument has to be the function that represents the delayed call, not the Job instance). How can I re-queue the job's function with all of its args and kwargs?


Solution

  • Ahh, the following statement does the trick:

    sch.enqueue_in(dt.timedelta(seconds=60), job.func, *job.args, **job.kwargs)

    The question is still open; let me know if anyone has a better approach to this.
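
    Putting it together, a minimal sketch of the whole retry handler wired into an RQ worker could look like the code below. The queue name, the worker wiring, and the return values that drive RQ's exception-handler chain are assumptions for illustration, not part of the original answer; MyThirdPartyAPIException is the exception raised by fetch_resource above and is assumed importable.

    import datetime as dt

    from redis import Redis
    from rq import Queue, Worker
    from rq_scheduler import Scheduler

    redis_conn = Redis()


    def retry_failed_job(job, exc_type, exc_value, traceback):
        # Only retry jobs that failed because the third-party API rate limit was hit.
        if isinstance(exc_value, MyThirdPartyAPIException) and exc_value.status_code == 429:
            sch = Scheduler(connection=redis_conn)
            # Re-schedule the original function with its original args/kwargs after 60 seconds.
            sch.enqueue_in(dt.timedelta(seconds=60), job.func, *job.args, **job.kwargs)
            # Returning False stops RQ's exception-handler chain, so the job is not
            # also handled by the default failure handler.
            return False
        # Any other exception falls through to RQ's default handling.
        return True


    # Register the handler when starting the worker.
    worker = Worker([Queue('default', connection=redis_conn)],
                    connection=redis_conn,
                    exception_handlers=[retry_failed_job])
    worker.work()

    Note that rq-scheduler keeps delayed jobs in its own sorted set, so a separate rqscheduler process must be running alongside the worker for the re-queued job to be moved back onto the queue once the 60 seconds elapse.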