ruby-on-rails, postgresql, deployment, cluster-computing, capistrano

Prevent jobs on the cluster from running on production code during deployment


I have a script that runs for a few minutes as a job on the cluster in the production environment. Between 0 and 100 such jobs, one script per job, run at the same time on the cluster. Usually there are no jobs running, or a burst of about 4-8 of them.

I want to prevent such jobs from running when I deploy a new version of the code into production.

How do I do that to optimize maintainability?

My initial idea was this:

  1. Use a semaphore file or a lock file that is created at the beginning of deployment and removed after the code has been deployed. A deploy runs for 0.5 - 10 min, depending on the complexity of the current deploy tasks.
  2. This lock file is also deleted automatically by a separate cron job if the deploy fails to remove it — that is, the cron job deletes the file if it is older than, for example, 30 minutes. If the deploy is rudely killed, the file should not hang around forever blocking the jobs.
  3. The production code checks for this lock file and waits until it is gone, so the jobs wait no more than 30 min.
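The job-side check in steps 2-3 could be sketched like this in Ruby (the lock-file path, helper names, and polling interval are my assumptions; the 30-minute staleness rule mirrors the cron cleanup):

```ruby
# Sketch of the job-side guard (hypothetical names throughout).
LOCK_FILE    = '/var/tmp/deploy.lock'  # assumed path; created/removed by the deploy
LOCK_MAX_AGE = 30 * 60                 # seconds; mirrors the 30-minute cron cleanup

# A deploy is "in progress" if the lock file exists and is younger than
# 30 minutes; a stale file (deploy killed, cron not yet run) is ignored.
def deploy_in_progress?(lock_file = LOCK_FILE, max_age = LOCK_MAX_AGE)
  File.exist?(lock_file) && (Time.now - File.mtime(lock_file)) < max_age
end

# Poll until the lock is gone, then let the job proceed.
def wait_for_deploy(lock_file = LOCK_FILE, poll: 10)
  sleep(poll) while deploy_in_progress?(lock_file)
end
```

Note that a check like this is inherently racy (the deploy can create the file just after a job checked it), which is exactly the concern raised below.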

I am concerned about possible race conditions and am considering a database-based solution instead; in my application's case, that would be PostgreSQL. A database-based solution may be more complex to implement and maintain, but may be less prone to race conditions.

Perhaps there is a standard mechanism to achieve this in Capistrano, which is used to deploy this code?
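For the lock-file variant, Capistrano's hook system at least makes creating and removing the file easy to manage. A sketch for the Capistrano 3 DSL (the task names and the shared-path location are my assumptions):

```ruby
# config/deploy.rb — hypothetical task names; assumes Capistrano 3.
namespace :deploy do
  desc 'Create the lock file the cluster jobs check before running'
  task :lock_jobs do
    on roles(:app) do
      execute :touch, shared_path.join('deploy.lock')
    end
  end

  desc 'Remove the lock file once the new code is live'
  task :unlock_jobs do
    on roles(:app) do
      execute :rm, '-f', shared_path.join('deploy.lock')
    end
  end
end

before 'deploy:starting', 'deploy:lock_jobs'
after  'deploy:finished', 'deploy:unlock_jobs'
```

If the deploy is killed, `deploy:finished` never fires, which is why the separate cron cleanup (or the advisory-lock approach below, where the lock dies with the session) is still needed.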

FAQs:

Is there a particular reason these jobs shouldn't run during deployment?

I had cases when multiple jobs ran mid-deployment and failed because of that. Finding and rerunning such failed jobs is time-consuming. Delaying them during deployment carries only a small and rare performance hit, and is by far the most acceptable solution. For our system, maintainability is priority number one.


Solution

  • Working with advisory locks at the simplest level, using psql.

    Session 1

    select pg_advisory_lock(3752667);
    

    Contents of advisory_lock_test.sql file:

    select pg_advisory_lock(3752667);
    select "VendorID" from nyc_taxi_pl limit 10;
    

    Then session 2:

    psql -d test -U postgres -p 5452 -f advisory_lock_test.sql 
    Null display is "NULL".
    

    Then in session 1:

    select pg_advisory_unlock(3752667);
    

    Back to session 2:

    Null display is "NULL".
     pg_advisory_lock 
    ------------------
     
    (1 row)
    
     VendorID 
    ----------
            1
            2
            2
            2
            2
            2
            1
            1
            2
            2
    (10 rows)
    

    Note:

    The above uses session-level locks. Transaction-level locks are also available using pg_advisory_xact_lock.

    Basically, you create a lock in a session with pg_advisory_lock(3752667), where the key can be one 64-bit integer or two 32-bit integers. These could come from values that you fetch from a table, so that a number is scoped to a particular action, e.g. select pg_advisory_lock((select lock_number from a_lock where action = 'deploy'));. Then, in the second or other sessions, you try to acquire a lock on the same number. If the number is in use — not yet unlocked, or the original session has not exited — the other sessions will wait until the original session releases the lock. At that point the rest of their commands will run.

    In your case, create a number, possibly stored in a table, that is associated with deploying. When you run a deployment, lock on that number before you make the changes, then unlock at the end of the deployment. If the deployment fails and the session ends, the lock is also released automatically. The other scripts would likewise need to start by attempting to lock on that number; if it is in use, they will wait until it is released, then run the rest of their commands and unlock. How manageable this is depends on the number of scripts you are dealing with and on getting people to stick to the process.
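The per-action key does not have to live in a table; any stable mapping from an action name to an integer works. A minimal Ruby sketch (the helper name and the CRC-based key derivation are my own; the commented calls assume the pg gem and an open PG::Connection):

```ruby
require 'zlib'

# Derive a stable advisory-lock key from an action name, so every script
# guarding the same action agrees on the number without a lookup table.
# CRC32 masked to 31 bits stays within PostgreSQL's signed integer key forms.
def advisory_lock_key(action)
  Zlib.crc32(action) & 0x7fffffff
end

# Hypothetical job-side usage with the pg gem (conn is a PG::Connection):
#   key = advisory_lock_key('deploy')
#   conn.exec_params('SELECT pg_advisory_lock($1)', [key])   # blocks while deploy holds it
#   run_job_work
#   conn.exec_params('SELECT pg_advisory_unlock($1)', [key])
```

The deploy process would take the same key before changing code and release it (or simply let its session end) when done, so waiting jobs resume automatically.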