Tags: laravel, jobs, laravel-queue

How to ensure chained Laravel jobs run on the same server in a multi-worker setup?


In my Laravel project, I have 5–6 different queues. Each queue triggers the next one in a strict order, forming a processing chain like:

Job1 → Job2 → Job3 → Job4 → Job5

The flow is always triggered by an external request that starts with Job1. Each job performs a file operation (creating/modifying files), and these files are stored locally on the server where the job is executed.
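
For illustration, here's a simplified sketch of what each job does (the class names, path, and payload are made up):

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;
    use Illuminate\Support\Facades\Storage;

    class Job1 implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        public function handle(): void
        {
            // Writes to the local disk of whichever server runs this worker.
            // Job2 expects to read this exact file, so it has to run on the
            // same server.
            Storage::disk('local')->put('chain/step1.dat', 'intermediate data');

            // Trigger the next step in the chain.
            Job2::dispatch();
        }
    }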

The issue arises when I try to scale horizontally by adding more servers to handle the queues. Since Laravel queues are distributed, there's no guarantee that Job2 will be processed by the same server that handled Job1 — which breaks the flow because Job2 needs access to the local file created by Job1.

What I need: I want to ensure that once Job1 starts on a specific server, the entire job chain (Job2 to Job5) continues on the same server — even in a multi-server setup.

What I’ve tried: Attaching a "machine ID" to the job payload, but this doesn’t guarantee the job will be picked up by the same machine (a sketch of that attempt follows below).

Searching online for "sticky jobs" or "pinning jobs to the same worker/server" — but couldn’t find a clean or official solution.
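
For reference, the machine-ID attempt looked roughly like this (simplified; passing the hostname into the job's constructor is a paraphrase of the idea):

    // The hostname travels along inside the job payload, but it has no
    // influence on which server's worker pops the job off the shared queue.
    \App\Jobs\Job2::dispatch(gethostname());

Any worker listening on that queue can still grab the job, so the ID ends up being purely informational.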

My question: Has anyone dealt with a similar requirement before? Is there a proven way to pin job chains to the same server in Laravel when running a distributed queue system?


Solution

  • Here's my proof of concept for tackling this.

    Let's say you have 3 servers. These can be physical machines or virtual machines/containers. Each of them will have its own queue.

    Container A → php artisan queue:work --queue=queue1
    
    Container B → php artisan queue:work --queue=queue2
    
    Container C → php artisan queue:work --queue=queue3
    

    In your Laravel application, you can do something like this:

    Route::get('/', function () {
        $available_queues = [
            'queue1',
            'queue2',
            'queue3',
        ];
    
        $random_queue = $available_queues[array_rand($available_queues)];
    
        dump($random_queue); // the queue this chain will be dispatched to.
    
        \Illuminate\Support\Facades\Bus::chain([
            new \App\Jobs\FirstProcess(),
            new \App\Jobs\SecondProcess(),
            new \App\Jobs\ThirdProcess(),
        ])->onQueue($random_queue)->dispatch(); // dispatch the entire chain to the chosen queue.
    
        return gethostname();
    });
    

    If the value of $random_queue is queue1, then Container A's worker will pick up and process the entire chain.
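
    To make this visible, each job in the chain can log the host that executed it. Here's a minimal sketch (FirstProcess is the first job from the chain above; the log message itself is illustrative):

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;
    use Illuminate\Support\Facades\Log;

    class FirstProcess implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        public function handle(): void
        {
            // Logging the hostname makes it easy to confirm in the worker
            // output that every job in the chain ran on the same server.
            Log::info(static::class.' handled on '.gethostname());
        }
    }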

    Here's what the log will look like:

    [screenshot: docker log output]

    This shows that each container runs only its own queue and executes the chained jobs in the proper order.

    The drawback of this setup is, of course, that every worker must be started with a hard-coded --queue option in the queue:work command. I would still recommend using shared file storage (such as S3 or MinIO) to keep your servers easy to manage; a sketch of that alternative follows.
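
    Here's a minimal sketch of the shared-storage alternative, assuming an s3 disk is configured in config/filesystems.php (the path and payload are illustrative):

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;
    use Illuminate\Support\Facades\Storage;

    class FirstProcess implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        public function handle(): void
        {
            // The intermediate file lives on shared storage, so SecondProcess
            // can read it no matter which server's worker picks it up.
            Storage::disk('s3')->put('chain/step1.dat', 'intermediate data');
        }
    }

    With every intermediate file on shared storage, the chain no longer cares which server runs each job, and the queue pinning shown above becomes optional.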