Tags: node.js, multithreading, concurrency, pm2, node-cluster

Node cluster with pm2 queues every other request


I'm using pm2 to manage concurrent requests to my API. So far so good; I've managed to make it work.

My API has only one route. Each request takes between 1 and 2 minutes to resolve completely and send back the response. As soon as I make my first request, I can see in the pm2 logs that it has been accepted, but if I make a second request to the same route, it gets queued and is only processed after the first completes. Only if I make a third request to the same route while the second is still queued does another worker pick it up; the second stays in the queue until the first resolves. I hope I've made myself clear:

The first request is accepted promptly by a worker, the second gets queued, the third is also promptly accepted by another worker, the fourth gets queued, the fifth is accepted, the sixth is queued, and so on.

I have 24 available workers.
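For what it's worth, the "accepted then queued" timing can be reproduced in miniature: when a handler runs a long synchronous call, any work already handed to that worker waits until the call returns. A minimal sketch, where `busyWork` is a hypothetical stand-in for a synchronous `runner.optimizer` call:

```javascript
// Hypothetical stand-in for a long synchronous, CPU-bound call.
function busyWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // nothing else on this worker runs meanwhile
}

const order = [];
// Simulates a second request already queued on this worker's event loop.
setTimeout(() => order.push('second request handled'), 0);
busyWork(200);                      // the "first request" hogs the event loop
order.push('first request done');   // only now can the queued work run
```

Even though the timer was due immediately, it only fires after the synchronous call finishes, so `order` ends up as `['first request done', 'second request handled']`.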

Here is my very simple server:

const express = require('express');
const runner = require('./Rrunner2');
const moment = require('moment');

const app = express();
// express.json() already parses JSON bodies, so body-parser is redundant;
// the 'extended' option only applies to urlencoded parsing, not json
app.use(express.json({limit: '50mb'}));


app.post('/optimize', (req, res) => {
    try{
        
        const req_start = moment().format('DD/MM/YYYY h:mm a');
        console.log('Request received at ' + req_start)
        console.log(`Worker process ID - ${process.pid} has accepted the request.`);
    
        const data = req.body;
        
        const opt_start = moment().format('DD/MM/YYYY h:mm a')
        console.log('Optimization started at ' + opt_start)
        let result = runner.optimizer(data);

        const opt_end = moment().format('DD/MM/YYYY h:mm a')
        console.log('Optimization ended at ' + opt_end)
        
        const res_send = moment().format('DD/MM/YYYY h:mm a');
        console.log('Response sent at ' + res_send)
        return res.send(result)

    }catch(err){
        console.error(err)
        return res.status(500).send('Server error.')
    }


});

const PORT = 3000;
app.listen(PORT, () => console.log(`Server listening on port ${PORT}.`))

My pm2 ecosystem file is:

module.exports = {
  apps : [{
    name: "Asset Optimizer",
    script: "server.js",
    watch: true,
    ignore_watch: ["optimizer", "outputData", "Rplots.pdf", "nodeSender.js", "errorLog"],
    instances: "max",
    autorestart: true,
    max_memory_restart: "1G",
    exec_mode: "cluster",
    watch_options:{
      "followSymlinks": false
      }
    }
  ]}

I start the server using pm2 start ecosystem.config.js

Everything works just fine, but this queue issue is driving me crazy. I've tried many, many dirty approaches, including splitting routes and splitting servers, with no success whatsoever.

Even if you don't know the answer, please give me some ideas on how to overcome this problem. Thank you very much.

UPDATE

Okay, I've managed to make this work with the native cluster module by setting:

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

cluster.schedulingPolicy = cluster.SCHED_RR;

But once I try to make pm2 start the server, it no longer works. Is it possible to make pm2 accept the round-robin approach?

P.S.: I'm using Windows, and I found in the Node docs that it is the only platform where this is not the default.


Solution

  • The only viable solution to this issue was implementing nginx as a reverse proxy and load balancer.

    I used nginx 1.18.0, and this is the configuration file that made it work. If anyone comes across this issue, nginx + pm2 is the way to go. Happy to clarify further if anyone faces this; it took me a lot of work.

    worker_processes  5;
    events {
        worker_connections  1024;
    }
    
    
    http {
        include       mime.types;
        default_type  application/octet-stream;
    
        upstream expressapi {
            least_conn;
            server localhost:3000;
            server localhost:3001;
            server localhost:3002;
            server localhost:3003;
            server localhost:3004;
        }
    
    
        sendfile        on;
    
        keepalive_timeout  800;
        fastcgi_read_timeout 800;
        proxy_read_timeout 800;
        
    
        server {
            listen 8000;
    
            server_name optimizer;
    
            location / {
                proxy_pass http://expressapi/;
            }
            
    
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            }
    
        }
    }