node.js, express, node-cluster

Is it good practice to start an HTTP server on one port shared by cluster forks, for 1-3k requests per second?


Is this a good way to build a high-load server in Node.js? I need to run a high-load HTTP server (1-3k requests per second) that handles POST requests. As the backend I chose Node.js with Express. I created one shared HTTP server for all forked processes. Local testing gives me about 3.75 seconds per 3000 requests.

    const cluster = require('cluster');
    const port = 3000;
    var express = require('express');
    var cors = require('cors');
    var queryController = require('./controllers/queryController');
    var app = express();
    app.set('view engine', 'ejs');
    app.use(cors());

    if (cluster.isMaster) {
        const cpuCount = require('os').cpus().length;
        // The scheduling policy only needs to be set once, before forking.
        cluster.schedulingPolicy = cluster.SCHED_NONE;
        // Fork one worker per CPU core.
        for (let i = 0; i < cpuCount; i++) {
            cluster.fork();
        }
        cluster.on('fork', (worker) => {
            console.log(`Worker #${worker.id} is up!`);
        });
        cluster.on('listening', (worker, address) => {
            worker.on('message', (msg) => {
                // ...
            });
        });
        cluster.on('disconnect', (worker) => {
            console.log(`The worker #${worker.id} has disconnected`);
        });
        cluster.on('exit', (worker) => {
            console.log(`Worker ${worker.id} is dead`);
            // Replace dead workers so the pool stays at full size.
            cluster.fork();
        });
    } else {
        // All forks listen on port 3000, and each uses its own controller
        // instance to handle requests.
        queryController(app, cors);
        app.listen(port, function () {
            // console.log("Listening on port 3000!");
        });
        process.on('uncaughtException', (err) => {
            console.error(err.message);
            console.error(err.stack);
            process.exit(1);
        });
    }

Solution

  • When you exceed the scale of a single Node.js process, it is good practice to scale on one server using local clustering (all workers sharing the same port), provided your Node.js CPU processing really is the bottleneck (and not some shared resource, such as your database server, which should itself be clustered or scaled). That is the recommended scheme using the cluster module built into Node.js.

    When you exceed that scale, it is good practice to cluster across multiple servers, using some sort of load balancer/proxy to spread the load. When you exceed that scale in turn, you can scale geographically, so that requests from different locales go to data centers closer to them on the network (US, Europe, Africa, Asia, etc.).

    There may be some scenarios (it all depends upon where the bottlenecks are) where using a high-performance load balancer to spread the load to separate server processes (on the same computer) listening on separate ports is better than having all of the processes share one port. This can ultimately only be settled by testing.