node.js · amazon-ec2 · slack · pm2 · bolts-framework

Bolt-js Slack app Jenkins PM2 deployment EADDRINUSE issue


I’m making a Slack app with Bolt for JavaScript (bolt-js), Node.js, and the Slack APIs.

I’ve set up a Jenkins CI/CD pipeline that deploys the Node.js app to AWS EC2 with PM2, but I’m running into issues in the PM2 reload step.

The EC2 instance has two cores, so running in PM2 cluster mode starts two instances.

The following is my ecosystem.config.js :

module.exports = {
  apps: [{
    name: 'project-name',
    cwd: './project-path',
    script: 'npm',
    args: 'start',
    instances: 0,
    exec_mode: 'cluster_mode'
  }]
};

When I run pm2 start ecosystem.config.js, it starts fine with no port-already-in-use error. But when I reload with pm2 reload <appname>, the following error shows:

Error: listen EADDRINUSE: address already in use :::3000
    at Server.setupListenHandle [as _listen2] (node:net:1740:16)
    at listenInCluster (node:net:1788:12)
    at Server.listen (node:net:1876:7)
    at project-path/node_modules/@slack/bolt/dist/receivers/HTTPReceiver.js:177:25
    at new Promise (<anonymous>)
    at HTTPReceiver.start (project-path/node_modules/@slack/bolt/dist/receivers/HTTPReceiver.js:143:16)
    at App.start (project-path/node_modules/@slack/bolt/dist/App.js:241:30)
    at project-path/app.js:320:13
    at Object.<anonymous> (project-path/app.js:322:3)
    at Module._compile (node:internal/modules/cjs/loader:1256:14) {
  code: 'EADDRINUSE',
  errno: -98,
  syscall: 'listen',
  address: '::',
  port: 3000
}

It says that port 3000 is already in use, and PM2 then retries the reload three more times on its own.

When the cluster switches from instance 0 to instance 1, the repeated reloads eventually succeed: the updated code gets deployed after several attempts, like the following:

[nodemon] app crashed - waiting for file changes before starting...

2023-09-13T15:40:12: PM2 log: Stopping app:project-name id:_old_0
2023-09-13T15:40:12: PM2 log: App name:project-name id:_old_0 disconnected
2023-09-13T15:40:12: PM2 log: App [project-name:_old_0] exited with code [0] via signal [SIGINT]
2023-09-13T15:40:12: PM2 log: pid=13051 msg=process killed
2023-09-13T15:40:12: PM2 log: App [project-name:1] starting in -cluster mode-
2023-09-13T15:40:12: PM2 log: App [project-name:1] online

However, when the cluster switches from instance 1 to instance 0, it just gets stuck on the crash message and the new instance never starts:

[nodemon] app crashed - waiting for file changes before starting...

2023-09-13T15:32:41: PM2 log: Stopping app:project-name id:_old_1
2023-09-13T15:32:41: PM2 log: App name:project-name id:_old_1 disconnected
2023-09-13T15:32:41: PM2 log: App [project-name :_old_1] exited with code [0] via signal [SIGINT]
2023-09-13T15:32:41: PM2 log: pid=12638 msg=process killed

Why is this happening?

Is there a possible workaround for this?


Solution

  • Solved this a while ago: the start script was wrong. With

        script: 'npm',
        args: 'start',

    PM2 launches npm (which, judging by the [nodemon] lines in the logs, ran nodemon) instead of the Node entry file itself. PM2’s cluster mode relies on starting the Node script directly so that Node’s cluster module can share the listening port between instances; with npm in between, each instance binds port 3000 on its own and they collide during reload. Pointing PM2 straight at the entry file with

        script: 'app.js',

    worked fine.
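For reference, the full corrected ecosystem.config.js would look roughly like this (same values as in the question; note that PM2’s documented value for exec_mode is 'cluster'):

```javascript
module.exports = {
  apps: [{
    name: 'project-name',
    cwd: './project-path',
    // Point PM2 at the entry file directly so cluster mode can
    // wrap the worker and share the port between instances.
    script: 'app.js',
    // 0 = one instance per available CPU core
    instances: 0,
    exec_mode: 'cluster'
  }]
};
```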