I'm using a Bitbucket pipeline to deploy my Node application to an EC2 instance running Linux. I created a script to start my application:
#!/bin/bash
# Source nvm and load the Node.js environment (installed under ec2-user) for the root user
export NVM_DIR="/home/ec2-user/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
# Navigate to the API directory
cd /api
# Start the Node server
npm start
I'm calling that script from my AppSpec.yml as follows:
hooks:
  AfterInstall:
    - location: deploy/nodestart.sh
      timeout: 30
      runas: root
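For context, that hooks block sits inside the full appspec.yml; a minimal sketch of the surrounding file (the files mapping shown here is illustrative and assumes the bundle deploys to /api):
version: 0.0
os: linux
files:
  - source: /
    destination: /api
hooks:
  # ...the AfterInstall block shown above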
The pipeline execution fails. Via the AWS Console I can see that the application actually started, but npm start is a blocking call: the script never exits, so it exceeds the 30-second timeout, is terminated, and the deployment fails.
How do I start Node asynchronously from the deployment, so that the script returns and the deployment completes cleanly? Thanks for your advice!
Ultimately, I set this up to run as a systemd service, with an lcapi.service definition file in /etc/systemd/system.
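For reference, here is a minimal sketch of what that lcapi.service file might contain; the description, Node path, and restart policy are assumptions to adapt to your install (nvm puts node and npm under a versioned directory, so the exact paths depend on your Node version):
[Unit]
Description=LC API Node server
After=network.target

[Service]
# Run as ec2-user, whose home directory holds the nvm install
User=ec2-user
WorkingDirectory=/api
# The versioned Node directory below is a placeholder; point it at your actual nvm install
Environment=PATH=/home/ec2-user/.nvm/versions/node/v18.17.0/bin:/usr/local/bin:/usr/bin:/bin
ExecStart=/home/ec2-user/.nvm/versions/node/v18.17.0/bin/npm start
Restart=on-failure

[Install]
WantedBy=multi-user.target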
I copy this file in as part of the pipeline deployment. The following commands enable the service, reload the service definitions so the OS picks up the new unit, and then start the service; a revised nodestart.sh that ties these steps together appears after the commands.
systemctl enable /etc/systemd/system/lcapi.service
systemctl daemon-reload
systemctl start lcapi.service
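With the service in place, the deployment hook exits immediately, because systemctl start returns as soon as the unit is launched rather than blocking on the Node process. Here is a sketch of how nodestart.sh could look after this change (the source path deploy/lcapi.service is an assumption about where the unit file lives in the deployment bundle):
#!/bin/bash
# Install the unit file shipped with the deployment bundle (assumed location)
cp /api/deploy/lcapi.service /etc/systemd/system/lcapi.service
# Register the unit and reload systemd so it sees the new definition
systemctl enable /etc/systemd/system/lcapi.service
systemctl daemon-reload
# Start the Node server; this returns immediately, so the hook no longer times out
systemctl start lcapi.service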
To see the status of the running service:
systemctl status lcapi.service
For more information on setting up Linux services, see: https://natancabral.medium.com/run-node-js-service-with-systemd-on-linux-42cfdf0ad7b2