I want to run a script that lives on a separate server from within a GitLab CI job, have the job print the output of that script, and depend on the script's result.
I'm using sshpass to get around inputting a password like this:
- sshpass -p "password" ssh -o "StrictHostKeyChecking=no" user@SERVER 'command_to_run'
and I've tried redirecting the output just to see that the command is actually running:
- sshpass -p "password" ssh -o "StrictHostKeyChecking=no" user@SERVER 'command_to_run' > command_log.txt
- cat command_log.txt
but regardless, all I get in the pipeline logs after it runs that line is:
Warning: Permanently added 'SERVER' (ECDSA) to the list of known hosts.
and it isn't even waiting for command_to_run to complete before moving on.
Is there any way to get the command output logs and depend on the remote command_to_run within a pipeline job?
Would appreciate any advice. Thanks!
Is there any way to get the command output logs
The output of the command should be shown in the GitLab job log. Check whether the script uses some form of redirection, e.g. >, which would send the output to a file instead of the log.
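If you want to keep a log file and still see the output in the pipeline log, one option is to pipe through tee instead of redirecting with > (reusing the placeholders from the question):

    - sshpass -p "password" ssh -o "StrictHostKeyChecking=no" user@SERVER 'command_to_run' | tee command_log.txt

Note that the exit status of a pipeline is that of its last command (tee here), so run set -o pipefail beforehand if you also need the ssh exit code to propagate.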
depend on the remote command_to_run within a pipeline job?
Assuming nothing goes wrong with ssh itself, its exit code is the exit code of the last command executed on the remote host. (If something does go wrong, its exit code is 255.)
So you could check $?, which contains the exit code of the last command, and branch on it to implement whatever logic you need. One caveat: GitLab fails the job as soon as a script line exits non-zero, so to inspect the code yourself you have to capture it on the same line. E.g.
- sshpass -p "password" ssh -o "StrictHostKeyChecking=no" user@SERVER 'command_to_run' || remote_status=$?
- |
  if [ "${remote_status:-0}" -eq 0 ]; then
    echo "command_to_run succeeded"   # ...All well case...
  else
    echo "command_to_run failed with exit code ${remote_status}"
    exit "${remote_status}"
  fi
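Put together, a job could look like the sketch below. The job name run_remote and the masked CI/CD variable SSH_PASSWORD are assumptions (storing the password in a variable rather than in the file is advisable in any case):

    run_remote:
      script:
        - sshpass -p "$SSH_PASSWORD" ssh -o "StrictHostKeyChecking=no" user@SERVER 'command_to_run' || remote_status=$?
        - |
          # fail the job explicitly if the remote command failed
          if [ "${remote_status:-0}" -ne 0 ]; then
            echo "command_to_run failed with exit code ${remote_status}"
            exit "${remote_status}"
          fi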
Be aware that the exit code of a script is the exit code of the last command executed in that script.
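For example, a remote script like this hypothetical one exits with 0 even though an earlier command failed, so the job would pass:

    #!/bin/sh
    false               # fails with exit code 1...
    echo "still here"   # ...but the script exits with this command's code: 0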
If you want the job to fail when even a single error is encountered in the remote command, add set -e to the start of the script, which makes the script exit at the first command error.
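With set -e the same hypothetical script stops at the first failing command and exits non-zero, which makes ssh return non-zero and the pipeline job fail:

    #!/bin/sh
    set -e              # exit immediately if any command fails
    false               # the script exits here with code 1
    echo "never reached"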