Tags: tomcat, jenkins, continuous-integration, jenkins-pipeline, cloudbees

Jenkins pipeline: wait for another job to pause


I have several testing projects (API tests, UI Selenium tests, etc.) that test the same application. For now, the application-deployment steps are duplicated in each Jenkinsfile.

The goal is to have a common app-deployment job that prepares the application on the server.

I cannot just finish the build after this, as Jenkins will kill all processes it created, including the started Tomcat. I tried the dontKillMe cookie (JENKINS_NODE_COOKIE) for Tomcat, but it caused several other issues related to restarting the app and DB connections. So instead I put an input step with an ID at the point where the app is ready:

node {
    properties([
            parameters([
                    string(name: 'REVISION',
                            defaultValue: '',
                    ),
                    string(name: 'TAG_NAME',
                            defaultValue: ''),
            ]),
    ])
    withEnv(buildEnvVariables()) {
        stage('Checkout') {
        }

        stage('Prepare file system') {
        }

        stage('Prepare database') {
        }

        stage('Builds WAR files') {
        }

        stage('Tomcat deploy') {
            // Delete previous WAR files from Tomcat
            // Copy generated files to tomcat
            sh "$TOMCAT_DIR/bin/startup.sh"
            waitForTokenFileOrFail()
        }
        input(id: 'IsBuilt', message: 'Application is ready...')
    }
}

Now, I want this job to be called from several other testing-specific Jenkinsfiles:

node {
    properties([
            parameters([
                    string(name: 'TESTS_SUITE',
                            defaultValue: '',
                    ),
                    string(name: 'OTHER_PARAM',
                            defaultValue: ''),
            ]),
    ])
    withEnv(buildEnvVariables()) {
        stage('Checkout') {
        }

        stage('Stop any running application builds') {
            def jenkinsQueue = Jenkins.instance.queue
            jenkinsQueue.items.findAll { it.task.name.startsWith(contextDeployBuildName) }.each {
                echo "Found pending $contextDeployBuildName job. Cancelling: ${it.getId()}"
                jenkinsQueue.doCancelItem(it.getId())
            }

            Jenkins.instance.getItemByFullName(contextDeployBuildName)
                    .getAllJobs().first().getBuilds()
                    ?.each { build ->
                        if (build.isBuilding()) {
                            try {
                                echo "Found running $contextDeployBuildName job. Stopping: ${build.number}"
                                httpRequest(
                                        httpMode: 'POST',
                                        authentication: 'credentialsID',
                                        url: "${JENKINS_URL}job/$contextDeployBuildName/${build.number}/stop")
                            } catch (any) {
                                println any.message
                            }
                        }
                    }
        }

        stage('Build the app') {
            build(wait: true, job: contextDeployBuildName, parameters: [
                    string(name: 'REVISION', value: env.BRANCH_NAME),
                    string(name: 'TAG_NAME', value: env.TAG_NAME)])

            // How to wait for a specific input/condition here???
        }

        stage('Run tests') {
        }

        stage('Report, cleanup') {
        }
    }
}

I'm using the Jenkins build step with wait: true, but it seems it can only wait for the whole downstream job to finish. Changing the deployment job's build status from inside that job is ignored (I believe because build listeners are not triggered):

currentBuild.rawBuild.@result = hudson.model.Result.SUCCESS

Question:

How can I properly notify the testing-specific job that the input step in the app-deployment job has been reached? Is there a standard way to solve this kind of 'deployment vs. testing' problem?

For now I have two ugly ways to check the status outside of the pipeline tooling:

  1. Wait for a specific file to appear in the Tomcat/application folders.
  2. When an input step with a given ID is reached, its page becomes accessible via that ID, so I can poll the input's page until it returns a 200 status instead of a 404 (a rough sketch of this is shown below).
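
A rough, untested sketch of option 2 as it might look in the testing-specific job follows. It assumes the HTTP Request plugin, a valid credentials ID, that the downstream build number is already known, and that a pending input with ID 'IsBuilt' is reachable under the build's input/IsBuilt/ URL; all of these are assumptions rather than verified behaviour:

def waitForDeploymentInput(String jobName, int buildNumber) {
    // Poll the deployment build's (assumed) input page until it answers with 200,
    // i.e. until the 'IsBuilt' input step has been reached.
    timeout(time: 15, unit: 'MINUTES') {
        waitUntil {
            def response = httpRequest(
                    url: "${env.JENKINS_URL}job/${jobName}/${buildNumber}/input/IsBuilt/",
                    authentication: 'credentialsID',      // assumed credentials ID
                    validResponseCodes: '100:599')        // treat 404 as a response, not a failure
            return response.status == 200
        }
    }
}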

Solution

  • A bit of history: some years ago there were only regular Jenkins jobs, and people used to treat them as building blocks and trigger one from the other. This wasn't very convenient; long story short, the pipeline was born.

    Pipelines are not really meant to trigger one another: if two of your jobs are so interdependent that one has to wait for and signal the other, it is usually better to treat them as parts of a single pipeline.

    Code duplication is an issue, but it can be dealt with if you look at all your steps as parts of one bigger pipeline. Something to the tune of this (copy-pasting from your code into a declarative pipeline):

    pipeline {
        agent any
        parameters {
            string(name: 'REVISION', defaultValue: '')
            string(name: 'TAG_NAME', defaultValue: '')
            choice(name: 'TEST_SUITE', choices: ['selenium', 'api', 'etc'])
        }
        stages {
            stage('Checkout') {
            }
    
            stage('Prepare file system') {
            }
    
            stage('Prepare database') {
            }
    
            stage('Builds WAR files') {
            }
    
            stage('Tomcat deploy') {
                agent { label "tomcat" }
                steps {
                    sh "$TOMCAT_DIR/bin/startup.sh"
                }
            }
    
            // start testing
            stage('Testing') {
                parallel {
                    stage('Selenium') {
                        when { equals expected: "selenium", actual: params.TEST_SUITE }
                        agent { label "selenium-slave" }
                        steps {
                            echo "Doing selenium tests"
                        }
                    }
                    stage('API') {
                        when { equals expected: "api", actual: params.TEST_SUITE }
                        agent { label "api-slave" }
                        steps {
                            echo "Doing api tests"
                        }
                    }
                }
            }
            stage('Cleanup') {
                agent { label "tomcat" }
                steps {
                    sh "$TOMCAT_DIR/bin/shutdown.sh"
                }
            }
        }
    }
    

    Edit: it might be a bit of a problem to make your 'start Tomcat' and 'stop Tomcat' steps land on exactly the same node, but a bit of parallelization and a post { } block can solve that, too.
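
    One way to do that (a sketch only, assuming a node labelled "tomcat" and the same TOMCAT_DIR environment variable as above) is to run startup, the tests and shutdown inside a single stage pinned to that node, with the shutdown in a stage-level post { always } block so Tomcat is stopped even when the tests fail:

    stage('Deploy and test on Tomcat') {
        agent { label "tomcat" }
        steps {
            sh "$TOMCAT_DIR/bin/startup.sh"
            // ... run the selected test suite against this Tomcat instance ...
        }
        post {
            always {
                // always stop Tomcat, even if the tests above failed
                sh "$TOMCAT_DIR/bin/shutdown.sh"
            }
        }
    }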