Tags: nfs, snakemake, qsub, torque

Snakemake --use-conda with --cluster and NFS4 storage


I am using snakemake in cluster mode to submit a simple one-rule workflow to the HPCC, which runs Torque with several compute nodes. The NFSv4 storage is mounted at /data, and there is a symlink /PROJECT_DIR -> /data/PROJECT_DIR/
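
(To check that the symlink resolves the same way on the compute nodes as on the login node, something along these lines can be used; the interactive-session resources below are only an example:)

    # On the login node: the symlink should resolve into the NFS mount
    readlink -f /PROJECT_DIR        # expected: /data/PROJECT_DIR

    # Repeat the same check from an interactive session on a compute node
    qsub -I -l nodes=1:ppn=1 -l walltime=00:10:00
    readlink -f /PROJECT_DIR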

I submit the job using:

snakemake --verbose --use-conda --conda-prefix /data/software/miniconda3-ngs/envs/snakemake \
    --rerun-incomplete --printshellcmds --latency-wait 60 \
    --configfile /PROJECT_DIR/config.yaml -s '/data/WORKFLOW_DIR/Snakefile' --jobs 100 \
    --cluster-config '/PROJECT_DIR/cluster.json' \
    --cluster 'qsub -j oe -l mem={cluster.mem} -l walltime={cluster.time} -l nodes={cluster.nodes}:ppn={cluster.ppn}'
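
(The {cluster.mem}/{cluster.time}/{cluster.nodes}/{cluster.ppn} placeholders are filled from cluster.json. A minimal file consistent with the values that end up in the generated jobscript below would look like this; the __default__ key and exact contents are only a sketch of my setup:)

    {
        "__default__": {
            "nodes": 1,
            "ppn": 4,
            "time": "01:00:00",
            "mem": "32gb"
        }
    }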

The job fails with:

Error in rule fastqc1:                                      
    jobid: 1                                          
    output: /PROJECT_DIR/OUTPUT_DIR/SAMPLE_fastqc.html                                    
    conda-env: /data/software/miniconda3-ngs/envs/snakemake/74019bbc                     
    shell: 
                                                                                
        fastqc -o /PROJECT_DIR/OUTPUT_DIR/ -t 4 -f fastq /PROJECT_DIR/INPUT/SAMPLE.fastq.gz 
    (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
    cluster_jobid: 211078.CLUSTER

    Error executing rule fastqc1 on cluster (jobid: 1, external: 211078.CLUSTER, jobscript:
    PROJECT_DIR/.snakemake/tmp.t5a2dpxe/snakejob.fastqc1.1.sh). For error details see the cluster
    log and the log files of the involved rule(s).
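
(The "cluster log" here is the Torque output file: with -j oe and no explicit -o, Torque usually writes the job's combined stdout/stderr to a file named after the jobscript plus the numeric job id, placed in the directory qsub was run from or in $HOME, depending on the server configuration. For the job above that would be roughly the following, with the name reconstructed from the log, so treat it as an example:)

    # Combined stdout/stderr of the failed cluster job (job id 211078);
    # the exact name and location depend on the Torque configuration
    cat snakejob.fastqc1.1.sh.o211078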

The jobscript submitted looks like this:

Jobscript: 
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc1", "local": false, "input": ["/PROJECT_DIR/INPUT_DIR/SAMPLE.fastq.gz"], "output": ["/PROJECT_DIR/OUTPUT_DIR/SAMPLE_fastqc.html"], "wildcards": {"sample": "SAMPLE", "read": "1"}, "params": {}, "log": [], "threads": 4, "resources": {}, "jobid": 1, "cluster": {"nodes": 1, "ppn": 4, "time": "01:00:00", "mem": "32gb"}}

cd /data/PROJECT_DIR && \
PATH='/data/software/miniconda3-ngs/envs/snakemake-5.32.2/bin':$PATH \
/data/software/miniconda3-ngs/envs/snakemake-5.32.2/bin/python3.8 \
-m snakemake /PROJECT_DIR/OUTPUT_DIR/SAMPLE_fastqc.html --snakefile /data/WORKFLOW_DIR/Snakefile \
--force -j --keep-target-files --keep-remote --max-inventory-time 0 \
--wait-for-files /data/PROJECT_DIR/.snakemake/tmp.t5a2dpxe /PROJECT_DIR/INPUT/SAMPLE.fastq.gz /data/software/miniconda3-ngs/envs/snakemake/74019bbc --latency-wait 60 \
--attempt 1 --force-use-threads --scheduler ilp \
--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ \
--configfiles /PROJECT_DIR/config.yaml -p --allowed-rules fastqc1 --nocolor --notemp --no-hooks --nolock \
--mode 2 --use-conda --conda-prefix /data/software/miniconda3-ngs/envs/snakemake \
&& touch /data/PROJECT_DIR/.snakemake/tmp.t5a2dpxe/1.jobfinished || \
(touch /data/PROJECT_DIR/.snakemake/tmp.t5a2dpxe/1.jobfailed; exit 1)

Oddly, when I run the same workflow locally on a single compute node from an interactive qsub shell, the problem does not occur. It only happens when the jobs are submitted to the cluster from the login node.
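
My best guess is that the interactive session inherits my login environment, where conda is already initialised, while batch jobs started by Torque begin in a clean non-login shell, so the conda activation that --use-conda performs has nothing to hook into. A quick way to compare is to submit a throwaway test job (nothing from the real workflow) and look at what it prints:

    # Print the PATH and tool locations as seen by a plain batch job,
    # then compare with the same commands in the interactive session
    echo 'echo "batch PATH: $PATH"; which conda python3' | qsub -j oe -l nodes=1:ppn=1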

snakemake versions tested:


Solution

  • Solved by providing a custom jobscript (--jobscript SCRIPT) that activates the conda environment on the compute node before the job command runs (the full submission command is shown after the script):

    #!/bin/bash
    # properties = {properties}
    set +u;
    source /data/software/miniconda3-ngs/etc/profile.d/conda.sh;
    conda activate snakemake-5.32.2
    set -u;
    {exec_job}
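
Snakemake fills in the {properties} and {exec_job} placeholders when it writes the per-job script, so the only real change is that the snakemake-5.32.2 environment is activated (with a bash rather than sh shebang) before the job command runs; the set +u / set -u guards are presumably there to keep conda's activation scripts from failing on unset variables. With the template saved on the shared storage, e.g. as /PROJECT_DIR/jobscript.sh (path is just an example), the submission becomes:

    snakemake --verbose --use-conda --conda-prefix /data/software/miniconda3-ngs/envs/snakemake \
        --jobscript /PROJECT_DIR/jobscript.sh \
        --rerun-incomplete --printshellcmds --latency-wait 60 \
        --configfile /PROJECT_DIR/config.yaml -s '/data/WORKFLOW_DIR/Snakefile' --jobs 100 \
        --cluster-config '/PROJECT_DIR/cluster.json' \
        --cluster 'qsub -j oe -l mem={cluster.mem} -l walltime={cluster.time} -l nodes={cluster.nodes}:ppn={cluster.ppn}'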