Tags: python, glob, snakemake

Define input files for Snakemake using glob


I am building a Snakemake pipeline for some bioinformatics analyses, and I'm a beginner with the tool. The end users will mainly be biologists with little to no IT training, so I'm trying to make it user-friendly, in particular requiring as little information as possible in the config file (a previous bioinformatician at the institute had built a more robust pipeline, but it required a lot of information in the config file and fell into disuse).

One rule I would like to implement is to autodetect which .fastq (raw data) files are present in their dedicated directory, align them all, and run some QC steps. In particular, deepTools has a plotFingerprint tool that compares the distribution of reads in a control data file to the distribution in the treatment data files. For this, I would also like to autodetect which batches of data files go together.

My file architecture is set up like so: DATA/<FILE TYPE>/<EXP NAME>/<data files>, so for example DATA/FASTQ/CTCF_H3K9ac/ contains:

CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
CTCF_T7_neg_2.fq.gz
CTCF_T7_neg_3.fq.gz
CTCF_T7_pos_2.fq.gz
CTCF_T7_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz
H3K9ac_T7_neg_2.fq.gz
H3K9ac_T7_neg_3.fq.gz
H3K9ac_T7_pos_2.fq.gz
H3K9ac_T7_pos_3.fq.gz
Input_T1_pos.fq.gz
Input_T7_neg.fq.gz
Input_T7_pos.fq.gz

For those not familiar with ChIP-seq, each Input file is a control data file for normalisation, and CTCF and H3K9ac are experimental data to be normalised. So one batch of files I would like to process and then send to plotFingerprint would be

Input_T1_pos.fq.gz
CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz

With that in mind, I would need to give my fingerprint_bam Snakemake rule the paths to the aligned versions of those files, i.e.

DATA/BAM/CTCF_H3K9ac/Input_T1_pos.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_3.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_3.bam

(I would also need each of those files indexed, so all of those again with the .bai suffix for the Snakemake input, but that's trivial once I've managed to get all the .bam paths. The Snakemake rules I have to get up to that point all work; I've tested them independently.)

There is also a special case where an experiment could be run using paired-end sequencing, so the FASTQ dir would contain exp_fw.fq.gz and exp_rv.fq.gz and would need to be mapped to exp_pe.bam, but that doesn't seem like a massive exception to handle.

Originally I had tried using list comprehensions to create the list of input files, using this:

import glob
import re

def exps_from_inp(ifile):  # not needed?
    # Everything in the same directory that shares the conditions found in
    # the Input file's name belongs to the same batch.
    path, fname = ifile.split("Input")
    conds, ftype = fname.split(".", 1)
    return glob.glob(path + "*" + conds + "*." + ftype)


def bam_name_from_fq_name(fqpath, suffix=""):
    # Skip files that were already filtered and could sit in the same dir.
    if re.search("filtered", fqpath):
        return None
    else:
        return fqpath.replace("FASTQ", "BAM").replace(".fq.gz", ".bam") + suffix

rule fingerprint_bam:
    input:
        bam=[bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")],
        bai=[bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")]
    ...
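
Traced by hand against the CTCF_H3K9ac listing above, exps_from_inp builds its glob pattern from everything around the "Input" marker, which is exactly what groups a batch:

exps_from_inp("DATA/FASTQ/CTCF_H3K9ac/Input_T1_pos.fq.gz")
# globs "DATA/FASTQ/CTCF_H3K9ac/*_T1_pos*.fq.gz", matching the Input,
# CTCF and H3K9ac files of the T1_pos batch shown earlier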

Those list comprehensions generated the correct list of files when I tried them in Python, using the values that expdir and expconds take when I dry-run the pipeline. However, during that dry run, the {input.bam} placeholder in the shell command never gets assigned a value.

I went digging in the docs and found this page, which implies that Snakemake does not handle list comprehensions and that the expand function is its replacement. In my case the experiment numbers (the _2 and _3 in the file names) are quite variable: they're sometimes just random numbers, some experiments have 2 reps and some have 3, and so on. All of this means that using expand without a lot of additional work would be tricky (finding the experiment names, by contrast, would be fairly easy).
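
As an aside, Snakemake also ships a glob_wildcards helper that pattern-matches existing files and returns the wildcard values it found; a minimal sketch, with EXPDIRS and SAMPLES being names invented for this example:

EXPDIRS, SAMPLES = glob_wildcards("DATA/FASTQ/{expdir}/{sample}.fq.gz")
# e.g. SAMPLES would include "CTCF_T1_pos_2", "Input_T1_pos", ...

It doesn't solve the batch grouping by itself, but it can replace hand-rolled glob calls when collecting targets.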

I then tried wrapping the list comprehensions in functions and using those in the input of my rule, but that failed, as did wrapping those functions in one big one and using unpack (although I could be using that wrong; I'm not entirely sure I understood how unpack works).

def get_fingerprint_bam_inputfiles(wildcards):
    return {"bams": get_fingerprint_bam_bams(wildcards),
            "bais": get_fingerprint_bam_bais(wildcards)}

def get_fingerprint_bam_bams(wildcards):
    return [bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]

def get_fingerprint_bam_bais(wildcards):
    return [bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]

rule fingerprint_bam:
    input:
        bams=get_fingerprint_bam_bams,
        bais=get_fingerprint_bam_bais
    ...

rule fingerprint_bam_unpack:
    input:
        unpack(get_fingerprint_bam_inputfiles)
    ...
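
(For reference, this is structurally the right way to use unpack: the wrapped function must return a dict, and unpack splats that dict into named inputs so the rule can refer to input.bams and input.bais. As it turns out below, unpack itself was never the problem.)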

So now I'm feeling pretty stuck in this approach. How can I autodetect these experiment batches and give the correct bam file paths to my fingerprint_bam rule? I'm not even sure which approach I should go for.

EDIT - fixed the return statements in the input functions. However, the shell command still receives an empty list of input files.

rule fingerprint_bam:
    input:
        bams=get_fingerprint_bam_bams,
        bais=get_fingerprint_bam_bais
    output:
        "FIGURES/QC/{expdir}/{expconds}_fingerprint.svg"
    log:
        "snakemake_logs/fingerprint_bam/{expdir}_{expconds}.log"
    shell:
        "plotFingerprint --bamfiles {input.bams} -o {output} --ignoreDuplicates --plotTitle 'Fingerprint of ChIP-seq data' 2>{log}"

Gives this output with snakemake -np:

plotFingerprint --bamfiles  -o FIGURES/QC/CTCF_H3K9ac/T7_pos_fingerprint.svg --ignoreDuplicates --plotTitle 'Fingerprint of ChIP-seq data' 2>snakemake_logs/fingerprint_bam/CTCF_H3K9ac_T7_pos.log

So my wildcards get the correct values, but still no bam files are identified.

EDIT 2 - I got it to work! But I still don't know why my original solution didn't work.

I discarded Snakemake's "wildcards in strings" substitution, so my input functions now look like this:

def get_fingerprint_bam_bams(wildcards):
    # return [bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]
    return [bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/"+wildcards.expdir+"/Input_"+wildcards.expconds+".fq.gz")]

def get_fingerprint_bam_bais(wildcards):
    # return [bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]
    return [bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/"+wildcards.expdir+"/Input_"+wildcards.expconds+".fq.gz")]

which properly detects files and propagates them to the relevant rules.
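
For what it's worth, an f-string builds the same path a bit more readably; a sketch of the same function, untested:

def get_fingerprint_bam_bams(wildcards):
    # identical to the working version above, but with an f-string making
    # the interpolation explicit to Python itself
    return [bam_name_from_fq_name(f)
            for f in exps_from_inp(f"DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]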

So the question is now... why did the in-string wildcard integration not work? Visual Studio Code even highlighted them for me :(


Solution

  • It's me again. You're running into a basic misunderstanding of the order in which Snakemake resolves values in the Snakefile. It's an understandable one, as it's not really obvious, but to write the kind of rules you're trying to create you'll need a firm grip on how this works.

    To try and put it as succinctly as possible, Snakemake always runs in three phases:

    1. Parse the Snakefile and evaluate in-line code to get a collection of rules
    2. Determine the target and construct a DAG of jobs
    3. Execute the DAG (skipped if you use --dry-run)

    In your first attempt, the bam_name_from_fq_name function ends up getting called in phase 1, and it can never work because {expconds} has no value at that point - Snakemake doesn't think about wildcard values or turning rules into jobs until phase 2.

    In your second attempt, you now have an input function, which is the correct approach. When an input to a rule is a function reference (i.e. you put input: get_fingerprint_bam_bams, not input: get_fingerprint_bam_bams()) then Snakemake defers calling that function until phase 2. Basically, the input function is called in place of Snakemake's usual logic of simply substituting wildcard values into the input strings, and it runs once for every job (rather than just once per rule).
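
    The distinction in miniature (a made-up function and rules, purely illustrative):

    def my_inputs(wildcards):
        return ["DATA/" + wildcards.sample + ".txt"]

    rule deferred:
        # function reference: Snakemake calls my_inputs(wildcards) in phase 2
        input: my_inputs

    rule immediate:
        # function call: runs during phase 1 and raises TypeError, since
        # there is no wildcards object to pass yet
        input: my_inputs()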

    Your second attempt is definitely on the right track. I see two of your functions have no return statements. Is the fix as simple as that? I can't see any other obvious problems. What you read about needing to use expand() is a red herring; you can use any Python code you like in your input functions.

    A more general comment - I've written a bunch of pipelines where the inputs and outputs get complex and there's a rat's nest of functions to resolve them (e.g. https://github.com/EdinburghGenomics/hesiod/blob/master/Snakefile.main). In those cases I've taken to scanning for the files and resolving all of that up front with a stand-alone Python script that saves the results into a YAML (or JSON) file; the workflow then loads this file to see what it needs to do. This means a bit more code and an extra script to run before Snakemake, but it has worked well for me.
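
    A minimal sketch of that two-step pattern, with invented file and key names (needs PyYAML):

    # scan_samples.py - hypothetical pre-scan, run once before Snakemake
    import glob
    import os
    import yaml

    batches = {}
    for inp in glob.glob("DATA/FASTQ/*/Input_*.fq.gz"):
        path, rest = inp.split("Input")
        conds = rest.split(".", 1)[0]                 # e.g. "_T1_pos"
        expdir = os.path.basename(os.path.dirname(inp))
        batches[expdir + conds] = sorted(glob.glob(path + "*" + conds + "*.fq.gz"))

    with open("sample_manifest.yaml", "w") as fh:
        yaml.safe_dump(batches, fh)

    The Snakefile then just loads sample_manifest.yaml (yaml.safe_load) and builds its targets from that dict, keeping all the globbing logic out of the workflow itself.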