Tags: python, string, pattern-matching, string-matching, dna-sequence

Search for a string allowing for one mismatch in any location of the string


I am working with DNA sequences of length 25 (see examples below). I have a list of 230,000 such sequences and need to look for each one in the entire genome of the parasite Toxoplasma gondii. I am not sure how large the genome is, but it is much longer than my 230,000 sequences.

I need to look for each of my sequences of 25 characters, for example, (AGCCTCCCATGATTGAACAGATCAT).

The genome is formatted as a continuous string, i.e. (CATGGGAGGCTTGCGGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTTGCGGAGTGCGGAGCCTGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTT....)

I don't care where or how many times it is found, only whether it is found or not.
This part is simple, I think --

genome.find("AGCCTCCCATGATTGAACAGATCAT")

But I also want to find a close match, defined as a mismatch at any location but only one location, and record that location in the sequence. I am not sure how to do this. The only thing I can think of is using a wildcard and performing the search with the wildcard in each position, i.e. searching 25 times.

For example,

AGCCTCCCATGATTGAACAGATCAT    
AGCCTCCCATGATAGAACAGATCAT

A close match with a mismatch at position 13.
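
For illustration, a rough sketch of that 25-searches idea using Python's re module (the short genome string below is made up just to show the mechanics, and the variable names are only placeholders); the loop index i is the wildcard position, so it also records where the mismatch is:

    import re

    query = "AGCCTCCCATGATTGAACAGATCAT"
    # made-up fragment containing the close match from the example above
    genome = "CATGGGAGG" + "AGCCTCCCATGATAGAACAGATCAT" + "GCGGAGCCTGAGG"

    for i in xrange(len(query)):
        # put a '.' at position i so any single character is accepted there
        pattern = query[:i] + "." + query[i+1:]
        m = re.search(pattern, genome)
        if m:
            print((i, m.start()))   # prints (13, 9): mismatch at 13, found at genome offset 9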

Speed is not a big issue because I am only doing it 3 times, though it would be nice if it were fast.

There are programs that do this -- find matches and partial matches -- but I am looking for a type of partial match that is not discoverable with these applications.

Here is a similar post for Perl, although they are only comparing sequences and not searching a continuous string:

Related post


Solution

  • Before you read on, have you looked at Biopython?

    It appears that you want to find approximate matches with one substitution error, and zero insertion/deletion errors i.e. a Hamming distance of 1.

    If you have a Hamming distance match function (see e.g. the link provided by Ignacio), you could use it like this to do a search for the first match:

    any(Hamming_distance(genome[x:x+25], sequence) == 1 for x in xrange(len(genome) - 24))
    

    but this would be rather slow, because (1) the Hamming distance function would keep on grinding after the 2nd substitution error, and (2) after failure it advances the cursor by one rather than skipping ahead based on what it saw (as a Boyer-Moore search does).

    You can overcome (1) with a function like this:

    def Hamming_check_0_or_1(genome, posn, sequence):
        errors = 0
        for i in xrange(25):
            if genome[posn+i] != sequence[i]:
                errors += 1
                if errors >= 2:
                    return errors
        return errors 
    

    Note: that's intentionally not Pythonic, it's C-ish, because you'd need to use C (perhaps via Cython) to get reasonable speed.
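
    As a rough sketch of how that check could be driven (scan_for_hits is just an illustrative name, not part of the original code), assuming the 25-character sequences above:

    def scan_for_hits(genome, sequence):
        """Hypothetical driver for Hamming_check_0_or_1: yield
        (errors, genome_offset) for every 25-char window of the genome
        that matches sequence with 0 or 1 mismatches."""
        for posn in xrange(len(genome) - 25 + 1):
            errors = Hamming_check_0_or_1(genome, posn, sequence)
            if errors < 2:
                yield errors, posn

    The mismatch position inside the sequence, which the question asks to record, can then be recovered by comparing sequence with genome[posn:posn+25].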

    Some work on bit-parallel approximate Levenshtein searches with skipping has been done by Navarro and Raffinot (google "Navarro Raffinot nrgrep"), and this could be adapted to Hamming searches. Note that bit-parallel methods have limitations on the length of the query string and the alphabet size, but yours are 25 and 4 respectively, so no problems there. Update: skipping is probably not much help with an alphabet size of 4.

    When you google for Hamming distance search, you will notice lots of stuff about implementing it in hardware, and not much in software. This is a big hint that maybe whatever algorithm you come up with ought to be implemented in C or some other compiled language.

    Update: Working code for a bit-parallel method

    I've also supplied a simplistic method to help with correctness checking, and I've packaged a variation of Paul's re code for comparison. Note that re.finditer() delivers non-overlapping results, which can cause a distance-1 match to shadow an exact match; see my last test case.
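
    A tiny illustration of that shadowing, using a cut-down two-alternative regex rather than the full generated one below:

    import re

    # "TCGT" occurs exactly at offset 3 of this text, but the distance-1
    # alternative already matches at offset 0 and consumes those characters,
    # so finditer never reports the exact match at offset 3.
    pat = re.compile("(TCGT)|([ACGN]CGT)")   # exact | mismatch allowed only at position 0
    print([m.start() for m in pat.finditer("ACGTCGTAAAA")])   # prints [0]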

    The bit-parallel method has these features: guaranteed linear behaviour, O(N) where N is the text length. Note that the naive method is O(NM), as is the regex method (M is the pattern length); a Boyer-Moore-style method would be worst case O(NM) and expected O(N). The bit-parallel method can also be used easily when the input has to be buffered: it can be fed a byte or a megabyte at a time, with no look-ahead and no problems at buffer boundaries. The big advantage: the speed of that simple per-input-byte code when coded in C.

    Downsides: the pattern length is effectively limited to the number of bits in a fast register, e.g. 32 or 64. In this case the pattern length is 25; no problem. It uses extra memory (S_table) proportional to the number of distinct characters in the pattern. In this case, the "alphabet size" is only 4; no problem.
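
    To make the buffering point concrete, here is a hedged sketch of a chunk-fed wrapper (Ham1Stream and feed are just illustrative names, not part of the original); it carries the R0/R1 state of the recurrence, which the working code below uses, across chunk boundaries:

    from collections import defaultdict

    class Ham1Stream(object):
        """Hypothetical chunk-by-chunk variant of the bit-parallel search below:
        feed() can be called with a byte or a megabyte at a time."""
        def __init__(self, pattern):
            self.m = len(pattern)
            self.S_table = defaultdict(int)
            for i, c in enumerate(pattern):
                self.S_table[c] |= 1 << i
            self.mask = 1 << (self.m - 1)
            self.R0 = 0        # prefixes matched exactly so far
            self.R1 = 0        # prefixes matched with <= 1 substitution so far
            self.consumed = 0  # characters seen in earlier chunks

        def feed(self, chunk):
            """Yield (distance, start_offset) for matches ending inside this chunk."""
            for j, c in enumerate(chunk, self.consumed):
                S = self.S_table[c]
                shR0 = (self.R0 << 1) | 1
                self.R0 = shR0 & S
                self.R1 = ((self.R1 << 1) | 1) & S | shR0
                if self.R0 & self.mask:
                    yield 0, j - self.m + 1
                elif self.R1 & self.mask:
                    yield 1, j - self.m + 1
            self.consumed += len(chunk)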

    Details are in this technical report. The algorithm there is for approximate search using Levenshtein distance. To convert to using Hamming distance, I simply (!) removed the pieces of statement 2.1 that handle insertion and deletion. You'll notice lots of references to "R" with a "d" superscript; "d" is distance, and we need only 0 and 1. These "R"s become the R0 and R1 variables in the code below.

    # coding: ascii
    
    from collections import defaultdict
    import re
    
    _DEBUG = 0
    
    
    # "Fast Text Searching with Errors" by Sun Wu and Udi Manber
    # TR 91-11, Dept of Computer Science, University of Arizona, June 1991.
    # http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.8854
    
    def WM_approx_Ham1_search(pattern, text):
        """Generate (Hamming_dist, start_offset)
        for matches with distance 0 or 1"""
        m = len(pattern)
        S_table = defaultdict(int)
        for i, c in enumerate(pattern):
            S_table[c] |= 1 << i
        R0 = 0
        R1 = 0
        mask = 1 << (m - 1)
        for j, c in enumerate(text):
            S = S_table[c]
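            # Bit i of R0: pattern[0..i] matches the text ending at position j exactly.
            # Bit i of R1: pattern[0..i] matches with at most one substitution.
            # shR0 (below) extends each exact prefix by one position regardless of
            # the current character, i.e. it permits a substitution at position j.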
            shR0 = (R0 << 1) | 1
            R0 = shR0 & S
            R1 = ((R1 << 1) | 1) & S | shR0
            if _DEBUG:
                print "j= %2d msk=%s S=%s R0=%s R1=%s" \
                    % tuple([j] + map(bitstr, [mask, S, R0, R1]))
            if R0 & mask: # exact match
                yield 0, j - m + 1
            elif R1 & mask: # match with one substitution
                yield 1, j - m + 1
    
    if _DEBUG:
    
        def bitstr(num, mlen=8):
            wstr = ""
            for i in xrange(mlen):
                if num & 1:
                    wstr = "1" + wstr
                else:
                    wstr = "0" + wstr
                num >>= 1
            return wstr
    
    def Ham_dist(s1, s2):
        """Calculate Hamming distance between 2 sequences."""
        assert len(s1) == len(s2)
        return sum(c1 != c2 for c1, c2 in zip(s1, s2))
    
    def long_check(pattern, text):
        """Naively and understandably generate (Hamming_dist, start_offset)
        for matches with distance 0 or 1"""
        m = len(pattern)
        for i in xrange(len(text) - m + 1):
            d = Ham_dist(pattern, text[i:i+m])
            if d < 2:
                yield d, i
    
    def Paul_McGuire_regex(pattern, text):
        searchSeqREStr = (
            '('
            + pattern
            + ')|('
            + ')|('.join(
                pattern[:i]
                + "[ACTGN]".replace(c,'')
                + pattern[i+1:]
                for i,c in enumerate(pattern)
                )
            + ')'
            )
        searchSeqRE = re.compile(searchSeqREStr)
        for match in searchSeqRE.finditer(text):
            locn = match.start()
            dist = int(bool(match.lastindex - 1))
            yield dist, locn
    
    
    if __name__ == "__main__":
    
        genome1 = "TTTACGTAAACTAAACTGTAA"
        #         01234567890123456789012345
        #                   1         2
    
        tests = [
            (genome1, "ACGT ATGT ACTA ATCG TTTT ATTA TTTA"),
            ("T" * 10, "TTTT"),
            ("ACGTCGTAAAA", "TCGT"), # partial match can shadow an exact match
            ]
    
        nfailed = 0
        for genome, patterns in tests:
            print "genome:", genome
            for pattern in patterns.split():
                print pattern
                a1 = list(WM_approx_Ham1_search(pattern, genome))
                a2 = list(long_check(pattern, genome))
                a3 = list(Paul_McGuire_regex(pattern, genome))
                print a1
                print a2
                print a3
                print a1 == a2, a2 == a3
                nfailed += (a1 != a2 or a2 != a3)
        print "***", nfailed
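
    Finally, a hedged sketch of wiring this up to the original problem (probe_hits and the file name are purely illustrative): each 25-mer gets its own linear scan of the genome, and its first hit, if any, is recorded.

    def probe_hits(genome, probes):
        """Hypothetical driver for the 230,000-sequence case: map each probe
        to its first (distance, genome_offset) hit, or None if it is absent."""
        results = {}
        for probe in probes:
            hit = None
            for dist, offset in WM_approx_Ham1_search(probe, genome):
                hit = (dist, offset)
                break   # the question only asks whether the probe occurs at all
            results[probe] = hit
            # If the distance is 1, the mismatch position inside the probe can
            # be recovered by comparing probe with genome[offset:offset+25].
        return results

    # Illustrative use with made-up names:
    # genome = open("toxoplasma_genome.txt").read().strip()
    # hits = probe_hits(genome, ["AGCCTCCCATGATTGAACAGATCAT"])

    This is one full pass over the genome per probe; the question says speed is not critical, so that should be acceptable.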