Tags: python, string, generator

Is there a generator version of `string.split()` in Python?


string.split() returns a list instance. Is there a version that returns a generator instead? Are there any reasons against having a generator version?


Solution

  • It is highly probable that re.finditer uses fairly minimal memory overhead.

    import re

    def split_iter(string):
        return (x.group(0) for x in re.finditer(r"[A-Za-z']+", string))
    

    Demo:

    >>> list( split_iter("A programmer's RegEx test.") )
    ['A', "programmer's", 'RegEx', 'test']
    

    I have confirmed that this takes constant memory in python 3.2.1, assuming my testing methodology was correct. I created a string of very large size (1GB or so), then iterated through the iterable with a for loop (NOT a list comprehension, which would have generated extra memory). This did not result in a noticeable growth of memory (that is, if there was a growth in memory, it was far far less than the 1GB string).
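    As a rough sketch of that methodology (scaled far down from 1 GB, with an arbitrary repetition count), consuming the iterator with a for loop keeps only one match object alive at a time:

    ```python
    import re

    def split_iter(string):
        # Lazy: re.finditer produces match objects on demand.
        return (x.group(0) for x in re.finditer(r"[A-Za-z']+", string))

    big = "word " * 100_000          # stand-in for a truly huge input
    count = 0
    for token in split_iter(big):    # no list() -- nothing accumulates
        count += 1
    print(count)                     # 100000
    ```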

    More general version:

    In reply to a comment "I fail to see the connection with str.split", here is a more general version:

    import re

    def splitStr(string, sep=r"\s+"):
        # warning: does not yet work if sep is a lookahead like `(?=b)`
        if sep == '':
            return (c for c in string)
        return (m.group(1) for m in re.finditer(f'(?:^|{sep})((?:(?!{sep}).)*)', string))

    # Alternatively, more verbosely, as a standalone generator function.
    # (Keep it separate: mixing `yield` into the body above would turn
    # splitStr itself into a generator and break its plain `return`s.)
    def splitStrVerbose(string, sep=r"\s+"):
        regex = f'(?:^|{sep})((?:(?!{sep}).)*)'
        for match in re.finditer(regex, string):
            fragment = match.group(1)
            yield fragment
    

    The idea is that ((?!pat).)* 'negates' a group by ensuring it greedily matches until pat would start to match (lookaheads do not consume characters in the regex finite-state machine). In pseudocode: repeatedly consume (begin-of-string or {sep}), then as much as possible until we would be able to begin again (or hit the end of the string).
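    To see the 'negated group' in isolation (the delimiter ',' here is just for illustration):

    ```python
    import re

    # The tempered dot ((?:(?!,).)*) greedily matches characters but
    # refuses to step over a comma:
    print(re.match(r'((?:(?!,).)*)', 'abc,def').group(1))  # 'abc'

    # A plain greedy .* would run right past it:
    print(re.match(r'(.*)', 'abc,def').group(1))           # 'abc,def'
    ```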

    Demo:

    >>> splitStr('.......A...b...c....', sep='...')
    <generator object splitStr.<locals>.<genexpr> at 0x7fe8530fb5e8>
    
    >>> list(splitStr('A,b,c.', sep=','))
    ['A', 'b', 'c.']
    
    >>> list(splitStr(',,A,b,c.,', sep=','))
    ['', '', 'A', 'b', 'c.', '']
    
    >>> list(splitStr('.......A...b...c....', r'\.\.\.'))
    ['', '', '.A', 'b', 'c', '.']
    
    >>> list(splitStr('   A  b  c. '))
    ['', 'A', 'b', 'c.', '']
    
    

    (One should note that str.split has an ugly special case: with sep=None it first strips leading and trailing whitespace, as if by str.strip. The above purposefully does not do that; see the last example, which uses the default sep=r"\s+".)
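    A quick side-by-side (splitStr as defined above) makes the difference concrete:

    ```python
    import re

    def splitStr(string, sep=r"\s+"):
        return (m.group(1) for m in re.finditer(f'(?:^|{sep})((?:(?!{sep}).)*)', string))

    s = '   A  b  c. '
    print(s.split())          # ['A', 'b', 'c.']  -- sep=None strips the ends first
    print(list(splitStr(s)))  # ['', 'A', 'b', 'c.', '']  -- edge fragments kept
    ```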

    (I ran into various bugs, including an internal re.error, while trying to implement this... Negative lookbehind would restrict you to fixed-length delimiters, so we don't use that. Almost anything besides the above regex seemed to break on the beginning-of-string and end-of-string edge cases: e.g. r'(.*?)($|,)' on ',,,a,,b,c' returns ['', '', '', 'a', '', 'b', 'c', ''] with an extraneous empty string at the end. One can look at the edit history for another seemingly-correct regex that actually has subtle bugs.)

    (If you want to implement this yourself for higher performance (regexes are heavyweight, but crucially they run in C), you'd write some code (with ctypes? not sure how to get generators working with it?), using the following pseudocode for fixed-length delimiters: hash your delimiter of length L; keep a running hash of length L as you scan the string, using a rolling-hash algorithm with O(1) update time; whenever the hash might equal your delimiter's, manually check whether the past few characters actually were the delimiter, and if so, yield the substring since the last yield. Special-case the beginning and end of the string. This would be a generator version of the textbook algorithm for O(N) text search. Multiprocessing versions are also possible. It might seem overkill, but the question implies one is working with really huge strings... At that point you might consider crazy things like caching byte offsets if there are few of them, working from disk with some disk-backed bytestring view object, buying more RAM, etc. etc.)
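    A pure-Python sketch of that pseudocode (the name split_rk and the hash parameters are my choices; a real speedup would need this loop in C, since per-character Python bytecode is slow):

    ```python
    def split_rk(s, sep, base=257, mod=(1 << 61) - 1):
        """Generator split for a fixed-length delimiter via a rolling
        (Rabin-Karp) hash: O(1) hash update per character; on a hash
        hit, verify with a direct comparison before yielding."""
        if not sep:
            raise ValueError('empty separator')
        L, n = len(sep), len(s)
        target = 0
        for c in sep:
            target = (target * base + ord(c)) % mod
        high = pow(base, L - 1, mod)   # weight of the outgoing character
        h = win = 0                    # rolling hash and current window size
        start = i = 0                  # fragment start and scan position
        while i < n:
            if win < L:
                h = (h * base + ord(s[i])) % mod
                win += 1
            else:                      # slide: drop s[i-L], take s[i]
                h = ((h - ord(s[i - L]) * high) * base + ord(s[i])) % mod
            i += 1
            if win == L and h == target and s[i - L:i] == sep:
                yield s[start:i - L]
                start, h, win = i, 0, 0   # restart window past the delimiter
        yield s[start:]                # tail fragment (possibly empty)
    ```

    For example, list(split_rk(',,A,b,c.,', ',')) agrees with ',,A,b,c.,'.split(',').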