Tags: python, string, compression, tuples, character-replacement

How to do Byte Pair Encoding bigram counting and replacements efficiently in Python?


In the Byte Pair Encoding algorithm, there's a replacement step that merges pairs of adjacent symbols within the space-delimited character strings into single bigram symbols.

I.e., given a list of str tuples like this:

[('t', 'h', 'i', 's', '\ue000'), ('c', 'o', 'r', 'p', 'u', 's', '\ue000'), ('i', 'n', '\ue000'), ('t', 'x', 't', 'f', 'i', 'l', 'e', '\ue000'), ('t', 'h', 'e', '\ue000'), ('s', 'e', 'n', 't', 'e', 'n', 'c', 'e', '\ue000'), ('b', 'a', 'r', '\ue000'), ('a', 'n', 'd', '\ue000'), ('i', 's', '\ue000'), ('f', 'o', 'o', '\ue000'), ('f', 'i', 'r', 's', 't', '\ue000'), ('a', '\ue000'), ('.', '\ue000')]

And a string tuple: ('i', 's')

How do I process the list so that it iterates through all the tuples and replaces ('i', 's') with ('is',)? I.e., the output list will look something like this:

[('t', 'h', 'is', '\ue000'), ('c', 'o', 'r', 'p', 'u', 's', '\ue000'), ('i', 'n', '\ue000'), ('t', 'x', 't', 'f', 'i', 'l', 'e', '\ue000'), ('t', 'h', 'e', '\ue000'), ('s', 'e', 'n', 't', 'e', 'n', 'c', 'e', '\ue000'), ('b', 'a', 'r', '\ue000'), ('a', 'n', 'd', '\ue000'), ('is', '\ue000'), ('f', 'o', 'o', '\ue000'), ('f', 'i', 'r', 's', 't', '\ue000'), ('a', '\ue000'), ('.', '\ue000')]

I've tried this:

>>> cin
[('t', 'h', 'i', 's', '\ue000'), ('c', 'o', 'r', 'p', 'u', 's', '\ue000'), ('i', 'n', '\ue000'), ('t', 'x', 't', 'f', 'i', 'l', 'e', '\ue000'), ('t', 'h', 'e', '\ue000'), ('s', 'e', 'n', 't', 'e', 'n', 'c', 'e', '\ue000'), ('b', 'a', 'r', '\ue000'), ('a', 'n', 'd', '\ue000'), ('i', 's', '\ue000'), ('f', 'o', 'o', '\ue000'), ('f', 'i', 'r', 's', 't', '\ue000'), ('a', '\ue000'), ('.', '\ue000')]
>>> qtuple = ('i', 's')
>>> [tuple(' '.join(i).replace(' '.join(qtuple), ''.join(qtuple)).split()) for i in cin]
[('t', 'h', 'is', '\ue000'), ('c', 'o', 'r', 'p', 'u', 's', '\ue000'), ('i', 'n', '\ue000'), ('t', 'x', 't', 'f', 'i', 'l', 'e', '\ue000'), ('t', 'h', 'e', '\ue000'), ('s', 'e', 'n', 't', 'e', 'n', 'c', 'e', '\ue000'), ('b', 'a', 'r', '\ue000'), ('a', 'n', 'd', '\ue000'), ('is', '\ue000'), ('f', 'o', 'o', '\ue000'), ('f', 'i', 'r', 's', 't', '\ue000'), ('a', '\ue000'), ('.', '\ue000')]

but is there a more efficient way than looping through each word, joining it into a string to do the replacement, splitting it again, and then casting it back into a tuple?

Would regex replacement be faster? Is there a way to work with the list of tuples without dealing with strings?
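
For reference, here is a minimal sketch of the second idea, merging a pair purely on tuples with no join/replace/split round trip (merge_pair is a hypothetical helper name, not from any library):

def merge_pair(words, pair):
    """Replace every occurrence of `pair` inside each word tuple."""
    merged = ''.join(pair)
    out = []
    for word in words:
        new_word = []
        i = 0
        while i < len(word):
            if word[i:i + 2] == pair:
                # Emit the merged symbol and skip both members of the pair.
                new_word.append(merged)
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        out.append(tuple(new_word))
    return out

print(merge_pair([('t', 'h', 'i', 's', u'\ue000')], ('i', 's')))
# -> [('t', 'h', 'is', '\ue000')]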


I've tried this, and it seems that replacing the string with str.replace is not the problem. It's really counting the bigrams and extracting the most frequent one that is slow:

import io
from collections import Counter

import time

infile = 'big.txt' # comes from norvig.com/big.txt

n = 2
with io.open(infile, 'r', encoding='utf8') as fin:
    # Use a private-use character as the word delimiter.
    text = fin.read().lower().replace(u' ', u"\uE000")
    for j in range(1, 6400):
        # Pick a fresh private-use character to stand in for this merge.
        unused_char = unichr(ord(u'\uE001') + j)

        start = time.time()
        # All overlapping character bigrams of the whole text.
        char_bigrams = zip(*[text[i:] for i in range(n)])
        bigram_time = time.time() - start

        start = time.time()
        # Most frequent bigram that crosses neither a word boundary nor a newline.
        most_freq_bigram = Counter(filter(lambda x: u"\uE000" not in x and '\n' not in x, char_bigrams)).most_common(1)[0][0]
        max_time = time.time() - start

        start = time.time()
        # Merge the pair everywhere by replacing it with the fresh character.
        text = text.replace(''.join(most_freq_bigram), unused_char)
        replace_time = time.time() - start
        print j, ''.join(most_freq_bigram), most_freq_bigram, bigram_time, max_time, replace_time
    print text

This was tested on norvig.com/big.txt.

[out]:

1 th (u't', u'h') 0.896255016327 3.28389787674 0.0253069400787
2 e (u'\ue002', u'e') 1.47053217888 3.16544914246 0.0280749797821
3 in (u'i', u'n') 1.13404297829 3.10529899597 0.0245559215546
4 an (u'a', u'n') 1.20013689995 3.63801002502 0.0242891311646
5 er (u'e', u'r') 1.41387891769 3.13376092911 0.0237591266632
6 on (u'o', u'n') 1.22826981544 3.06997895241 0.0227301120758
7 re (u'r', u'e') 1.21916294098 2.97599196434 0.0238041877747
8 at (u'a', u't') 1.14608097076 2.97988891602 0.0226521492004
9 en (u'e', u'n') 1.20747494698 2.88649988174 0.019054889679
10 ed (u'e', u'd') 1.16296696663 2.8995718956 0.0198271274567
11 is (u'i', u's') 1.17692494392 3.02292394638 0.0228500366211
12 d (u'\ue005', u'd') 1.13779211044 2.85169506073 0.0229239463806

I've already experimented with scikit-learn's CountVectorizer and it didn't seem to be as fast as using zip; see Fast/Optimize N-gram implementations in python.

Also, without the filter operation in the Counter step, it took even longer. The Counter operation is taking 3 seconds per iteration =(

How else can this operation be optimized?

Counter(filter(lambda x: u"\uE000" not in x and '\n' not in x, char_bigrams)).most_common(1)[0][0]
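
One micro-optimisation worth trying (a sketch, not benchmarked here; top_bigram is my name for it): stream the bigrams through izip and a generator expression instead of materialising the zipped and filtered lists, so Counter consumes the pairs lazily. The hashing inside Counter still dominates, so expect modest gains:

from collections import Counter
from itertools import izip  # Python 2; Python 3's built-in zip is already lazy

def top_bigram(text):
    # Lazily pair each character with its successor, and drop pairs that
    # cross a word boundary or a newline before counting.
    pairs = izip(text, text[1:])
    return Counter(p for p in pairs
                   if u'\ue000' not in p and '\n' not in p).most_common(1)[0][0]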

Solution

  • If you keep your string tuple to length 2, you could use reduce like this:

    def cons_2(word_list, t):
        j = ''.join(t)
        # Merge when the last accumulated symbol is t[0] and the incoming
        # symbol e is t[1]; otherwise just append e to the accumulator.
        f = lambda acc, e: acc[:-1] + (j,) if (acc[-1] == t[0] and e == t[1]) else acc + (e,)
        return [reduce(f, i[1:], (i[0],)) for i in word_list]

    print cons_2(cin, ('i', 's'))
    

    No replacing is involved; f is applied to every element of each word. The value of cin is not altered; instead, a new list is built and returned.
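
    To see what the fold does, here is how it unrolls on a single word (a standalone sketch; reduce is a built-in in Python 2, in Python 3 it comes from functools):

    t = ('i', 's')
    j = ''.join(t)  # 'is'
    f = lambda acc, e: acc[:-1] + (j,) if (acc[-1] == t[0] and e == t[1]) else acc + (e,)

    word = ('t', 'h', 'i', 's', u'\ue000')
    # The accumulator grows one symbol at a time; when 's' arrives right
    # after an 'i', the trailing 'i' is replaced by the merged symbol 'is'.
    print reduce(f, word[1:], (word[0],))  # ('t', 'h', 'is', u'\ue000')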


    For string tuples of length > 2 the idea is the same, but we have to manage the tuple's length l.

    def cons_n(word_list, t):
        l = len(t)
        j = ''.join(t)
        # Merge when the last l accumulated symbols equal t.
        f = lambda acc, e: acc[:-l] + (j, e,) if acc[-l:] == t else acc + (e,)
        # The fold can only merge when a further symbol follows the match,
        # so catch a match that ends at the very last symbol of a word.
        g = lambda acc: acc[:-l] + (j,) if acc[-l:] == t else acc
        return [g(reduce(f, i[l:], i[:l])) for i in word_list]

    print cons_n(cin, ('i', 's'))
    

    This should work with n-length string tuples.
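
    For instance, merging a hypothetical length-3 sequence ('e', 'n', 't') only touches the one word in cin that contains it:

    print cons_n(cin, ('e', 'n', 't'))
    # 'sentence' becomes ('s', 'ent', 'e', 'n', 'c', 'e', u'\ue000');
    # every other word is returned unchanged.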


    This is a functional approach: no data is modified, only generated, so it should be safe to run several processes at the same time (in theory; I'm no expert on the Python interpreter).

    If the code above is too weird for people who are not into functional programming, here is another approach:

    def cons_n_iter(tuple_list, tuple_seq):
        jnt = ''.join(tuple_seq)
        lnt = len(tuple_seq)
        res = []
        for word in tuple_list:
            acc = word[:lnt]
            for letter in word[lnt:]:
                if acc[-lnt:] == tuple_seq:
                    acc = acc[:-lnt] + (jnt, letter,)
                else:
                    acc += (letter,)
            # Catch a match that ends at the last symbol of the word.
            if acc[-lnt:] == tuple_seq:
                acc = acc[:-lnt] + (jnt,)
            res.append(acc)
        return res

    print cons_n_iter(cin, ('i', 's'))
    

    The logic is the same as in the functional approach, with the same use of the accumulator. In this case the res accumulator is explicit, because in the examples above reduce was taking care of it.
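
    As a quick sanity check (assuming cin from the question), all three helpers should agree on the pair case:

    assert cons_2(cin, ('i', 's')) == cons_n(cin, ('i', 's')) == cons_n_iter(cin, ('i', 's'))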