Is there a faster way to get the LBP and the resulting histograms of the MNIST dataset? They will be used for handwritten text recognition, through a model I haven't decided on yet.
I've loaded the MNIST dataset and split it into its x/y training and test sets following the TensorFlow tutorials. I've then used cv2 to invert the images. From there I've defined a function using skimage to get the LBP and the corresponding histogram of an input image. Finally, I used a classic for loop to iterate through the images, compute their histograms, store them in a separate list, and return the new list together with the unaltered label lists of both the training and test sets.
Here is the function to load the MNIST dataset:
import tensorflow as tf
import cv2

def loadDataset():
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # should I invert it or not?
    x_train = cv2.bitwise_not(x_train)
    x_test = cv2.bitwise_not(x_test)
    return (x_train, y_train), (x_test, y_test)
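As a side note on the inversion step: for uint8 images, a bitwise NOT (which is what cv2.bitwise_not performs) is the same as computing 255 - img, so it just swaps black and white without losing information. A numpy-only sketch of that equivalence (cv2 not required here):

```python
import numpy as np

# For uint8 arrays, bitwise NOT (what cv2.bitwise_not does) is identical
# to 255 - x; numpy's invert performs the same bit flip:
img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
inverted = np.invert(img)  # same result as cv2.bitwise_not(img)
print(inverted.tolist())   # [[255, 155], [55, 0]]
```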
Here is the function to get the LBP and the corresponding histogram:
import numpy as np
from skimage import feature

def getLocalBinaryPattern(img, points, radius):
    lbp = feature.local_binary_pattern(img, points, radius, method="uniform")
    hist, _ = np.histogram(lbp.ravel(),
                           bins=np.arange(0, points + 3),
                           range=(0, points + 2))
    return lbp, hist
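A note on the bin bookkeeping: with method="uniform" and P sampling points, local_binary_pattern produces integer labels in {0, …, P+1}, i.e. P+2 distinct values, so the edges np.arange(0, P+3) create exactly one bin per label. A numpy-only sanity check (no skimage needed):

```python
import numpy as np

points = 8
# method="uniform" yields labels 0 .. points+1 (points + 2 distinct values);
# the edges np.arange(0, points + 3) create one bin per label.
fake_lbp = np.arange(points + 2)  # every possible label exactly once
hist, edges = np.histogram(fake_lbp, bins=np.arange(0, points + 3))
print(hist.shape)  # (10,)
print(hist)        # [1 1 1 1 1 1 1 1 1 1] -- one count per label
```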
And lastly here's the function to iterate over the images:
def formatDataset(dataset):
    (x_train, y_train), (x_test, y_test) = dataset
    x_train_hst = []
    for i in range(len(x_train)):
        _, hst = getLocalBinaryPattern(x_train[i], 8, 1)
        print("Computing LBP for training set: {}/{}".format(i, len(x_train)))
        x_train_hst.append(hst)
    print("Done computing LBP for training set!")
    x_test_hst = []
    for i in range(len(x_test)):
        _, hst = getLocalBinaryPattern(x_test[i], 8, 1)
        print("Computing LBP for test set: {}/{}".format(i, len(x_test)))
        x_test_hst.append(hst)
    print("Done computing LBP for test set!")
    print("Done!")
    return (x_train_hst, y_train), (x_test_hst, y_test)
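Once the histograms are collected, the lists would typically be stacked into 2-D arrays before training whatever model ends up being chosen. This is a hypothetical follow-up step, not part of the code above; the placeholder data stands in for the real histograms:

```python
import numpy as np

# Hypothetical post-processing: stack per-image histograms (a list of 1-D
# arrays, as formatDataset returns) into a single (n_images, n_bins)
# matrix, the shape most classifiers expect.
hists = [np.full(10, i, dtype=np.float64) for i in range(1, 4)]  # placeholder
X = np.stack(hists)                        # shape (3, 10)
X_norm = X / X.sum(axis=1, keepdims=True)  # each row now sums to 1
print(X.shape)  # (3, 10)
```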
I knew it would be slow, and indeed it is. So I'm looking for ways to speed it up, or for an already-processed version of the dataset that contains this information.
I don't think there's a straightforward way to speed up the iteration over the images. One might expect that using NumPy's vectorize or apply_along_axis would improve performance, but these solutions are actually slower than a for loop (or a list comprehension).
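This matches what the NumPy documentation says about np.vectorize: it is provided primarily for convenience, and is essentially a for loop under the hood. A toy check (independent of the LBP code) that the signature-based call gives the same result as an explicit loop, just without any compiled speedup:

```python
import numpy as np

# np.vectorize with a gufunc-style signature maps a Python callable over
# the leading axes; it offers convenience, not a compiled speedup.
f = lambda row: row.sum()
vec_f = np.vectorize(f, signature='(n)->()')
data = np.arange(12).reshape(4, 3)
out_loop = np.array([f(row) for row in data])
out_vec = vec_f(data)
print(np.array_equal(out_loop, out_vec))  # True
```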
Different alternatives for iterating through the images:
def compr(imgs):
    hists = [getLocalBinaryPattern(img, 8, 1)[1] for img in imgs]
    return hists

def vect(imgs):
    lbp81riu2 = lambda img: getLocalBinaryPattern(img, 8, 1)[1]
    vec_lbp81riu2 = np.vectorize(lbp81riu2, signature='(m,n)->(k)')
    hists = vec_lbp81riu2(imgs)
    return hists

def app(imgs):
    lbp81riu2 = lambda img: getLocalBinaryPattern(img.reshape(28, 28), 8, 1)[1]
    pixels = np.reshape(imgs, (len(imgs), -1))
    hists = np.apply_along_axis(lbp81riu2, 1, pixels)
    return hists
Results:
In [112]: (x_train, y_train), (x_test, y_test) = loadDataset()
In [113]: %timeit -r 3 compr(x_train)
1 loop, best of 3: 14.2 s per loop
In [114]: %timeit -r 3 vect(x_train)
1 loop, best of 3: 17.1 s per loop
In [115]: %timeit -r 3 app(x_train)
1 loop, best of 3: 14.3 s per loop
In [116]: np.array_equal(compr(x_train), vect(x_train))
Out[116]: True
In [117]: np.array_equal(compr(x_train), app(x_train))
Out[117]: True