Tags: python, pandas, scipy, recommendation-engine, matrix-factorization

Recommendation system with matrix factorization for huge data gives MemoryError


I have three DB models (from Django) that can be used as the input for building a recommendation system:

I should still be able to use matrix factorization (MF) to build a recommendation system, even though the ratings are binary: 1 or 0 (saved or not saved).

In order to use all the MF algorithms found in either scipy or surprise, I have to create a pandas DataFrame and pivot it such that all userIds will be the rows (indexes) and all movieIds will be the columns.

A code snippet for doing this:

import numpy
import pandas

# usersSet and moviesSet contain only the ids of users or movies
zeros = numpy.zeros(shape=(len(usersSet), len(moviesSet)), dtype=numpy.int8)
saves_df = pandas.DataFrame(zeros, index=list(usersSet), columns=list(moviesSet))

# savesFromDb is a Django queryset of user -> movie save relationships
for save in savesFromDb.iterator(chunk_size=50000):
    userId = save['user__id']
    movieId = save['movie__id']
    saves_df.at[userId, movieId] = 1
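For scale, here is a rough back-of-the-envelope calculation (using the counts from the question below; the per-entry byte costs are assumptions) showing why this dense pivot runs out of memory while a sparse layout would not:

```python
# Counts taken from the question
n_users, n_movies, n_saves = 100_000, 120_000, 450_000

# Dense user x movie matrix: one byte per cell with int8
dense_bytes = n_users * n_movies * 1
print(dense_bytes / 1e9, "GB dense")            # 12.0 GB dense

# A sparse layout stores only the nonzero entries (plus their indices);
# roughly: 8-byte value + 4-byte row index + 4-byte column index each
sparse_bytes = n_saves * (8 + 4 + 4)
print(sparse_bytes / 1e6, "MB sparse (rough)")  # 7.2 MB sparse (rough)
```

So the dense pivot needs on the order of 12 GB even at one byte per cell, while the actual save data is a few megabytes.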

Problems so far: building this dense DataFrame over all users and movies raises a MemoryError.

Questions:

  1. Given that there are ~100k users, ~120k movies and ~450k saves, what's the best approach to model this in order to use recommendation algorithms but still not get MemoryError?
  2. I also tried using DataFrame.pivot(), but is there a way to build the pivot from 3 different DataFrames? I.e. the indexes would come from list(usersSet), the columns from list(moviesSet), and the values would be filled by iterating over savesFromDb and putting a 1 wherever there is a userId -> movieId relationship.
  3. Aside from surprise's rating_scale parameter where you can define the rating (in my case would be (0, 1)), is there any other way in terms of algorithm approach or data model structure to leverage the fact that the rating in my case is only 1 or 0 (saved or not saved)?

Solution

  • If an algorithm accepts sparse matrices, I would highly recommend using them to get rid of the memory issues. scipy.sparse.linalg.svds works on scipy sparse matrices.

    This is how you can go about creating sparse matrices for your case:

    Let's say we have 3 users ('a', 'b', 'c') and 3 movies ('aa', 'bb', 'cc'). The save history: 'a' saved 'aa' and 'bb', 'b' saved 'bb', and 'c' saved 'cc'.

    We need to create a csr_matrix A_sparse, such that users represent rows, movies represent columns, and if user i has saved movie j, then A[i, j] = 1.

    import numpy as np
    from scipy.sparse import csr_matrix

    # save history as parallel lists of (user, movie) pairs
    saved_users = ["a", "b", "c", "a"]
    saved_movies = ["aa", "bb", "cc", "bb"]

    # index users and movies by integers
    user2int = {u: i for i, u in enumerate(np.unique(saved_users))}
    movie2int = {m: i for i, m in enumerate(np.unique(saved_movies))}

    # row and column indices where we need to populate 1's
    usersidx = [user2int[u] for u in saved_users]
    moviesidx = [movie2int[m] for m in saved_movies]

    # Here we only use a binary flag for the data: 1 for every saved instance.
    # Instead, one could also use something like a count of saves.
    data = np.ones(len(saved_users))

    # create the csr matrix
    A_sparse = csr_matrix((data, (usersidx, moviesidx)))
    
    print("Sparse array:", repr(A_sparse))

    # <3x3 sparse matrix of type '<class 'numpy.float64'>'
    #   with 4 stored elements in Compressed Sparse Row format>

    print(A_sparse.data.nbytes)
    # 32

    print("Dense array:\n", A_sparse.toarray())

    # [[1. 1. 0.]
    #  [0. 1. 0.]
    #  [0. 0. 1.]]

    print(A_sparse.toarray().nbytes)
    # 72
    
    

    Note that since roughly half of the toy matrix's cells are zeros, the sparse matrix's value buffer is about half the size of the dense ndarray; the savings grow in proportion to the fraction of zeros. In your real case (~450k saves spread over a 100k x 120k matrix, i.e. only about 0.004% of cells nonzero), the sparse representation is smaller by several orders of magnitude, while the dense matrix would need on the order of 12 GB even at one byte per cell.
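    As a follow-up sketch (my own addition, not part of the original answer): once you have the csr_matrix, scipy.sparse.linalg.svds can factorize it directly without ever densifying it. Note that svds needs float data and k strictly smaller than min(A.shape):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Same toy save history as above
saved_users = ["a", "b", "c", "a"]
saved_movies = ["aa", "bb", "cc", "bb"]
user2int = {u: i for i, u in enumerate(np.unique(saved_users))}
movie2int = {m: i for i, m in enumerate(np.unique(saved_movies))}

data = np.ones(len(saved_users))  # float64, as svds requires
A_sparse = csr_matrix(
    (data, ([user2int[u] for u in saved_users],
            [movie2int[m] for m in saved_movies]))
)

# Truncated SVD with k latent factors; k must be < min(A_sparse.shape)
U, sigma, Vt = svds(A_sparse, k=2)

# Reconstruct approximate "affinity" scores from the latent factors;
# higher scores for unseen (user, movie) pairs suggest recommendations
scores = U @ np.diag(sigma) @ Vt
print(scores.shape)  # (3, 3)
```

    With the real data the same code applies unchanged; only k (e.g. a few dozen factors) and the index dictionaries scale up.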