Tags: collaborative-filtering, recommendation-engine

Is testing a collaborative-filtering technique on a randomly generated user-item rating matrix meaningful?


I know that some data sets are available for running collaborative-filtering algorithms such as user-based or item-based filtering. However, I need to test my algorithm on many data sets to show that my proposed methodology performs better. I generated random user-item rating matrices with values from 1 to 5 and treat the generated matrices as ground truth. Then I remove some of the ratings from each matrix and use my algorithm to predict the missing ratings. Finally, I use RMSE to compare the ground-truth matrix with the matrix my algorithm outputs. Does this methodology seem meaningful or not?
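For reference, the evaluation pipeline described above can be sketched as follows. The matrix size, the 20% mask rate, and the per-item-mean baseline predictor are all illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random "ground truth": 100 users x 50 items, ratings drawn uniformly from 1..5.
ratings = rng.integers(1, 6, size=(100, 50)).astype(float)

# Hide roughly 20% of the entries to simulate missing ratings.
mask = rng.random(ratings.shape) < 0.2
observed = ratings.copy()
observed[mask] = np.nan

# Baseline predictor (stand-in for the proposed algorithm):
# fill each hidden entry with the per-item mean of the observed entries.
item_means = np.nanmean(observed, axis=0)
pred = np.where(mask, np.broadcast_to(item_means, ratings.shape), observed)

# RMSE over the hidden entries only.
rmse = np.sqrt(np.mean((pred[mask] - ratings[mask]) ** 2))
print(f"RMSE on hidden entries: {rmse:.3f}")
```

Note that on uniformly random ratings this RMSE lands near the standard deviation of the rating distribution itself, because there is no user-item structure for any predictor to exploit.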


Solution

  • No, not really.

    A uniformly random matrix is missing the non-uniform, real-world distributions that collaborative filtering exploits. Every recommendation system is built on assumptions; without them it cannot beat random guessing. (Keep in mind that this is not only about the distribution of the rating values, but also about which items get rated: there is a lot of theoretical research analyzing different sampling assumptions, e.g. uniform versus non-uniform observation patterns, mostly in convex matrix factorization with the nuclear norm versus the max norm and company.)

    A better approach is to pick the available datasets and, if needed, sub-sample them without destroying every kind of correlation, e.g. by filtering on some attribute: all ratings of movies released <= 1990 versus all ratings of movies released after 1990. Yes, this will shift the underlying distributions, but it sounds like that is what you want. If not, you can always sub-sample uniformly, but that is more useful for evaluating generalization (small versus big datasets).
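The attribute-based split suggested above might look like this with pandas. The table, its column names (`userId`, `movieId`, `rating`, `year`), and the toy values are hypothetical; a real dataset would require joining ratings with item metadata first:

```python
import pandas as pd

# Hypothetical ratings table already joined with item metadata.
ratings = pd.DataFrame({
    "userId":  [1, 1, 2, 2, 3],
    "movieId": [10, 20, 10, 30, 20],
    "rating":  [4, 3, 5, 2, 4],
    "year":    [1985, 1994, 1985, 2001, 1994],
})

# Split by an item attribute instead of sampling uniformly at random:
# this yields two datasets with different distributions but preserved
# user-item correlations within each half.
old = ratings[ratings["year"] <= 1990]  # ratings of movies released <= 1990
new = ratings[ratings["year"] > 1990]   # ratings of movies released after 1990

print(len(old), len(new))
```

Each split can then serve as a separate evaluation dataset, which gives multiple test sets without resorting to synthetic matrices.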