Mahout 0.13.0 / Java 8
I am completely new to Mahout and trying to understand how to implement a recommendation engine using it. So far I know:
Mahout provides 3 types of filtering.
To implement my first recommender, I started with collaborative filtering, which is easy to implement without Hadoop.
Collaborative Filtering -
Mahout interfaces:
1. DataModel
2. UserSimilarity
3. ItemSimilarity
4. UserNeighborhood
5. Recommender
I understand these components and have written user-based and item-based recommenders using multiple combinations of similarities and neighborhoods.
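To make the five interfaces above concrete, here is a minimal sketch of a user-based recommender wired together with the Taste API. The file name `ratings.csv` and the chosen similarity/neighborhood (Pearson correlation, 10 nearest neighbors) are illustrative assumptions, not the only valid choices.

```java
import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class UserBasedExample {
    public static void main(String[] args) throws Exception {
        // 1. DataModel: userID,itemID,rating triples from a CSV file
        //    ("ratings.csv" is a hypothetical path for this sketch)
        DataModel model = new FileDataModel(new File("ratings.csv"));

        // 2. UserSimilarity: Pearson correlation between users' ratings
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);

        // 4. UserNeighborhood: the 10 users most similar to the target user
        UserNeighborhood neighborhood =
                new NearestNUserNeighborhood(10, similarity, model);

        // 5. Recommender: combines the model, similarity, and neighborhood
        Recommender recommender =
                new GenericUserBasedRecommender(model, similarity, neighborhood);

        // Top-3 recommendations for user 1
        List<RecommendedItem> items = recommender.recommend(1, 3);
        for (RecommendedItem item : items) {
            System.out.println(item.getItemID() + " : " + item.getValue());
        }
    }
}
```

Swapping `PearsonCorrelationSimilarity` for another `UserSimilarity` implementation, or `NearestNUserNeighborhood` for a threshold-based one, changes the recommender's behavior without touching the rest of the wiring.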
Questions:
Can someone please clarify the following points for me?
1) MapReduce was deprecated completely in 0.10.0. The "new Mahout" is a mathematically expressive Scala DSL that is abstracted away from the engine: the same Scala code should be able to compile for Flink, Spark, and other engines. Yes, this was done for performance reasons.
2) There hasn't been a lot of work done on the Java API, though I've heard some people are working on it.
3) I think you're asking whether you could write a Spark recommendation engine in Java. The answer is yes. I haven't done much porting between Scala and Java, but in theory you should be able to import the Scala functions/classes into your Java code. This link shows a little more about writing a recommender from scratch; it is in Scala, so you'd need to port it to Java (if you do that, feel free to open a PR and we'll include it as an example).
4) Yes, it can. This link describes how to set up Spark with Mahout in Zeppelin, but the principles remain the same for any setup (e.g. which jars you need and what SparkConf settings you need to tweak).
IIRC, you need mahout-spark, mahout-math, and mahout-math-scala. (The spark-dependency-reduced jar is only needed when using local shell programs, e.g. Zeppelin or the Mahout Spark Shell.)
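If you manage dependencies with Maven, the artifacts above map roughly to the coordinates below. The exact artifact IDs (including the Scala-version suffix) and the 0.13.0 version are assumptions worth verifying against Maven Central for your setup.

```xml
<!-- Assumed coordinates for Mahout 0.13.0; check Maven Central before use -->
<dependency>
  <groupId>org.apache.mahout</groupId>
  <artifactId>mahout-math</artifactId>
  <version>0.13.0</version>
</dependency>
<dependency>
  <groupId>org.apache.mahout</groupId>
  <artifactId>mahout-math-scala_2.10</artifactId>
  <version>0.13.0</version>
</dependency>
<dependency>
  <groupId>org.apache.mahout</groupId>
  <artifactId>mahout-spark_2.10</artifactId>
  <version>0.13.0</version>
</dependency>
```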
5) Yes. Mahout is a library that runs on Spark or other distributed engines.