Tags: java, mapreduce, apache-spark

Apache Spark mapPartitionsWithIndex


Can someone give an example of the correct usage of mapPartitionsWithIndex in Java? I've found a lot of Scala examples, but Java ones are scarce. Is my understanding correct that separate partitions will be handled by separate nodes when using this function?

I am getting the following error:

method mapPartitionsWithIndex in class JavaRDD<T> cannot be applied to given types;
    JavaRDD<String> rdd = sc.textFile(filename).mapPartitionsWithIndex
    required: Function2<Integer,Iterator<String>,Iterator<R>>,boolean
    found: <anonymous Function2<Integer,Iterator<String>,Iterator<JavaRDD<String>>>>

When doing

JavaRDD<String> rdd = sc.textFile(filename).mapPartitionsWithIndex(
    new Function2<Integer, Iterator<String>, Iterator<JavaRDD<String>> >() {

    @Override
    public Iterator<JavaRDD<String>> call(Integer ind, String s) { 

Solution

  • Here is the code I use to remove the first line of a CSV file:

    JavaRDD<String> rawInputRdd = sparkContext.textFile(dataFile);
    
    Function2<Integer, Iterator<String>, Iterator<String>> removeHeader =
        new Function2<Integer, Iterator<String>, Iterator<String>>() {
            @Override
            public Iterator<String> call(Integer ind, Iterator<String> iterator) throws Exception {
                // Only the first partition (index 0) starts with the header line.
                if (ind == 0 && iterator.hasNext()) {
                    iterator.next(); // skip the header, then pass the rest through
                }
                return iterator;
            }
        };
    JavaRDD<String> inputRdd = rawInputRdd.mapPartitionsWithIndex(removeHeader, false);
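The key to the compile error in the question is the function's signature: `mapPartitionsWithIndex` wants a `Function2` whose second argument is an `Iterator<String>` (the rows of one partition) and whose result is another `Iterator`, not a `JavaRDD`. The per-partition logic itself can be exercised without a Spark cluster. Below is a minimal plain-Java sketch; the `PartitionFunction` interface and the sample partitions are hypothetical stand-ins for Spark's `Function2` and an RDD's partitions, used only to show why just partition 0 drops a line:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveHeaderDemo {

    // Stand-in for Spark's Function2<Integer, Iterator<String>, Iterator<String>>:
    // given a partition index and that partition's rows, return the rows to keep.
    interface PartitionFunction {
        Iterator<String> call(Integer index, Iterator<String> rows);
    }

    static final PartitionFunction REMOVE_HEADER = (index, rows) -> {
        // The header line lives in the first partition only.
        if (index == 0 && rows.hasNext()) {
            rows.next(); // consume and discard the header
        }
        return rows;
    };

    // Apply the function to each "partition" in turn, as Spark would
    // (in Spark, partitions may run on different executors in parallel).
    static List<String> apply(List<List<String>> partitions) {
        List<String> kept = new ArrayList<>();
        for (int i = 0; i < partitions.size(); i++) {
            Iterator<String> it = REMOVE_HEADER.call(i, partitions.get(i).iterator());
            while (it.hasNext()) {
                kept.add(it.next());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<List<String>> partitions = Arrays.asList(
            Arrays.asList("id,name", "1,alice"),  // partition 0 starts with the header
            Arrays.asList("2,bob", "3,carol"));   // later partitions are untouched
        System.out.println(apply(partitions));    // the "id,name" line is gone
    }
}
```

The same shape carries over to Spark: replace `PartitionFunction` with `org.apache.spark.api.java.function.Function2` and pass it to `mapPartitionsWithIndex(fn, false)` as in the solution above.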