I am trying to output some specific records in the reduce part, depending on the values of the key-value records. In Hadoop MapReduce I can use code like this:
@Override
public void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);
    Configuration conf = context.getConfiguration();
    FileSystem fs = FileSystem.get(conf);
    // one side file per reduce task, named by the task ID
    int taskID = context.getTaskAttemptID().getTaskID().getId();
    hdfsOutWriter = fs.create(new Path(fileName + taskID), true); // FSDataOutputStream
}

@Override
public void reduce(Text key, Iterable<Text> value, Context context) throws IOException, InterruptedException {
    boolean isSpecificRecord = false;
    ArrayList<String> valueList = new ArrayList<String>();
    for (Text val : value) {
        String element = val.toString();
        if (filterFunction(element)) return; // drop this key entirely
        if (specificFunction(element)) isSpecificRecord = true;
        valueList.add(element);
    }
    String returnValue = anyFunction(valueList);
    String specificInfo = anyFunction2(valueList);
    // side output to the per-task HDFS file, only for the specific records
    if (isSpecificRecord) hdfsOutWriter.writeBytes(key.toString() + "\t" + specificInfo);
    // normal reduce output
    context.write(key, new Text(returnValue));
}
I want to run this process on a Spark cluster. Can the Spark Java API do this, like the code above?
Just an idea of how to simulate it:
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.TaskContext

yoursRDD.mapPartitions { iter =>
  // one side file per partition, like one file per reduce task in MapReduce
  val fs = FileSystem.get(new Configuration())
  val ds = fs.create(new Path("outfileName_" + TaskContext.get.partitionId))
  ds.writeBytes("Put yours results")
  ds.close()
  iter
}
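Since you asked about the Java API, here is a rough sketch of the same idea with groupByKey plus mapPartitionsToPair (assuming Spark 2.x and a JavaPairRDD<String, String> called input; fileName, filterFunction, specificFunction, anyFunction and anyFunction2 are the helpers from your question and would have to be serializable so they can run on the executors):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.TaskContext;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;

// groupByKey plays the role of the MapReduce shuffle: one Iterable per key
JavaPairRDD<String, Iterable<String>> grouped = input.groupByKey();

JavaPairRDD<String, String> result = grouped.mapPartitionsToPair(iter -> {
    // one side file per partition, like one file per reduce task attempt
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(
            new Path(fileName + TaskContext.get().partitionId()), true);

    List<Tuple2<String, String>> output = new ArrayList<>();
    while (iter.hasNext()) {
        Tuple2<String, Iterable<String>> kv = iter.next();
        boolean isSpecificRecord = false;
        boolean skipKey = false;
        List<String> valueList = new ArrayList<>();
        for (String element : kv._2()) {
            if (filterFunction(element)) { skipKey = true; break; } // drop this key entirely
            if (specificFunction(element)) isSpecificRecord = true;
            valueList.add(element);
        }
        if (skipKey) continue;
        if (isSpecificRecord) {
            // side output; newline added so the side records don't run together
            out.writeBytes(kv._1() + "\t" + anyFunction2(valueList) + "\n");
        }
        // normal output, the equivalent of context.write(...)
        output.add(new Tuple2<>(kv._1(), anyFunction(valueList)));
    }
    out.close();
    return output.iterator();
});

The pairs returned from the lambda are the normal reduce output, which you can then write with something like result.saveAsTextFile(...). Keep in mind that mapPartitionsToPair is lazy, so the side files only get written once an action actually runs on result.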