python, apache-spark, join, pyspark, rdd

pyspark - Join two RDDs - Missing third column


I'm very new to PySpark, so please take that into consideration :)

Basically I have these two text files:

file1:

  1,9,5
  2,7,4
  3,8,3

file2:

  1,g,h
  2,1,j
  3,k,i

And the Python code:

    file1 = sc.textFile("/user/cloudera/training/file1.txt").map(lambda line: line.split(","))
    file2 = sc.textFile("/user/cloudera/training/file2.txt").map(lambda line: line.split(","))

Now doing this join:

    join_file = file1.join(file2)

I was hoping to get this:

  (1,(9,5),(g,h))
  (2,(7,4),(1,j))
  (3,(8,3),(k,i))

However, I am getting a different result:

  (1, (9,g))
  (3, (8,k))
  (2, (7,1))

Am I missing a parameter on join?

Thanks!


Solution

  • The join transformation works on pair RDDs, i.e. RDDs of (key, value) tuples. Your original map produced three-element lists, so join took the first element as the key and only the second as the value, silently dropping the third column. Keying each record by its first field and grouping the remaining fields into a list should do the trick:

    file1 = sc.textFile("/FileStore/tables/f1.txt").map(lambda line: line.split(",")).map(lambda x: (x[0], list(x[1:])))
    file2 = sc.textFile("/FileStore/tables/f2.txt").map(lambda line: line.split(",")).map(lambda x: (x[0], list(x[1:])))
    join_file = file1.join(file2)
    join_file.collect()
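    For contrast, here is a minimal sketch (not part of the original answer) of what went wrong in the question's version: pair-RDD operations such as join only look at positions 0 and 1 of each record, so with three-element lists the third field is silently dropped.

    # Hypothetical demo: one three-element record, as produced by the
    # question's original map(lambda line: line.split(","))
    demo = sc.parallelize([["1", "9", "5"]])
    # Pair-RDD operations unpack only the first two positions:
    demo.mapValues(lambda v: v).collect()   # [('1', '9')] -- the '5' is gone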
    

    The fixed version returns the following (the u'' prefixes just mark Python 2 Unicode strings):

    Out[3]: 
    [(u'2', ([u'7', u'4'], [u'1', u'j'])),
     (u'1', ([u'9', u'5'], [u'g', u'h'])),
     (u'3', ([u'8', u'3'], [u'k', u'i']))]
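
    If you want the flat (key, file1 columns, file2 columns) shape from the question, a small follow-up sketch (assuming the join_file RDD above) is:

    # Unpack (key, (cols1, cols2)) into (key, cols1, cols2) tuples
    flat = join_file.map(lambda kv: (kv[0], tuple(kv[1][0]), tuple(kv[1][1])))
    flat.collect()
    # e.g. [(u'1', (u'9', u'5'), (u'g', u'h')), ...]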