Tags: scala, apache-spark, rdd, flatmap, listbuffer

Scala RDD flatMap to generate multiple rows from one row, filling the gaps between rows


I am trying to solve the following problem. Say a person has borrowed money from someone, and we have all the transactions of returning that money in installments, with no interest. I want to fill in the missing (not-paid) months with the same outstanding amount as the previous month.

Input

name,date_of_borrow/return,Amount-Principal
Ashish,2018-03-01,20000
Ashish,2018-04-01,19000
Ashish,2018-05-01,18000
Ashish,2018-06-01,17000
Ashish,2018-07-01,16000
Ashish,2018-08-01,15000
Ashish,2018-12-01,14000
Ashish,2019-02-01,13000

Expected Output

name,date_of_borrow/return,Amount-Principal
Ashish,2018-03-01,20000
Ashish,2018-04-01,19000
Ashish,2018-05-01,18000
Ashish,2018-06-01,17000
Ashish,2018-07-01,16000
Ashish,2018-08-01,15000
Ashish,2018-09-01,15000   <-- copy previous amount, as installment not paid
Ashish,2018-10-01,15000   <-- copy previous amount
Ashish,2018-11-01,15000   <-- copy previous amount
Ashish,2018-12-01,14000
Ashish,2019-01-01,14000   <-- copy previous amount
Ashish,2019-02-01,13000

I want to write this using Scala RDDs.

val tr = spark.sparkContext.textFile("/tmp/data.txt")
tr.map(x => x.split(',')).map(x => (x(0), (x(1), x(2)))).collect()  // sanity check of the parsed records
val sm = tr.map(x => (x.split(',')(0), x)).groupByKey().flatMap(rec => rec._2.toList.sortBy(x => -x.split(",")(2).toFloat).zipWithIndex)
val part1 = sm.map(x => ((x._1.split(',')(0), x._2.toInt), (x._1.split(',')(1), x._1.split(',')(2))))
val part2 = sm.map(x => ((x._1.split(',')(0), x._2.toInt - 1), (x._1.split(',')(1), x._1.split(',')(2))))
val data = part1.leftOuterJoin(part2).sortByKey()

// Read the data and, for each name, join every row with the next row on the basis of a per-name index

val oo = data.map(x => (x._1._1, x._2._1._1, x._2._1._2, x._2._2.getOrElse((x._2._1._1, 0))))
val rr = oo.map(x => (x._1, x._2, x._3, x._4._1))
// or, in one step:
val oo = data.map(x => (x._1._1, x._2._1._1, x._2._1._2, x._2._2.getOrElse((x._2._1._1, 0))._1))

// Mapping yields the final paired data:

scala> oo.filter(x=>x._1=="Ashish").collect().foreach(println)
(Ashish,2018-03-01,20000,2018-04-01)
(Ashish,2018-04-01,19000,2018-05-01)
(Ashish,2018-05-01,18000,2018-06-01)
(Ashish,2018-06-01,17000,2018-07-01)
(Ashish,2018-07-01,16000,2018-08-01)
(Ashish,2018-08-01,15000,2018-12-01)
(Ashish,2018-12-01,14000,2019-02-01)
(Ashish,2019-02-01,13000,2019-02-01)
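The shifted-index left join above can be sketched without Spark: `part1` keys each sorted record by its index `i`, `part2` re-keys the same record by `i - 1`, so joining on the key pairs row `i` with row `i + 1`. A minimal plain-Scala sketch (the sample rows are illustrative):

```scala
// Pair each record with its successor by joining index i against key i - 1.
val rows = List(
  ("Ashish", "2018-03-01", "20000"),
  ("Ashish", "2018-08-01", "15000"),
  ("Ashish", "2018-12-01", "14000")
)
val part1 = rows.zipWithIndex.map { case (r, i) => (i, r) }.toMap
val part2 = rows.zipWithIndex.map { case (r, i) => (i - 1, r) }.toMap
// Left outer join: the last row has no successor, so it keeps its own date,
// which later makes generateDates emit exactly that one date for it.
val paired = part1.toList.sortBy(_._1).map { case (i, (name, date, amt)) =>
  (name, date, amt, part2.get(i).map(_._2).getOrElse(date))
}
paired.foreach(println)
```

This reproduces the (name, date, amount, next-date) shape of `oo` above.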

Now the rest of the task is to find the date difference and generate the missing rows with flatMap.

import java.util.{Calendar, Date, GregorianCalendar}
import scala.collection.mutable.ListBuffer

val format = new java.text.SimpleDateFormat("yyyy-MM-dd")
format.format(new java.util.Date())  // test date

def generateDates(startdate: Date, enddate: Date): ListBuffer[String] = {
  val dateList = new ListBuffer[String]()
  val calendar = new GregorianCalendar()
  calendar.setTime(startdate)
  while (calendar.getTime().before(enddate)) {
    dateList += calendar.get(Calendar.YEAR) + "-" + (calendar.get(Calendar.MONTH) + 1) + "-" + calendar.get(Calendar.DAY_OF_MONTH)
    calendar.add(Calendar.MONTH, 1)
  }
  // when start == end (the last row joins to itself), emit that single date
  if (dateList.isEmpty) {
    dateList += calendar.get(Calendar.YEAR) + "-" + (calendar.get(Calendar.MONTH) + 1) + "-" + calendar.get(Calendar.DAY_OF_MONTH)
  }
  println("\n" + dateList + "\n")
  dateList
}
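As a side note, the same contract can be written with `java.time`, which avoids the mutable `Calendar` and its zero-based months. This is only a sketch (the name `generateDates2` is mine, to keep it distinct from the function above): month-start dates from the start (inclusive) up to the end (exclusive), falling back to the single start date when the range is empty.

```scala
import java.time.LocalDate

// Month-start dates from startDate (inclusive) up to endDate (exclusive);
// when startDate == endDate (the last row joined to itself), emit that one date.
def generateDates2(startDate: LocalDate, endDate: LocalDate): List[String] = {
  val months = Iterator.iterate(startDate)(_.plusMonths(1))
    .takeWhile(_.isBefore(endDate))
    .map(_.toString)          // ISO format, e.g. 2018-09-01
    .toList
  if (months.isEmpty) List(startDate.toString) else months
}
```

`LocalDate.toString` also gives zero-padded ISO dates, unlike the manual `Calendar` string concatenation above.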

Here is where it goes wrong, and I am having difficulty understanding and resolving it: I get one extra row for each last date, which should not appear.

scala> oo.filter(x => x._1 == "Ashish").flatMap(pp => {
     |   val allDates = new ListBuffer[(String, String, Integer)]()
     |   for (x <- generateDates(format.parse(pp._2), format.parse(pp._4))) {
     |     allDates += ((pp._1, x, pp._3.toInt))
     |   }
     |   allDates
     | }).collect().foreach(println)


(Ashish,Thu Mar 01,1,2,20000)                                                   
(Ashish,Sun Apr 01 00:00:00 IST 2018,20000)   <-- unwanted row, and I don't know why the date format is wrong
(Ashish,Sun Apr 01,1,3,19000)
(Ashish,Tue May 01 00:00:00 IST 2018,19000)   <-- unwanted row
(Ashish,Tue May 01,1,4,18000)
(Ashish,Fri Jun 01 00:00:00 IST 2018,18000)
(Ashish,Fri Jun 01,1,5,17000)
(Ashish,Sun Jul 01 00:00:00 IST 2018,17000)
(Ashish,Sun Jul 01,1,6,16000)
(Ashish,Wed Aug 01 00:00:00 IST 2018,16000)
(Ashish,Wed Aug 01,1,7,15000)
(Ashish,Sat Sep 01,1,8,15000)
(Ashish,Mon Oct 01,1,9,15000)
(Ashish,Thu Nov 01,1,10,15000)
(Ashish,Sat Dec 01 00:00:00 IST 2018,15000)
(Ashish,Sat Dec 01,1,11,14000)
(Ashish,Tue Jan 01,1,0,14000)
(Ashish,Fri Feb 01 00:00:00 IST 2019,14000)
(Ashish,Fri Feb 01 00:00:00 IST 2019,13000)

I completely agree that this may be a bad way to write the code, but can someone please help me understand it? I need to know where I am going wrong, and I would also like to know the best way to do this.
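For what it's worth, the whole gap-filling logic can be sketched on plain collections; a Spark version would run the same per-name logic inside a `flatMap` after `groupByKey`. This is a sketch under my own assumptions (sample data and names are illustrative), not the asker's exact pipeline:

```scala
import java.time.LocalDate

// Per person: sort rows by date, pair each row with the next one
// (the last row pairs with itself), and emit one row per month in between,
// carrying the earlier outstanding amount forward for missed installments.
val input = List(
  ("Ashish", "2018-12-01", 14000),
  ("Ashish", "2018-08-01", 15000),
  ("Ashish", "2019-02-01", 13000)
)
val filled = input.groupBy(_._1).toList.flatMap { case (name, rows) =>
  val sorted = rows.sortBy(r => LocalDate.parse(r._2))
  // Appending the last row once makes sliding(2) produce a final
  // self-pair, so the last installment is emitted exactly once.
  (sorted :+ sorted.last).sliding(2).toList.flatMap {
    case List((_, d1, amt), (_, d2, _)) =>
      val (start, end) = (LocalDate.parse(d1), LocalDate.parse(d2))
      val months = Iterator.iterate(start)(_.plusMonths(1))
        .takeWhile(_.isBefore(end)).map(_.toString).toList
      (if (months.isEmpty) List(start.toString) else months)
        .map(d => (name, d, amt))
  }
}
filled.foreach(println)
```

The half-open interval (start inclusive, next date exclusive) is what prevents each boundary month from being emitted twice.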

I can see that the date function itself works fine:

scala> for(x<-generateDates(format.parse("2018-01-01"),format.parse("2018-11-01")))
     | {
     | println("\n" + x + "\n")
     | }

ListBuffer(2018-1-1, 2018-2-1, 2018-3-1, 2018-4-1, 2018-5-1, 2018-6-1, 2018-7-1, 2018-8-1, 2018-9-1, 2018-10-1)

2018-1-1
2018-2-1
2018-3-1
2018-4-1
2018-5-1
2018-6-1
2018-7-1
2018-8-1
2018-9-1
2018-10-1

Solution

  • I am still working on finding the reason why the code above was not working, but I have coded it in a different way, which gives the correct result.

    import org.apache.spark.{ SparkConf, SparkContext }
    import org.apache.spark.sql.functions.broadcast
    import org.apache.spark.sql.types._
    import org.apache.spark.sql._
    import org.apache.spark.sql.functions._
    import scala.collection.mutable.ListBuffer
    import java.util.{GregorianCalendar, Date}
    import java.util.Calendar
    val ipt = spark.read.format("com.databricks.spark.csv").option("header","true").option("inferSchema","true").load("/tmp/data.csv")
    val sm  = ipt.rdd.map(x=>(x(0).toString(),(x.toString().replace("[","").replace("]","")))).groupByKey().flatMap(rec=>{rec._2.toList.sortBy(x=>(-x.split(",")(2).toFloat)).zipWithIndex})
    val part1 = sm.map(x=>((x._1.split(',')(0),x._2.toInt),(x._1.split(',')(1),x._1.split(',')(2))))
    val part2 = sm.map(x=>((x._1.split(',')(0),x._2.toInt-1),(x._1.split(',')(1),x._1.split(',')(2))))
    val data = part1.leftOuterJoin(part2).sortByKey()
    val oo = data.map(x=>(x._1._1,x._2._1._1,x._2._1._2,x._2._2.getOrElse((x._2._1._1,0))))
    // or, in one step (this is the shape the flatMap below expects):
    val oo = data.map(x=>(x._1._1,x._2._1._1,x._2._1._2,x._2._2.getOrElse((x._2._1._1,0))._1))
    val format = new java.text.SimpleDateFormat("yyyy-MM-dd")
    format.format(new java.util.Date())  // test date
    def generateDates(startdate: Date, enddate: Date): ListBuffer[String] = {
      val dateList = new ListBuffer[String]()
      val calendar = new GregorianCalendar()
      calendar.setTime(startdate)
      while (calendar.getTime().before(enddate)) {
        dateList += calendar.get(Calendar.YEAR) + "-" + (calendar.get(Calendar.MONTH) + 1) + "-" + calendar.get(Calendar.DAY_OF_MONTH)
        calendar.add(Calendar.MONTH, 1)
      }
      if (dateList.isEmpty) {
        dateList += calendar.get(Calendar.YEAR) + "-" + (calendar.get(Calendar.MONTH) + 1) + "-" + calendar.get(Calendar.DAY_OF_MONTH)
      }
      println("\n" + dateList + "\n")
      dateList
    }
    oo.flatMap(pp => {
      val allDates = new ListBuffer[(String, String, Integer)]()
      for (x <- generateDates(format.parse(pp._2), format.parse(pp._4))) {
        allDates += ((pp._1, x, pp._3.toInt))
      }
      allDates
    }).collect().foreach(println)
    
    (Ashish,2018-3-1,20000)
    (Ashish,2018-4-1,19000)
    (Ashish,2018-5-1,18000)
    (Ashish,2018-6-1,17000)
    (Ashish,2018-7-1,16000)
    (Ashish,2018-8-1,15000)
    (Ashish,2018-9-1,15000)
    (Ashish,2018-10-1,15000)
    (Ashish,2018-11-1,15000)
    (Ashish,2018-12-1,14000)
    (Ashish,2019-1-1,14000)
    (Ashish,2019-2-1,13000)