I'm trying to generate a JSON string to store a variable number of history records in a single STRING column. The code works on all of my small tests, but fails (no error, just no data) when run on the actual data. Here's what I have:
class HistoryDetail(
  var date: String,
  var val1: Int,
  var val2: Int,
  var changeCode: String
)

class HistoryHeader(
  var numDetailRecords: Int,
  var calcDate: String,
  var historyRecords: List[HistoryDetail]
)
import java.time.LocalDate
import org.json4s.jackson.Serialization // assuming the jackson backend; org.json4s.native.Serialization works the same way

def getJSON = (val1: Int, val2: Int) => {
  implicit val formats = org.json4s.DefaultFormats
  val today = LocalDate.now
  val hdl = List(new HistoryDetail(today.toString, val1, val2, "D"))
  val hh: HistoryHeader = new HistoryHeader(1, today.toString, hdl)
  Serialization.write(hh)
}
A very simple test calling the Scala function works fine:
val strJson = getJSON(1000,1000)
strJson: String = {"numDetailRecords":1,"calcDate":"2018-04-23","historyRecords":[{"date":"2018-04-23","val1":1000,"val2":1000,"changeCode":"D"}]}
Registering the UDF and applying it to a small DataFrame also works fine:
spark.udf.register("getJSONUDF", getJSON)
val smallDF = Seq((100, 100), (101, 101), (102, 102))
  .toDF("int_col1", "int_col2")
  .withColumn("json_col", callUDF("getJSONUDF", $"int_col1", $"int_col2"))
smallDF.show(false)
+--------+--------+------------------------------------------------------------------------------------------------------------------------------+
|int_col1|int_col2|json_col |
+--------+--------+------------------------------------------------------------------------------------------------------------------------------+
|100 |100 |{"numDetailRecords":1,"calcDate":"2018-04-23","historyRecords":[{"date":"2018-04-23","val1":100,"val2":100,"changeCode":"D"}]}|
|101 |101 |{"numDetailRecords":1,"calcDate":"2018-04-23","historyRecords":[{"date":"2018-04-23","val1":101,"val2":101,"changeCode":"D"}]}|
|102 |102 |{"numDetailRecords":1,"calcDate":"2018-04-23","historyRecords":[{"date":"2018-04-23","val1":102,"val2":102,"changeCode":"D"}]}|
+--------+--------+------------------------------------------------------------------------------------------------------------------------------+
Running it against the actual data fails (again no error, just no data):
val bigDF = spark.read.table("table_name")
  .select($"int_col1", $"int_col2")
  .withColumn("json_col", callUDF("getJSONUDF", $"int_col1", $"int_col2"))
bigDF.show(false)
+--------+--------+--------+
|int_col1|int_col2|json_col|
+--------+--------+--------+
|18995 |12702 |{} |
|14989 |46998 |{} |
|25588 |25051 |{} |
|18750 |52282 |{} |
|19963 |25745 |{} |
|17500 |21587 |{} |
|21999 |20379 |{} |
|25975 |5988 |{} |
|26382 |5988 |{} |
|7049 |101907 |{} |
|45997 |47472 |{} |
|45997 |47472 |{} |
|13950 |158957 |{} |
|18999 |123689 |{} |
|33842 |69623 |{} |
|64000 |13362 |{} |
|64000 |13362 |{} |
|64000 |13362 |{} |
|64000 |13362 |{} |
|64000 |13362 |{} |
+--------+--------+--------+
only showing top 20 rows
(Versions: java 1.8.0_60, spark 2.2.0, scala 2.11.8)
Any thoughts on why I am getting an empty JSON object when using the larger DataFrame?
TBH I don't know exactly what goes wrong, because I would have expected a Task not serializable error at some point. My best guess is that getJSON is not thread-safe and keeps centralized state somewhere for the object being encoded. Again, this may be completely wrong.
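One way to probe that guess (purely a diagnostic sketch on my part, assuming concurrency is the culprit) is to force every row through a single task and see whether the output changes:

// Diagnostic sketch: coalesce(1) forces all rows through one task.
// If json_col then comes back populated, the thread-safety guess gains weight.
val probeDF = spark.read.table("table_name")
  .select($"int_col1", $"int_col2")
  .coalesce(1)
  .withColumn("json_col", callUDF("getJSONUDF", $"int_col1", $"int_col2"))

probeDF.show(false)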
I think your approach could be improved, though, by using the to_json function from Spark.
import org.apache.spark.sql.functions.{udf, to_json}

def getHistoryReader = udf((val1: Int, val2: Int) => {
  val today = LocalDate.now
  val hdl = List(new HistoryDetail(today.toString, val1, val2, "D"))
  new HistoryHeader(1, today.toString, hdl)
})
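One caveat, based on how Spark derives schemas for UDF return types (my assumption, not something from your post): for udf to work out a schema for HistoryHeader, the model types need to be case classes (Products) rather than plain classes:

// Spark can only derive a schema for the UDF's return type when it is a
// type it understands (primitives, Products, ...), so declare the model
// as case classes.
case class HistoryDetail(date: String, val1: Int, val2: Int, changeCode: String)

case class HistoryHeader(numDetailRecords: Int, calcDate: String, historyRecords: List[HistoryDetail])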
val bigDF = spark.read.table("table_name")
  .select($"int_col1", $"int_col2")
  .withColumn("json_col", to_json(getHistoryReader($"int_col1", $"int_col2")))
bigDF.show(false)
This looks cleaner in my opinion and leaves the JSON serialization to Spark. It should work.
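As a quick follow-up check (a hypothetical snippet, not from your post), you can count how many rows still end up empty across the whole DataFrame instead of eyeballing the top 20 rows from show:

// Count rows whose JSON column is still an empty object or null.
val emptyCount = bigDF.filter($"json_col".isNull || $"json_col" === "{}").count()
println(s"rows with empty json_col: $emptyCount")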