apache-spark, elasticsearch-hadoop

Upgrading to Spark 2.0 dataframe.map


I'm updating some Spark 1.6 code to 2.0.1 and I'm running into some issues using map.

I see other SO questions like encoder-error-while-trying-to-map-dataframe-row-to-updated-row, but I have not been able to get those techniques to work, and they seem ridiculous for the scenario below.

val df = spark.sqlContext.read.parquet(inputFile)
df: org.apache.spark.sql.DataFrame = [device_id: string, hour: string ... 9 more fields]

val deviceAggDF = df.select("device_id").distinct
deviceAggDF: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [device_id: string]

deviceAggDF.map( x =>
  (
    Map("ID" -> x.getAs[String](0)),
    Map()
  )
)
scala.MatchError: Nothing (of class scala.reflect.internal.Types$ClassNoArgsTypeRef)
  at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:667)
  at org.apache.spark.sql.catalyst.ScalaReflection$.toCatalystArray$1(ScalaReflection.scala:448)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:482)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:592)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:583)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.immutable.List.foreach(List.scala:381)
  at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
  at scala.collection.immutable.List.flatMap(List.scala:344)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:583)
  at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:425)
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:61)
  at org.apache.spark.sql.Encoders$.product(Encoders.scala:274)
  at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:47)
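
The trace shows the failure inside encoder derivation (Encoders.product → ScalaReflection.serializerFor): the bare Map() literal is inferred by the Scala compiler as Map[Nothing, Nothing], and Catalyst has no serializer for Nothing. A minimal sketch of the inference in a plain Scala REPL (no Spark required):

    val untyped = Map()                 // inferred as Map[Nothing, Nothing] -- not encodable
    val typed   = Map[String, String]() // explicit type parameters -- the element type an Encoder needs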

Solution

  • To return an empty Map you should specify a type that can be encoded, for example:

    deviceAggDF.map( x =>
      (
        Map("ID" -> x.getAs[String](0)),
        Map[String, String]()
      )
    )
    

    Map() is Map[Nothing,Nothing] and cannot be used in a Dataset, because Spark cannot derive an Encoder for Nothing.
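
    If you want named columns instead of a tuple, the same fix works with a case class; a sketch, assuming import spark.implicits._ is in scope (DeviceMaps is a name invented here for illustration):

    case class DeviceMaps(id: Map[String, String], extra: Map[String, String])

    deviceAggDF.map( x =>
      DeviceMaps(
        Map("ID" -> x.getAs[String](0)),
        Map.empty[String, String]  // explicit type parameters, so the Encoder can be derived
      )
    )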