Tags: apache-spark, avro, spark-avro, spark-shell, apache-hudi

Apache Hudi example from spark-shell throws error for Spark 2.3.0


I am trying to run this example (https://hudi.apache.org/docs/quick-start-guide.html) using spark-shell. The Apache Hudi documentation says "Hudi works with Spark-2.x versions". The environment details are:

Platform: HDP 2.6.5.0-292
Spark version: 2.3.0.2.6.5.279-2
Scala version: 2.11.8

I am using the below spark-shell command. (N.B. The spark-avro version doesn't exactly match the Spark version, since I could not find an org.apache.spark:spark-avro artifact for Spark 2.3.0; that module is only published for Spark 2.4+.)

spark-shell \
--packages org.apache.hudi:hudi-spark-bundle_2.11:0.6.0,org.apache.spark:spark-avro_2.11:2.4.4,org.apache.avro:avro:1.8.2 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

When I try to write the data I get the below error:

scala> df.write.format("hudi").
     |   options(getQuickstartWriteConfigs).
     |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |   option(TABLE_NAME, tableName).
     |   mode(Overwrite).
     |   save(basePath)
20/12/27 06:21:15 WARN HoodieSparkSqlWriter$: hoodie table at file:/u/users/j0s0j7j/tmp/hudi_trips_cow already exists. Deleting existing data & overwriting with new data.
java.lang.NoSuchMethodError: org.apache.avro.Schema.createUnion([Lorg/apache/avro/Schema;)Lorg/apache/avro/Schema;
  at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:185)
  at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:176)
  at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:174)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
  at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:174)
  at org.apache.hudi.AvroConversionUtils$.convertStructTypeToAvroSchema(AvroConversionUtils.scala:77)
  at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:132)
  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:125)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
  ... 68 elided
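
For reference, df, tableName, and basePath in the snippet above come from the quick-start setup, roughly as below (a sketch following the guide; my actual basePath differs, as the log shows):

// Setup from the Hudi quick-start guide (paths here are the guide's defaults)
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._

val tableName = "hudi_trips_cow"
val basePath = "file:///tmp/hudi_trips_cow"
val dataGen = new DataGenerator

// Generate sample trip records and load them into a DataFrame
val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))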

To me it looks like the correct Avro version is not being added to the classpath, or is not being picked up. Can anyone please suggest a workaround? I have been stuck on this for quite some time now.


Solution

  • This issue was caused by the Avro jar being picked up from spark2/jars/avro-1.7.7.jar, which is too old: it does not have the Schema.createUnion(Schema...) varargs overload that the stack trace shows Hudi calling, hence the NoSuchMethodError.

    I had to use the --jars, spark.driver.extraClassPath, and spark.executor.extraClassPath options to point at the .ivy2/jars location and override the default Avro jar.

    Spark Shell Command:

    spark-shell \
     --packages org.apache.hudi:hudi-spark-bundle_2.11:0.6.0,org.apache.spark:spark-avro_2.11:2.4.4,org.apache.avro:avro:1.8.2 \
     --jars $HOME/.ivy2/jars/org.apache.avro_avro-1.8.2.jar \
     --conf spark.driver.extraClassPath=$HOME/.ivy2/jars/org.apache.avro_avro-1.8.2.jar \
     --conf spark.executor.extraClassPath=$HOME/.ivy2/jars/org.apache.avro_avro-1.8.2.jar \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
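
    Both pieces are needed: --packages/--jars only put the jar on the user classpath, while Spark's own spark2/jars/avro-1.7.7.jar sits on the JVM classpath and wins by default. The extraClassPath settings prepend entries to the driver and executor JVM classpaths, so avro-1.8.2 gets loaded first.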
    

    I used the below code snippet to print the classpath of the spark-shell:

    import java.lang.ClassLoader

    // On Java 8 the system class loader is a URLClassLoader, so we can list its jar URLs
    val cl = ClassLoader.getSystemClassLoader
    cl.asInstanceOf[java.net.URLClassLoader].getURLs.foreach(println)
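
    Note: the URLClassLoader cast works on Java 8 (which Spark 2.x runs on), but on Java 9+ the system class loader is no longer a URLClassLoader and the cast fails. A version-independent sketch:

    // Print the JVM classpath entries directly; works on any Java version
    System.getProperty("java.class.path")
      .split(java.io.File.pathSeparator)
      .foreach(println)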
    

    Check whether the Avro class is indeed picked up from the jar given via the extraClassPath option:

    sc.getClass().getResource("/org/apache/avro/generic/GenericData.class")
    res3: java.net.URL = jar:file:/users/joyan/.ivy2/jars/org.apache.avro_avro-1.8.2.jar!/org/apache/avro/generic/GenericData.class
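
    Another way to confirm which jar a loaded class actually came from (a small sketch; any Avro class would do):

    // Ask the JVM for the code source of the Avro Schema class that got loaded
    println(classOf[org.apache.avro.Schema].getProtectionDomain.getCodeSource.getLocation)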