scala · apache-spark · hive · pyspark · spark-hive

Spark hive udf: no handler for UDAF analysis exception


Created a project 'spark-udf' and wrote a Hive UDF as below:

package com.spark.udf
import org.apache.hadoop.hive.ql.exec.UDF

class UpperCase extends UDF with Serializable {
  def evaluate(input: String): String = {
    input.toUpperCase
  }
}

Built it and created a jar for it. Then tried to use this UDF in another Spark program:

spark.sql("CREATE OR REPLACE FUNCTION uppercase AS 'com.spark.udf.UpperCase' USING JAR '/home/swapnil/spark-udf/target/spark-udf-1.0.jar'")

But the following line throws an exception:

spark.sql("select uppercase(Car) as NAME from cars").show

Exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: No handler for UDAF 'com.spark.udf.UpperCase'. Use sparkSession.udf.register(...) instead.; line 1 pos 7
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeFunctionExpression(SessionCatalog.scala:1105)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog$$anonfun$org$apache$spark$sql$catalyst$catalog$SessionCatalog$$makeFunctionBuilder$1.apply(SessionCatalog.scala:1085)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog$$anonfun$org$apache$spark$sql$catalyst$catalog$SessionCatalog$$makeFunctionBuilder$1.apply(SessionCatalog.scala:1085)
    at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.lookupFunction(FunctionRegistry.scala:115)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.lookupFunction(SessionCatalog.scala:1247)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16$$anonfun$applyOrElse$6$$anonfun$applyOrElse$52.apply(Analyzer.scala:1226)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16$$anonfun$applyOrElse$6$$anonfun$applyOrElse$52.apply(Analyzer.scala:1226)
    at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:48)

Any help around this is really appreciated.


Solution

  • As mentioned in the comments, it's better to write a native Spark UDF:

    val uppercaseUDF = spark.udf.register("uppercase", (s : String) => s.toUpperCase)
    spark.sql("select uppercase(Car) as NAME from cars").show
    

    The root cause is that you didn't call enableHiveSupport when creating the SparkSession. Without it, the default SessionCatalog is used, and its makeFunctionExpression method scans only for user-defined aggregate functions. Since your function is not a UDAF, it isn't found, which produces the "No handler for UDAF" error above.
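
    Given that explanation, a minimal sketch of creating the session with Hive support enabled (the app name and master are illustrative, and this assumes the spark-hive module is on the classpath):

    ```scala
    import org.apache.spark.sql.SparkSession

    // Enabling Hive support switches Spark to the Hive-aware session catalog,
    // which can resolve permanent functions backed by Hive UDF classes.
    val spark = SparkSession.builder()
      .appName("hive-udf-demo")   // illustrative app name
      .master("local[*]")         // illustrative; use your cluster master
      .enableHiveSupport()
      .getOrCreate()

    // With Hive support on, the CREATE FUNCTION statement from the question
    // can resolve the Hive UDF class packaged in the jar:
    spark.sql(
      "CREATE OR REPLACE FUNCTION uppercase AS 'com.spark.udf.UpperCase' " +
      "USING JAR '/home/swapnil/spark-udf/target/spark-udf-1.0.jar'")
    spark.sql("select uppercase(Car) as NAME from cars").show()
    ```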

    A Jira task was created to implement support for this.