apache-spark, hive, apache-spark-sql, spark-thriftserver

How to connect to a remote Hive server from Spark


I'm running Spark locally and want to access Hive tables located in a remote Hadoop cluster.

I'm able to access the Hive tables by launching beeline under SPARK_HOME:

[ml@master spark-2.0.0]$./bin/beeline 
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://remote_hive:10000
Connecting to jdbc:hive2://remote_hive:10000
Enter username for jdbc:hive2://remote_hive:10000: root
Enter password for jdbc:hive2://remote_hive:10000: ******
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/ml/spark/spark-2.0.0/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/12 19:06:39 INFO jdbc.Utils: Supplied authorities: remote_hive:10000
16/10/12 19:06:39 INFO jdbc.Utils: Resolved authority: remote_hive:10000
16/10/12 19:06:39 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://remote_hive:10000
Connected to: Apache Hive (version 1.2.1000.2.4.2.0-258)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://remote_hive:10000>

How can I access the remote Hive tables programmatically from Spark?


Solution

  • JDBC is not required

    Spark connects directly to the Hive metastore, not through HiveServer2. To configure this,

    1. Put hive-site.xml on your classpath and set hive.metastore.uris to the address where your Hive metastore is hosted (a minimal sketch follows this list). Also see How to connect to a Hive metastore programmatically in SparkSQL?

    2. Import org.apache.spark.sql.hive.HiveContext, as it can run SQL queries over Hive tables.

    3. Define val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

    4. Run sqlContext.sql("show tables") to verify that it works

    SparkSQL on Hive tables
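
    To make the steps above concrete, here is a minimal sketch of the metastore route. It assumes hive-site.xml on the classpath points hive.metastore.uris at the remote metastore; the thrift://remote_hive:9083 address, the app name, and some_db.some_table below are placeholders, and the setConf call is an alternative if you prefer to set the metastore URI in code:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val conf = new SparkConf().setAppName("RemoteHiveTables").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // HiveContext picks up hive-site.xml from the classpath; setConf is an
    // alternative way to point at the metastore from code.
    // thrift://remote_hive:9083 is a placeholder -- use your metastore host/port.
    val sqlContext = new HiveContext(sc)
    sqlContext.setConf("hive.metastore.uris", "thrift://remote_hive:9083")

    // Verify the connection by listing the remote Hive tables and querying one
    // (some_db.some_table is a placeholder table name).
    sqlContext.sql("show tables").show()
    sqlContext.sql("SELECT * FROM some_db.some_table LIMIT 10").show()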

    Conclusion: if you must go the JDBC way

    Have a look at connecting apache spark with apache hive remotely.

    Please note that beeline also connects through JDBC; that is evident from your own log:

    [ml@master spark-2.0.0]$./bin/beeline
    Beeline version 1.2.1.spark2 by Apache Hive
    beeline> !connect jdbc:hive2://remote_hive:10000
    Connecting to jdbc:hive2://remote_hive:10000

    So please have a look at this interesting article.

    Currently the HiveServer2 driver doesn't allow us to use the "Sparkling" Methods 1 and 2; we can rely only on Method 3.

    Below is an example code snippet through which it can be achieved.

    It loads data from one Hadoop cluster (aka "remote") into another one (where my Spark lives, aka "domestic") through a HiveServer2 JDBC connection.

    import java.sql.{Connection, DriverManager, ResultSet, Timestamp}
    import scala.collection.mutable.MutableList
    
    // Schema of the rows we fetch from the remote table
    case class StatsRec (
      first_name: String,
      last_name: String,
      action_dtm: Timestamp,
      size: Long,
      size_p: Long,
      size_d: Long
    )
    
    // url, user and password identify the remote HiveServer2,
    // e.g. url = "jdbc:hive2://remote_hive:10000"
    val conn: Connection = DriverManager.getConnection(url, user, password)
    val res: ResultSet = conn.createStatement
                       .executeQuery("SELECT * FROM stats_201512301914")
    
    // Pull all rows into local memory as case-class instances
    val fetchedRes = MutableList[StatsRec]()
    while (res.next()) {
      val rec = StatsRec(res.getString("first_name"),
         res.getString("last_name"),
         Timestamp.valueOf(res.getString("action_dtm")),
         res.getLong("size"),
         res.getLong("size_p"),
         res.getLong("size_d"))
      fetchedRes += rec
    }
    conn.close()
    
    // Distribute the fetched rows as an RDD on the "domestic" cluster
    val rddStatsDelta = sc.parallelize(fetchedRes)
    rddStatsDelta.cache()
    
    // Basically we are done. To check loaded data:
    println(rddStatsDelta.count)
    rddStatsDelta.collect.take(10).foreach(println)
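
    As a follow-up (not part of the original snippet), the fetched RDD can be turned into a DataFrame and registered as a temporary table so it can be queried with SparkSQL alongside your local data; the table name stats_delta below is just an illustration:

    // Convert the RDD of case classes into a DataFrame (requires the implicits
    // from the SQLContext/HiveContext created earlier).
    import sqlContext.implicits._

    val dfStatsDelta = rddStatsDelta.toDF()

    // Register it under an arbitrary name ("stats_delta") and query it with SparkSQL.
    dfStatsDelta.registerTempTable("stats_delta")
    sqlContext.sql("SELECT first_name, last_name, size FROM stats_delta LIMIT 10").show()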