apache-spark, pyspark, apache-spark-sql, apache-drill, mapr

Establishing a connection to Drill using PySpark


I am trying to fetch data from MapR-DB into a DataFrame in the PySpark shell, using Drill as the JDBC connector.

Here is what I do in my PySpark shell:

dataframe_mysql = sqlContext.read.format("jdbc") \
    .option("url", "jdbc:drill:zk=localhost:5181/drill/demo_mapr_com-drillbits;schema=dfs;") \
    .option("driver", "org.apache.drill.jdbc.Driver") \
    .option("dbtable", "select * from dfs.`/DDDE/jsondb/ruleengine/testtransactions`") \
    .option("user", "root") \
    .option("password", "mapr") \
    .load()

Unfortunately, I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/mapr/spark/spark-1.6.3-bin-hadoop2.6/python/pyspark/sql/readwriter.py", line 139, in load
    return self._df(self._jreader.load())
  File "/opt/mapr/spark/spark-1.6.3-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/opt/mapr/spark/spark-1.6.3-bin-hadoop2.6/python/pyspark/sql/utils.py", line 45, in deco
    return f(*a, **kw)
  File "/opt/mapr/spark/spark-1.6.3-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o88.load.
: java.lang.ClassNotFoundException: org.apache.drill.jdbc.Driver
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:38)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:45)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:45)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:45)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
        at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:209)
        at java.lang.Thread.run(Thread.java:745)

Any idea where I am going wrong?

Edit: In sqlline I am able to retrieve the data with the following:

!connect jdbc:drill:zk=localhost:31010/drill/demo_mapr_com-drillbits;schema=dfs;
!connect jdbc:drill:zk=localhost:5181/drill/demo_mapr_com-drillbits;schema=dfs;
select * from dfs.`/DDDE/jsondb/ruleengine/testtransactions`;

Solution

  • The Drill JDBC driver JAR file must be present on the client machine so that you can configure the driver for the application or third-party tool you intend to use. You can get the driver as follows:

    Copy the drill-jdbc-all JAR file from the Drill installation directory to your working directory, then launch your script as follows (a minimal example script is sketched after these steps):

    ./bin/spark-submit --jars drill-jdbc-all-<version>.jar your_spark_script.py
    

    If you are using the interactive pyspark shell, launch it like this instead:

    pyspark --jars drill-jdbc-all-<version>.jar
    

    If you can't find the JAR, look in your Drill installation directory:

    $> tree jars/jdbc-driver/
    jars/jdbc-driver/
     └── drill-jdbc-all-1.10.0.jar # this is it
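
    Once the JAR is on the classpath, the driver class resolves and the read from the question works. For reference, here is a minimal sketch of what your_spark_script.py could look like, assuming Spark 1.6's SQLContext API and the connection details from the question. One caveat (an assumption about Spark's JDBC source, not something from the original post): Spark interpolates "dbtable" into its own SELECT, so a raw query generally needs to be wrapped in parentheses and aliased:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    # Minimal sketch, assuming Spark 1.6 and the connection details from
    # the question. Launch with:
    #   spark-submit --jars drill-jdbc-all-<version>.jar your_spark_script.py
    sc = SparkContext(appName="drill-jdbc-example")
    sqlContext = SQLContext(sc)

    # Spark's JDBC reader treats "dbtable" as a table name and builds
    # "SELECT ... FROM <dbtable>", so a query is passed as a derived
    # table: "(select ...) t".
    df = sqlContext.read.format("jdbc") \
        .option("url", "jdbc:drill:zk=localhost:5181/drill/demo_mapr_com-drillbits;schema=dfs;") \
        .option("driver", "org.apache.drill.jdbc.Driver") \
        .option("dbtable",
                "(select * from dfs.`/DDDE/jsondb/ruleengine/testtransactions`) t") \
        .option("user", "root") \
        .option("password", "mapr") \
        .load()

    df.printSchema()
    df.show()

    sc.stop()

    The same read works line for line in the interactive pyspark shell started with --jars, since sc and sqlContext are already created there.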