java hadoop jdbc jar apache-phoenix

Unable to connect to Phoenix using JDBC


I have a Hadoop cluster set up with HBase and Phoenix, and I'm trying to connect to Phoenix using JDBC, but I'm unable to get a successful connection.

Ultimately I want to connect via JDBC from Python 3.x, but for simple test purposes I set up a connection using Java in Eclipse.

I was originally using a third-party library (phoenixdb) for Python, but I started getting timeouts as my database grew. I then changed some timeout-related settings in my hbase-site.xml, but for some reason that didn't solve the problem with this library.
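For reference, timeout settings of this kind usually live in hbase-site.xml as properties like the following (a sketch only; these are the standard HBase/Phoenix property names, and the values mirror the 600,000 ms figure mentioned below rather than being copied from my cluster):

```xml
<!-- Sketch of timeout-related entries in hbase-site.xml.
     Property names are the standard HBase/Phoenix ones;
     values shown are illustrative (600,000 ms = 10 minutes). -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>600000</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
```

Note that the Phoenix client reads its own copy of this configuration, so a value changed only on the server side may not affect the client.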

So I tried to move on to JDBC and a Java project instead - at least for the testing.

I have the following Java code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class PhoenixTest {

    static final String JDBC_DRIVER = "org.apache.phoenix.jdbc.PhoenixDriver";
    static final String DB_URL1 = "jdbc:phoenix:https://xx.xx.xx.xx:8765/";
    static final String DB_URL2 = "jdbc:phoenix:xx.xx.xx.xx:8765/";

    public static void main(String[] args) {
        Connection conn = null;
        Statement st = null;

        try {
            Class.forName(JDBC_DRIVER);

            System.out.println("Connecting to database..");

            // Switch between DB_URL1 and DB_URL2 to test both URL forms.
            conn = DriverManager.getConnection(DB_URL2);
            st = conn.createStatement();

        } catch (SQLException se) {
            se.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Close resources here so they are released on both the
            // success and failure paths.
            try {
                if (st != null)
                    st.close();
            } catch (SQLException se2) {
            }
            try {
                if (conn != null)
                    conn.close();
            } catch (SQLException se) {
                se.printStackTrace();
            }
        }
        System.out.println("Finished!");
    }
}

With this code and the client JAR, the settings I changed do seem to be recognized: queries take 10 minutes (600,000 ms, which is what I raised the timeout to from the default 60,000 ms) before the connection is closed. With the phoenixdb library, by contrast, the connection is closed after the default timeout (60,000 ms).

When I use DB_URL2 I get the following error:

java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Apr 06 11:13:35 CEST 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=80236: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hadoopslave,16020,1490706604328, seqNum=0

That doesn't match my timeout of 600,000 ms. When using DB_URL1, however, it seems to use the client JAR and the timeout I set inside the JAR's hbase-default.xml, which I changed to 300,000 ms just to see which value gets picked up. Using DB_URL1 I get the following error:

java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations

Which I assume means that it can't find the IP.

However, I never get a successful connection either way, so I'm hoping someone has a suggestion as to what is going wrong.


Solution

  • Thanks to @PaulBastide I managed to get a connection using the following URL:

    jdbc:phoenix:thin:url=http://<query_server>:<port>;serialization=PROTOBUF
    

    I also used the thin client that ships with the Phoenix distribution.
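    For anyone assembling the thin-client URL programmatically, a minimal sketch of the format that worked (the host and port here are placeholders, not a real query server):

    ```java
    public class ThinUrlBuilder {

        // Builds a Phoenix thin-client JDBC URL with PROTOBUF
        // serialization, matching the URL format above.
        static String thinUrl(String host, int port) {
            return "jdbc:phoenix:thin:url=http://" + host + ":" + port
                    + ";serialization=PROTOBUF";
        }

        public static void main(String[] args) {
            System.out.println(thinUrl("queryserver.example.com", 8765));
            // → jdbc:phoenix:thin:url=http://queryserver.example.com:8765;serialization=PROTOBUF
        }
    }
    ```

    The thin driver talks to the Phoenix Query Server over HTTP, so unlike the thick driver it does not need direct access to ZooKeeper or the HBase region servers.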