fiware, fiware-cosmos

Error when selecting a specific field in a Hive query


I have an Orion Context Broker connected to Cosmos via Cygnus.

It works fine: I send new elements to the Context Broker, Cygnus forwards them to Cosmos, and they are saved in files.

The problem appears when I try to run some searches.

I start Hive and see that some tables have been created for the files Cosmos created, so I launch some queries.

The simple one works fine:

select * from Table_name;

Hive doesn't launch any MapReduce jobs.

But when I want to filter, join, count, or select only some fields, this is what happens:

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = JOB_NAME, Tracking URL = JOB_DETAILS_URL
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job  -kill JOB_NAME
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-07-08 14:35:12,723 Stage-1 map = 0%,  reduce = 0%
2015-07-08 14:35:38,943 Stage-1 map = 100%,  reduce = 100%
Ended Job = JOB_NAME with errors
Error during job, obtaining debugging information...
Examining task ID: TASK_NAME (and more) from job JOB_NAME

Task with the most failures(4): 
-----
Task ID:
  task_201409031055_6337_m_000000

URL: TASK_DETAIL_URL
-----

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL

I have found that the files created by Cygnus differ from the other files: in the Cygnus case, they have to be deserialized with a JAR.

So my doubt is whether in those cases I have to write a custom MapReduce job, or whether there is already a general method to do this.


Solution

  • Before executing any Hive statement, do this:

    hive> add jar /usr/local/hive-0.9.0-shark-0.8.0-bin/lib/json-serde-1.1.9.3-SNAPSHOT.jar;
    

    If you are using Hive through JDBC, execute it as any other statement:

    // Obtain a connection to the Hive server as usual
    Connection con = ...
    Statement stmt = con.createStatement();
    // Register the JSON SerDe JAR for this session
    stmt.executeQuery("add jar /usr/local/hive-0.9.0-shark-0.8.0-bin/lib/json-serde-1.1.9.3-SNAPSHOT.jar");
    stmt.close();
    // Subsequent queries in the same session can now deserialize the Cygnus files
    stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("select ...");
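
    As an illustration, a full interactive Hive session might look like the following sketch (the table and column names here are hypothetical examples, not from this deployment; adapt the JAR path to your installation):

    hive> add jar /usr/local/hive-0.9.0-shark-0.8.0-bin/lib/json-serde-1.1.9.3-SNAPSHOT.jar;
    hive> -- queries that spawn MapReduce jobs can now deserialize the JSON files
    hive> select recvTime, attrValue from example_cygnus_table where attrName = 'temperature';

    The reason `select * from Table_name;` worked without the JAR is that Hive answers it by reading the files directly, without launching a MapReduce job; any query that filters, joins, counts, or projects specific fields does launch one, and the map tasks then fail if they cannot find the SerDe class needed to parse each JSON row. Note that `add jar` only lasts for the current session, so it has to be re-issued (or configured via Hive's auxiliary JARs mechanism) each time you reconnect.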