I have created a table as follows:
create table emp (
  eid int,
  fname string,
  lname string,
  salary double,
  city string,
  dept string )
row format delimited fields terminated by ',';
Then, to enable dynamic partitioning, I set the following properties:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
I created the partitioned table as follows:
create table part_emp (
  eid int,
  fname string,
  lname string,
  salary double,
  dept string )
partitioned by ( city string )
row format delimited fields terminated by ',';
After creating the table, I issued this insert query:
insert into table part_emp partition(city)
select eid,fname,lname,salary,dept,city from emp;
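Note that in a dynamic-partition insert the partition column (city) must be the last column in the SELECT list, as above. To rule out a problem with the table definition or data itself, a static-partition insert can be tried first (the value 'pune' here is just an illustrative example):

```sql
-- Static-partition insert: the partition value is given explicitly,
-- so dynamic partitioning is not involved at all.
insert into table part_emp partition(city='pune')
select eid, fname, lname, salary, dept
from emp
where city = 'pune';
```

If the static insert succeeds but the dynamic one fails, the problem is specific to dynamic partitioning rather than the table layout.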
But it does not work:
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = max_20180311015337_5a67813d-dcc5-46c0-ac4b-a54c11ffb912
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1520757649534_0004, Tracking URL = http://ubuntu:8088/proxy/application_1520757649534_0004/
Kill Command = /home/max/bigdata/hadoop-3.0.0/bin/hadoop job -kill job_1520757649534_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2018-03-11 01:53:44,996 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1520757649534_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
The same query works successfully on Hive 1.x.
I had the same problem, and setting hive.exec.max.dynamic.partitions.pernode=1000;
(the default is 100) solved it. You may try it.
PS: This setting is the maximum number of dynamic partitions allowed to be created in each mapper/reducer node.
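Putting it together, the full set of properties to try before the insert would look like this (1000 is just the value that worked for me; pick a limit above the number of distinct city values in your data):

```sql
-- Enable dynamic partitioning and relax the strict-mode requirement
-- that at least one partition be static.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- Raise the per-node limit on dynamic partitions (default 100).
set hive.exec.max.dynamic.partitions.pernode=1000;

-- The related global limit (default 1000) may also need raising
-- if the data has many distinct partition values.
set hive.exec.max.dynamic.partitions=1000;

insert into table part_emp partition(city)
select eid, fname, lname, salary, dept, city from emp;
```

These SET statements only apply to the current session, so they must be re-issued (or put in hive-site.xml) for each new session.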