Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version: 1.12.1
Description
We have two Hive tables; the DDL is as below:

-- test_tbl5
create table test.test_tbl5 (dpi int, uid bigint) partitioned by (day string, hour string) stored as parquet;

-- test_tbl3
create table test.test_tbl3 (dpi int, uid bigint, itime timestamp) stored as parquet;
Then we add a partition to test_tbl5:
alter table test_tbl5 add partition(day='2021-02-27',hour='12');
We start a Flink streaming job to read Hive table test_tbl5 and write the data into test_tbl3. The job's SQL is:

set test_tbl5.streaming-source.enable = true;

insert into hive.test.test_tbl3
select dpi, uid, cast(to_timestamp('2020-08-09 00:00:00') as timestamp(9))
from hive.test.test_tbl5
where `day` = '2021-02-27';
Then we see the following exception thrown:
2021-02-28 22:33:16,553 ERROR org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext - Exception while handling result from async call in SourceCoordinator-Source: HiveSource-test.test_tbl5. Triggering job failover.
org.apache.flink.connectors.hive.FlinkHiveException: Failed to enumerate files
    at org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator.handleNewSplits(ContinuousHiveSplitEnumerator.java:152) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
    at org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$null$4(ExecutorNotifier.java:136) ~[flink-dist_2.12-1.12.1.jar:1.12.1]
    at org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:40) [flink-dist_2.12-1.12.1.jar:1.12.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_60]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_60]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
    at org.apache.flink.connectors.hive.util.HivePartitionUtils.toHiveTablePartition(HivePartitionUtils.java:184) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
    at org.apache.flink.connectors.hive.HiveTableSource$HiveContinuousPartitionFetcherContext.toHiveTablePartition(HiveTableSource.java:417) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
    at org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:237) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
    at org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:177) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
    at org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$notifyReadyAsync$5(ExecutorNotifier.java:133) ~[flink-dist_2.12-1.12.1.jar:1.12.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_60]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_60]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_60]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_60]
    ... 3 more
It seems the partitioned field is not found in the source table's field list.
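For illustration, here is a minimal, hypothetical Java sketch (not Flink's actual implementation; the class, field names, and values are made up) of the failure mode described above: List.indexOf returns -1 when a partition column is missing from the field list, and using that -1 directly as an array index produces exactly the ArrayIndexOutOfBoundsException: -1 seen in the stack trace.

import java.util.Arrays;
import java.util.List;

public class PartitionLookupSketch {
    public static void main(String[] args) {
        // A field list that does NOT contain the partition columns
        // (hypothetical; mirrors a schema like (dpi, uid, itime)).
        List<String> fieldNames = Arrays.asList("dpi", "uid", "itime");
        Object[] fieldValues = new Object[] {1, 2L, null};

        // Partition key of the source table: not present in fieldNames.
        String partitionKey = "day";
        int idx = fieldNames.indexOf(partitionKey); // returns -1: name not found

        // Using the unchecked -1 as an index throws
        // java.lang.ArrayIndexOutOfBoundsException: -1
        Object value = fieldValues[idx];
        System.out.println(value);
    }
}

Assuming this is what happens, a fix would resolve partition keys against the table's full schema (which includes the partition columns) rather than a field list that omits them, which would be consistent with the reporter's observation.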