Description
Inserting into a Hive bucketed Parquet table whose bucket column is written in uppercase fails with a HiveException: the bucket column keeps its DDL spelling ("V1"), while Hive stores the table's field schemas lower-cased ("v1", "s1"). Reproduction:

CREATE TABLE TEST1 (
  V1 BIGINT,
  S1 INT)
PARTITIONED BY (PK BIGINT)
CLUSTERED BY (V1)
SORTED BY (S1)
INTO 200 BUCKETS
STORED AS PARQUET;

INSERT INTO test1
SELECT * FROM VALUES (1, 1, 1);
This fails with:

org.apache.hadoop.hive.ql.metadata.HiveException: Bucket columns V1 is not part of the table columns ([FieldSchema(name:v1, type:bigint, comment:null), FieldSchema(name:s1, type:int, comment:null)])

which Spark surfaces as:

org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Bucket columns V1 is not part of the table columns ([FieldSchema(name:v1, type:bigint, comment:null), FieldSchema(name:s1, type:int, comment:null)])
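The exception implies a membership check: each bucket column must appear among the table's FieldSchema names, which Hive has lower-cased. A minimal sketch of that mismatch, and of a case-insensitive comparison that would accept the DDL above (function names are illustrative, not Hive's actual code):

```python
# FieldSchema names as Hive stores them (lower-cased) vs. the bucket
# column exactly as written in CLUSTERED BY (V1).
table_columns = ["v1", "s1"]
bucket_columns = ["V1"]

def check_case_sensitive(bucket_cols, table_cols):
    # Naive check matching the observed failure: "V1" != "v1".
    return all(c in table_cols for c in bucket_cols)

def check_case_insensitive(bucket_cols, table_cols):
    # Lower-case both sides before comparing, mirroring how Hive
    # normalizes FieldSchema names.
    lowered = {c.lower() for c in table_cols}
    return all(c.lower() in lowered for c in bucket_cols)

print(check_case_sensitive(bucket_columns, table_columns))    # False -> HiveException
print(check_case_insensitive(bucket_columns, table_columns))  # True
```

Consistent with this, writing the bucket column in lowercase (CLUSTERED BY (v1)) should avoid the mismatch, since it then matches the lower-cased FieldSchema name.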