Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version: Impala 2.0
Description
Decreasing the default Parquet file size exposed a pre-existing bug: very wide tables can hit a DCHECK because the Parquet writer assumes there are at least DATA_PAGE_SIZE bytes available per column in the target file.
F0925 05:07:48.850270 20556 hdfs-parquet-table-writer.cc:818] Check failed: file_size_limit_ > DATA_PAGE_SIZE * columns_.size() (6291456 vs. 131072000)
*** Check failure stack trace: ***
@ 0x23fdded google::LogMessage::Fail()
@ 0x2401877 google::LogMessage::SendToLog()
@ 0x2400dd6 google::LogMessage::Flush()
@ 0x2401d0d google::LogMessageFatal::~LogMessageFatal()
@ 0x195d116 impala::HdfsParquetTableWriter::InitNewFile()
@ 0x18d6767 impala::HdfsTableSink::CreateNewTmpFile()
@ 0x18d778a impala::HdfsTableSink::InitOutputPartition()
@ 0x18daa3e impala::HdfsTableSink::GetOutputPartition()
@ 0x18d7bdd impala::HdfsTableSink::Send()
@ 0x181998a impala::PlanFragmentExecutor::OpenInternal()
@ 0x1818a77 impala::PlanFragmentExecutor::Open()
@ 0x17a69da impala::Coordinator::Wait()
@ 0x11db6d4 impala::ImpalaServer::QueryExecState::WaitInternal()
@ 0x11db0ce impala::ImpalaServer::QueryExecState::Wait()
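For illustration, the following is a minimal, self-contained C++ sketch (not the actual Impala code) of the invariant behind the DCHECK in HdfsParquetTableWriter::InitNewFile(): the target file size must leave room for at least one data page per column. The constant value and the helper EffectiveFileSizeLimit() are assumptions for this sketch; the numbers in main() are taken from the log line above (a 6 MB limit vs. 2000 columns at a 64 KB page size).

#include <cstdint>
#include <iostream>

// Assumption: a 64 KB data page size, so 2000 columns require
// 2000 * 65536 = 131072000 bytes, matching the failed check above.
constexpr int64_t DATA_PAGE_SIZE = 64 * 1024;

// Hypothetical helper: return a file size limit that fits at least one
// data page per column, rather than aborting the way the DCHECK does.
int64_t EffectiveFileSizeLimit(int64_t requested_limit, int64_t num_columns) {
  const int64_t min_required = DATA_PAGE_SIZE * num_columns;
  return requested_limit > min_required ? requested_limit : min_required;
}

int main() {
  const int64_t file_size_limit = 6291456;  // 6 MB, as in the log
  const int64_t num_columns = 2000;         // 2000 * 64 KB = 131072000
  std::cout << "requested limit: " << file_size_limit << " bytes\n"
            << "minimum needed:  " << DATA_PAGE_SIZE * num_columns << " bytes\n"
            << "effective limit: "
            << EffectiveFileSizeLimit(file_size_limit, num_columns)
            << " bytes\n";
  return 0;
}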