Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.6.1
- Fix Version/s: None
- Component/s: None
Description
Creating a DirectKafkaInputDStream with spark.streaming.kafka.maxRatePerPartition set for a compacted topic causes an exception:
ERROR [Executor task launch worker-2] executor.Executor: Exception in task 1.0 in stage 2.0 (TID 22)
java.lang.AssertionError: assertion failed: Got 3740923 > ending offset 2428156 for topic COMPACTED.KAFKA.TOPIC partition 6 start 2228156. This should not happen, and indicates a message may have been skipped
at scala.Predef$.assert(Predef.scala:179)
at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator.getNext(KafkaRDD.scala:217)
KafkaRDD expects every offset in the batch to satisfy maxOffset <= startOffset + maxRatePerPartition * secondsInBatch, but on a compacted topic some offsets are missing, so a fetched record's offset can jump past that bound and trip the assertion.
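The failure mode can be sketched as follows. This is not the actual KafkaRDD source; it is a minimal reconstruction of the rate-limit clamp and the offset-range assertion, assuming a hypothetical latest offset of 4000000, a maxRatePerPartition of 20000, and a 10-second batch so that the clamped ending offset matches the 2428156 reported in the stack trace:

```scala
// Hedged sketch of the direct-stream rate limiting and the KafkaRDD
// offset assertion; names and the latestOffset/rate/batch values are
// illustrative assumptions, not taken from the Spark source.
object RateLimitSketch {
  // The direct stream clamps each partition's ending offset to
  // startOffset + maxRatePerPartition * secondsInBatch.
  def clampedUntilOffset(startOffset: Long, latestOffset: Long,
                         maxRatePerPartition: Long, secondsInBatch: Long): Long =
    math.min(latestOffset, startOffset + maxRatePerPartition * secondsInBatch)

  // The RDD iterator then asserts each fetched record's offset stays
  // below the clamped bound. On a compacted topic, offsets between
  // surviving records are missing, so a fetched offset can exceed it.
  def offsetWithinRange(fetchedOffset: Long, untilOffset: Long): Boolean =
    fetchedOffset < untilOffset

  def main(args: Array[String]): Unit = {
    // start 2228156 and fetched offset 3740923 come from the report.
    val until = clampedUntilOffset(2228156L, 4000000L, 20000L, 10L)
    println(until)                              // 2428156, as in the log
    println(offsetWithinRange(3740923L, until)) // false: assertion fails
  }
}
```

With consecutive offsets the clamp simply limits batch size; with compaction the record at or after the clamped position can carry a much larger offset, which is exactly the "Got 3740923 > ending offset 2428156" condition above.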
Issue Links
- duplicates: SPARK-17147 Spark Streaming Kafka 0.10 Consumer Can't Handle Non-consecutive Offsets (i.e. Log Compaction) (Resolved)