Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 1.12.0, 1.13.0
- Fix Version/s: None
Description
2020-12-10T23:10:46.7788275Z Test testKafkaSourceSinkWithMetadata[legacy = false, format = csv](org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase) is running.
2020-12-10T23:10:46.7789360Z --------------------------------------------------------------------------------
2020-12-10T23:10:46.7790602Z 23:10:46,776 [ main] INFO org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl [] - Creating topic metadata_topic_csv
2020-12-10T23:10:47.1145296Z 23:10:47,112 [ main] WARN org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Property [transaction.timeout.ms] not specified. Setting it to 3600000 ms
2020-12-10T23:10:47.1683896Z 23:10:47,166 [Sink: Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, physical_2, physical_3, headers, timestamp]) (1/1)#0] WARN org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Using AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE semantic.
2020-12-10T23:10:47.2087733Z 23:10:47,206 [Sink: Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, physical_2, physical_3, headers, timestamp]) (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting FlinkKafkaInternalProducer (1/1) to produce into default topic metadata_topic_csv
2020-12-10T23:10:47.5157133Z 23:10:47,513 [Source: TableSourceScan(table=[[default_catalog, default_database, kafka]], fields=[physical_1, physical_2, physical_3, topic, partition, headers, leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> Sink: Select table sink (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - Consumer subtask 0 has no restore state.
2020-12-10T23:10:47.5233388Z 23:10:47,521 [Source: TableSourceScan(table=[[default_catalog, default_database, kafka]], fields=[physical_1, physical_2, physical_3, topic, partition, headers, leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> Sink: Select table sink (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - Consumer subtask 0 will start reading the following 1 partitions from the earliest offsets: [KafkaTopicPartition{topic='metadata_topic_csv', partition=0}]
2020-12-10T23:10:47.5387239Z 23:10:47,537 [Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, kafka]], fields=[physical_1, physical_2, physical_3, topic, partition, headers, leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> Sink: Select table sink (1/1)#0] INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - Consumer subtask 0 creating fetcher with offsets {KafkaTopicPartition{topic='metadata_topic_csv', partition=0}=-915623761775}.
2020-12-11T02:34:02.6860452Z ##[error]The operation was canceled.
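As an aside, the two WARN lines above are not the hang itself; they only report that the producer fell back to defaults. Both can be avoided by configuring the producer explicitly. A minimal sketch of that configuration, assuming a plain DataStream-style FlinkKafkaProducer comparable to what the table connector builds internally (the broker address, schema, and job below are hypothetical placeholders, not taken from this ticket):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class ExplicitProducerConfig {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // With checkpointing enabled, the AT_LEAST_ONCE default is honored and
        // the producer no longer logs "Switching to NONE semantic".
        env.enableCheckpointing(1000L);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker
        // Setting transaction.timeout.ms explicitly avoids the first WARN;
        // 3600000 ms matches the value Flink falls back to in the log above.
        props.setProperty("transaction.timeout.ms", "3600000");

        FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
                "metadata_topic_csv",       // topic name taken from the log
                new SimpleStringSchema(),   // placeholder serialization schema
                props);

        env.fromElements("a", "b", "c").addSink(producer);
        env.execute("explicit-producer-config");
    }
}
```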
This test started at 2020-12-10T23:10:46.7788275Z and had still not finished when the CI run was canceled at 2020-12-11T02:34:02.6860452Z, more than three hours later.
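One common guard against such a hang eating the entire CI budget is a per-test timeout, so the single test fails after a few minutes instead of the whole build being canceled hours later. A minimal sketch assuming JUnit 4 (which the parameterized test name above suggests); nothing in this ticket confirms this is how the issue was actually resolved:

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TimeoutGuardExample {

    // Fails any test in this class that runs longer than 5 minutes,
    // surfacing a hang as a test failure rather than a canceled build.
    @Rule
    public final Timeout globalTimeout = Timeout.seconds(300);

    @Test
    public void potentiallyHangingTest() throws Exception {
        // Placeholder for a test body that might block indefinitely,
        // like the fetcher-creation step in the log above.
        Thread.sleep(1_000L);
    }
}
```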