Flink / FLINK-24228

[FLIP-171] Firehose implementation of Async Sink



    Description

      Motivation

      User stories:
      As a Flink user, I’d like to use Kinesis Firehose as a sink for my data pipeline.

      Scope:

      • Implement an asynchronous sink for Kinesis Firehose by inheriting the AsyncSinkBase class. For now, the implementation can reside in its own module in flink-connectors. The module and package names can be anything reasonable, e.g. flink-connector-aws-kinesis for the module and org.apache.flink.connector.aws.kinesis for the package.
      • The implementation must use the Kinesis Java Client.
      • The implementation must allow users to configure the Kinesis Client, with reasonable default settings.
      • Implement an asynchronous sink writer for Firehose by extending the AsyncSinkWriter. The implementation must deal with failed requests and retry them using the requeueFailedRequestEntry method. If possible, the implementation should batch multiple requests (PutRecordsRequestEntry objects) into a single call to Firehose for increased throughput. The sink writer will be used by the Sink class created as part of this story.
      • Unit/Integration testing. Use Kinesalite (in-memory Kinesis simulation). We already use this in KinesisTableApiITCase.
      • Java / code-level docs.
      • End-to-end testing: add tests that hit a real AWS instance. (How best to donate resources to the Flink project to allow this to happen?)
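The batch-and-retry behaviour described in the scope can be sketched without any Flink or AWS dependencies. The class below is a hypothetical, simplified stand-in for the sink-writer contract, not the actual AsyncSinkWriter API: it buffers entries, drains them in bounded batches, and re-queues entries that the (simulated) transport reports as failed, mirroring the requeueFailedRequestEntry semantics. All names here (BatchingWriterSketch, the transport function) are illustrative only.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Hypothetical sketch of the batching + retry behaviour described in the
// scope above. This is NOT the real Flink AsyncSinkWriter API.
public class BatchingWriterSketch {
    private final Deque<String> buffer = new ArrayDeque<>();
    private final int maxBatchSize;
    // The transport returns the sublist of entries that failed and must be retried.
    private final Function<List<String>, List<String>> transport;

    public BatchingWriterSketch(int maxBatchSize, Function<List<String>, List<String>> transport) {
        this.maxBatchSize = maxBatchSize;
        this.transport = transport;
    }

    public void write(String entry) {
        buffer.addLast(entry);
    }

    // Drain the buffer in batches; failed entries are re-queued at the front
    // in their original order, mirroring requeueFailedRequestEntry semantics.
    public void flush() {
        while (!buffer.isEmpty()) {
            List<String> batch = new ArrayList<>();
            while (batch.size() < maxBatchSize && !buffer.isEmpty()) {
                batch.add(buffer.pollFirst());
            }
            List<String> failed = transport.apply(batch);
            for (int i = failed.size() - 1; i >= 0; i--) {
                buffer.addFirst(failed.get(i));
            }
        }
    }

    // Demo: every entry fails on its first attempt and succeeds on the second,
    // so the writer must requeue and retry to deliver everything.
    public static List<String> demo() {
        Set<String> seenOnce = new HashSet<>();
        List<String> delivered = new ArrayList<>();
        BatchingWriterSketch writer = new BatchingWriterSketch(2, batch -> {
            List<String> failed = new ArrayList<>();
            for (String e : batch) {
                if (seenOnce.add(e)) {
                    failed.add(e);    // simulated transient failure on first attempt
                } else {
                    delivered.add(e); // second attempt succeeds
                }
            }
            return failed;
        });
        writer.write("a");
        writer.write("b");
        writer.write("c");
        writer.flush();
        return delivered;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [a, b, c]
    }
}
```

A real implementation would of course bound the number of retries and treat non-retryable errors differently; the sketch only illustrates the buffering and requeue flow the ticket asks for.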

      References

      More details can be found at https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink

            People

              CrynetLogistics Zichen Liu
