The Amazon Kinesis Client Library for Java (Amazon KCL) enables Java developers to easily consume and process data from Amazon Kinesis.
ℹ️ Amazon Kinesis Client versions 1.x are not impacted.
Please open an issue if you have any questions.
- Provides an easy-to-use programming model for processing data using Amazon Kinesis
- Helps with scale-out and fault-tolerant processing
- Sign up for AWS — Before you begin, you need an AWS account. For more information about creating an AWS account and retrieving your AWS credentials, see AWS Account and Credentials in the AWS SDK for Java Developer Guide.
- Sign up for Amazon Kinesis — Go to the Amazon Kinesis console to sign up for the service and create an Amazon Kinesis stream. For more information, see Create an Amazon Kinesis Stream in the Amazon Kinesis Developer Guide.
- Minimum requirements — To use the Amazon Kinesis Client Library, you'll need Java 1.8+. For more information about Amazon Kinesis Client Library requirements, see Before You Begin in the Amazon Kinesis Developer Guide.
- Using the Amazon Kinesis Client Library — The best way to get familiar with the Amazon Kinesis Client Library is to read Developing Record Consumer Applications in the Amazon Kinesis Developer Guide.
After you've downloaded the code from GitHub, you can build it using Maven. To disable GPG signing in the build, use this command: `mvn clean install -Dgpg.skip=true`.

Note: This command runs the integration tests, which create AWS resources that require manual cleanup. The integration tests also require valid AWS credentials that can be discovered at runtime. To skip the integration tests, add the `-DskipITs` option to the build command.
For producer-side developers using the Kinesis Producer Library (KPL), the KCL integrates without additional effort. When the KCL retrieves an aggregated Amazon Kinesis record consisting of multiple KPL user records, it will automatically invoke the KPL to extract the individual user records before returning them to the user.
To make it easier for developers to write record processors in other languages, we have implemented a Java-based daemon, called MultiLangDaemon, that does all the heavy lifting. Our approach has the daemon spawn a sub-process, which in turn runs the record processor, which can be written in any language. The MultiLangDaemon process and the record processor sub-process communicate with each other over STDIN and STDOUT using a defined protocol. There is a one-to-one correspondence among record processors, child processes, and shards. For Python developers specifically, we have abstracted these implementation details away and expose an interface that enables you to focus on writing record-processing logic in Python. This approach enables the KCL to be language-agnostic, while providing identical features and a similar parallel-processing model across all languages.
The recommended way to use the KCL for Java is to consume it from Maven. For version 2.x of the KCL, add the following dependency:

    <dependency>
        <groupId>software.amazon.kinesis</groupId>
        <artifactId>amazon-kinesis-client</artifactId>
        <version>2.3.1</version>
    </dependency>

For the previous major version, KCL 1.x, use:

    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>amazon-kinesis-client</artifactId>
        <version>1.11.2</version>
    </dependency>
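
To show how these pieces fit together at runtime, here is a minimal, illustrative sketch of a single-stream KCL 2.x consumer wired up with `ConfigsBuilder` and `Scheduler`. The stream name, application name, region, and the `SampleConsumer`/`SampleRecordProcessor` class names are placeholders, and error handling is reduced to the essentials; treat it as a starting point rather than a production-ready worker.

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.kinesis.common.ConfigsBuilder;
import software.amazon.kinesis.coordinator.Scheduler;
import software.amazon.kinesis.lifecycle.events.InitializationInput;
import software.amazon.kinesis.lifecycle.events.LeaseLostInput;
import software.amazon.kinesis.lifecycle.events.ProcessRecordsInput;
import software.amazon.kinesis.lifecycle.events.ShardEndedInput;
import software.amazon.kinesis.lifecycle.events.ShutdownRequestedInput;
import software.amazon.kinesis.processor.ShardRecordProcessor;

public class SampleConsumer {

    public static void main(String[] args) {
        Region region = Region.US_WEST_2;               // placeholder region
        String streamName = "my-stream";                // placeholder stream name
        String applicationName = "my-kcl-application";  // also used as the DynamoDB lease table name

        KinesisAsyncClient kinesis = KinesisAsyncClient.builder().region(region).build();
        DynamoDbAsyncClient dynamo = DynamoDbAsyncClient.builder().region(region).build();
        CloudWatchAsyncClient cloudWatch = CloudWatchAsyncClient.builder().region(region).build();

        // ConfigsBuilder produces defaults for every configuration group used by the Scheduler.
        ConfigsBuilder configsBuilder = new ConfigsBuilder(
                streamName, applicationName, kinesis, dynamo, cloudWatch,
                UUID.randomUUID().toString(),           // worker identifier
                SampleRecordProcessor::new);            // record processor factory

        Scheduler scheduler = new Scheduler(
                configsBuilder.checkpointConfig(),
                configsBuilder.coordinatorConfig(),
                configsBuilder.leaseManagementConfig(),
                configsBuilder.lifecycleConfig(),
                configsBuilder.metricsConfig(),
                configsBuilder.processorConfig(),
                configsBuilder.retrievalConfig());

        // The scheduler keeps running (taking leases, fetching records) until shutdown is requested.
        new Thread(scheduler).start();
    }

    // One instance of the record processor is created per lease (shard).
    static class SampleRecordProcessor implements ShardRecordProcessor {
        @Override
        public void initialize(InitializationInput initializationInput) { }

        @Override
        public void processRecords(ProcessRecordsInput processRecordsInput) {
            processRecordsInput.records().forEach(record ->
                    System.out.println("partition key: " + record.partitionKey()
                            + ", data: " + StandardCharsets.UTF_8.decode(record.data())));
        }

        @Override
        public void leaseLost(LeaseLostInput leaseLostInput) { }

        @Override
        public void shardEnded(ShardEndedInput shardEndedInput) {
            try {
                // Checkpoint at SHARD_END so processing can continue on the child shards.
                shardEndedInput.checkpointer().checkpoint();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        @Override
        public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) { }
    }
}
```

Each shard (lease) gets its own `SampleRecordProcessor` instance, and checkpointing at `SHARD_END` allows the worker to move on to that shard's children.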
- Introducing support for processing multiple Kinesis data streams with the same KCL 2.x for Java consumer application.
  - To build a consumer application that can process multiple streams at the same time, you must implement a new interface called `MultiStreamTracker` (https://github.com/awslabs/amazon-kinesis-client/blob/0c5042dadf794fe988438436252a5a8fe70b6b0b/amazon-kinesis-client/src/main/java/software/amazon/kinesis/processor/MultiStreamTracker.java); a sketch follows this item.
  - `MultiStreamTracker` also publishes various metrics around the currently active streams being processed and the number of streams that are deleted during the current period or are pending deletion.
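
Because the multi-stream contract is easier to see in code than in prose, here is a rough, illustrative sketch of a `MultiStreamTracker` implementation. The account ID, stream names, creation epochs, and the choice of `NoLeaseDeletionStrategy` are placeholder assumptions; consult the linked interface for the authoritative contract.

```java
import java.util.Arrays;
import java.util.List;

import software.amazon.kinesis.common.InitialPositionInStream;
import software.amazon.kinesis.common.InitialPositionInStreamExtended;
import software.amazon.kinesis.common.StreamConfig;
import software.amazon.kinesis.common.StreamIdentifier;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy.NoLeaseDeletionStrategy;
import software.amazon.kinesis.processor.MultiStreamTracker;

// Illustrative tracker that tells a single KCL worker to process two streams.
public class SampleMultiStreamTracker implements MultiStreamTracker {

    @Override
    public List<StreamConfig> streamConfigList() {
        // Multi-stream identifiers use the "accountId:streamName:creationEpoch" form.
        return Arrays.asList(
                new StreamConfig(
                        StreamIdentifier.multiStreamInstance("123456789012:stream-one:1600000000"),
                        InitialPositionInStreamExtended.newInitialPosition(InitialPositionInStream.TRIM_HORIZON)),
                new StreamConfig(
                        StreamIdentifier.multiStreamInstance("123456789012:stream-two:1600000000"),
                        InitialPositionInStreamExtended.newInitialPosition(InitialPositionInStream.LATEST)));
    }

    @Override
    public FormerStreamsLeasesDeletionStrategy formerStreamsLeasesDeletionStrategy() {
        // Placeholder choice: keep leases of streams that disappear from streamConfigList();
        // other strategies delete such leases after a configurable wait period.
        return new NoLeaseDeletionStrategy();
    }
}
```

With the 2.3.x `ConfigsBuilder`, a tracker like this can be passed in place of the single stream name used in the earlier sketch, which is what populates `RetrievalConfig#appStreamTracker` (see the multistreaming item further down).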
- Behavior of shard synchronization is moving from each worker independently learning about all existing shards to workers only discovering the children of the shards that each worker owns. This optimizes memory usage, lease table IOPS usage, and the number of calls made to Kinesis for streams with high shard counts and/or frequent resharding.
- When bootstrapping an empty lease table, KCL utilizes the `ListShards` API's filtering option (the `ShardFilter` optional request parameter) to retrieve and create leases only for a snapshot of the shards open at the time specified by the `ShardFilter` parameter. The `ShardFilter` parameter enables you to filter the response of the `ListShards` API using the `Type` parameter. KCL uses the `Type` filter parameter and the following of its valid values to identify and return a snapshot of open shards that might require new leases.
  - Currently, the following shard filters are supported:
    - `AT_TRIM_HORIZON` - the response includes all the shards that were open at `TRIM_HORIZON`.
    - `AT_LATEST` - the response includes only the currently open shards of the data stream.
    - `AT_TIMESTAMP` - the response includes all shards whose start timestamp is less than or equal to the given timestamp and whose end timestamp is greater than or equal to the given timestamp or which are still open.
  - `ShardFilter` is used when creating leases for an empty lease table to initialize leases for a snapshot of shards specified at `RetrievalConfig#initialPositionInStreamExtended` (a small sketch follows this item).
  - For more information about `ShardFilter`, see the official AWS documentation on ShardFilter.
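
As a small, hedged illustration of the initial position referenced above, the snippet below sets `RetrievalConfig#initialPositionInStreamExtended` on the retrieval configuration produced by `ConfigsBuilder` in the earlier sketch. The helper class and method names are ours, and the fluent setter is assumed to follow the usual KCL configuration style.

```java
import software.amazon.kinesis.common.InitialPositionInStream;
import software.amazon.kinesis.common.InitialPositionInStreamExtended;
import software.amazon.kinesis.retrieval.RetrievalConfig;

public class InitialPositionExample {
    // Sketch: with an empty lease table, KCL creates leases only for the snapshot of
    // shards matching this initial position (the AT_TRIM_HORIZON filter in this case).
    static RetrievalConfig withTrimHorizon(RetrievalConfig retrievalConfig) {
        return retrievalConfig.initialPositionInStreamExtended(
                InitialPositionInStreamExtended.newInitialPosition(InitialPositionInStream.TRIM_HORIZON));
    }
}
```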
- Introducing support for the `ChildShards` response of the `GetRecords` and `SubscribeToShard` APIs to perform the lease/shard synchronization that happens at `SHARD_END` for closed shards, allowing a KCL worker to create leases only for the child shards of the shard it finished processing.
  - For shared throughput consumer applications, this uses the `ChildShards` response of the `GetRecords` API. For dedicated throughput (enhanced fan-out) consumer applications, this uses the `ChildShards` response of the `SubscribeToShard` API.
  - For more information, see the official AWS documentation on GetRecords, SubscribeToShard, and ChildShard.
- KCL now also performs additional periodic shard/lease scans to identify any potential holes in the lease table, ensure that the complete hash range of the stream is being processed, and create leases for them if required. `PeriodicShardSyncManager` is the new component responsible for running these periodic lease/shard scans. (A configuration sketch follows this item.)
  - New configuration options are available to configure `PeriodicShardSyncManager` in `LeaseManagementConfig`:

    | Name | Default | Description |
    | ---- | ------- | ----------- |
    | `leasesRecoveryAuditorExecutionFrequencyMillis` | 120000 (2 minutes) | Frequency (in millis) of the auditor job that scans for partial leases in the lease table. If the auditor detects any hole in the leases for a stream, it triggers a shard sync based on `leasesRecoveryAuditorInconsistencyConfidenceThreshold`. |
    | `leasesRecoveryAuditorInconsistencyConfidenceThreshold` | 3 | Confidence threshold for the periodic auditor job to determine whether the leases for a stream in the lease table are inconsistent. If the auditor finds the same set of inconsistencies for a stream this many times in a row, it triggers a shard sync. |

  - New CloudWatch metrics are also now emitted to monitor the health of `PeriodicShardSyncManager`:

    | Name | Description |
    | ---- | ----------- |
    | `NumStreamsWithPartialLeases` | Number of streams that had holes in their hash ranges. |
    | `NumStreamsToSync` | Number of streams which underwent a full shard sync. |
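
The following sketch shows how these options could be set on `LeaseManagementConfig`. The fluent setter names are assumed to mirror the option names in the table above (KCL configuration classes expose fluent accessors), so verify them against the javadoc for your KCL version; `configsBuilder` is the `ConfigsBuilder` from the earlier example.

```java
import software.amazon.kinesis.common.ConfigsBuilder;
import software.amazon.kinesis.leases.LeaseManagementConfig;

public class PeriodicShardSyncTuning {
    // Sketch only: setter names assumed to match the configuration options above.
    static LeaseManagementConfig tune(ConfigsBuilder configsBuilder) {
        return configsBuilder.leaseManagementConfig()
                .leasesRecoveryAuditorExecutionFrequencyMillis(120_000L)   // audit every 2 minutes (default)
                .leasesRecoveryAuditorInconsistencyConfidenceThreshold(3); // 3 consecutive detections (default)
    }
}
```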
- Introducing deferred lease cleanup. Leases will be deleted asynchronously by `LeaseCleanupManager` upon reaching `SHARD_END`, when a shard has either expired past the stream's retention period or been closed as the result of a resharding operation.
  - New configuration options are available to configure `LeaseCleanupManager`:

    | Name | Default | Description |
    | ---- | ------- | ----------- |
    | `leaseCleanupIntervalMillis` | 1 minute | Interval at which to run the lease cleanup thread. |
    | `completedLeaseCleanupIntervalMillis` | 5 minutes | Interval at which to check whether a lease is completed or not. |
    | `garbageLeaseCleanupIntervalMillis` | 30 minutes | Interval at which to check whether a lease is garbage (i.e., trimmed past the stream's retention period) or not. |
- Introducing experimental support for multistreaming, allowing a single KCL application to multiplex the processing of multiple streams.
  - New configuration options are available to enable multistreaming in `RetrievalConfig#appStreamTracker`.
- Fixing a bug in `PrefetchRecordsPublisher` restarting while it was already running.
- Including an optimization to `HierarchicalShardSyncer` to only create leases for one layer of shards.
- Adding support to prepare and commit lease checkpoints with arbitrary bytes.
  - This allows checkpointing of an arbitrary byte buffer up to the maximum permitted DynamoDB item size (currently 400 KB as of this release), and it can be used for recovery by passing a serialized byte buffer to `RecordProcessorCheckpointer#prepareCheckpoint` and `RecordProcessorCheckpointer#checkpoint`.
- Upgrading the version of the AWS SDK to 2.14.0.
- #725 Allowing KCL to consider lease tables in `UPDATING` status healthy.