The producer uses max.block.ms (https://kafka.apache.org/25/documentation.html#max.block.ms) to block each send after receiving a batch.
The default of 60s might be too high for use-cases where a considerable number of records end up being dropped.
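For reference, here is a minimal sketch of the producer-level behaviour using the plain Java client (this is not the plugin's code; the broker address is a placeholder and the topic name is taken from the log excerpt below). The point it illustrates: send() blocks for up to max.block.ms waiting for topic metadata before failing with the TimeoutException seen in the logs.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MaxBlockMsDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // placeholder broker address
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // send() blocks for up to max.block.ms while waiting for topic metadata (or buffer space);
        // the client default is 60000 ms, lowered here to 15000 ms as in the local test.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 15_000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                // With no metadata available for the topic, this blocks ~15s and then fails with
                // "Topic topic333b not present in metadata after 15000 ms."
                producer.send(new ProducerRecord<>("topic333b", "value")).get();
            } catch (Exception e) {
                System.err.println("producer send failed: " + e);
            }
        }
    }
}
```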
Tested locally with retries => 2:
- send a batch of 3 records, all failing (default max.block.ms) - took 9 minutes to drop
- send a batch of 3 records, all failing (max.block.ms = 15_000) - took 2 minutes 15 seconds
- send a batch of 3 records, 1 failing (max.block.ms = 15_000) - took 50 seconds, log excerpt below:
[2021-04-13T11:45:01,296][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka version: 2.5.1
[2021-04-13T11:45:01,296][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka commitId: 0efa8fb0f4c73d92
[2021-04-13T11:45:01,296][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka startTimeMs: 1618307101288
[2021-04-13T11:45:01,488][INFO ][org.apache.kafka.clients.Metadata] [Producer clientId=kafkaoutputspec] Cluster ID: uJ9dBAanT9CuYwdvOI2uTA
[2021-04-13T11:45:16,510][INFO ][logstash.outputs.kafka ] producer send failed, will retry sending {:exception=>Java::OrgApacheKafkaCommonErrors::TimeoutException, :message=>"Topic topic333b not present in metadata after 15000 ms."}
[2021-04-13T11:45:16,511][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>3, :failures=>1, :sleep=>0.1}
[2021-04-13T11:45:31,616][INFO ][logstash.outputs.kafka ] producer send failed, will retry sending {:exception=>Java::OrgApacheKafkaCommonErrors::TimeoutException, :message=>"Topic topic333b not present in metadata after 15000 ms."}
[2021-04-13T11:45:31,617][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.1}
[2021-04-13T11:45:46,721][INFO ][logstash.outputs.kafka ] producer send failed, will retry sending {:exception=>Java::OrgApacheKafkaCommonErrors::TimeoutException, :message=>"Topic topic333b not present in metadata after 15000 ms."}
[2021-04-13T11:45:46,721][INFO ][logstash.outputs.kafka ] Exhausted user-configured retry count when sending to Kafka. Dropping these events. {:max_retries=>2, :drop_count=>1}
[2021-04-13T11:45:51,722][INFO ][org.apache.kafka.clients.producer.KafkaProducer] [Producer clientId=kafkaoutputspec] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
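If the timings above are read as send() blocking once per failing record on each batch attempt (an assumption on my part, not verified in the client code), the numbers line up: with retries => 2 there are up to 3 attempts per batch, so 3 failing records × 3 attempts × 60 s ≈ 9 minutes with the default, 3 × 3 × 15 s = 135 s = 2 minutes 15 seconds with max.block.ms = 15_000, and 1 failing record × 3 attempts × 15 s ≈ 45-50 s as in the log excerpt.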