The property spring.cloud.stream.binder.configuration.num.stream.threads does not work and is silently ignored.
The property spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads works correctly and is already covered by tests at MultipleFunctionsInSameAppTests#125.
Resolves #987
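As a small illustration (plain Java only; the class and method names are made up, this is not the binder's actual plumbing), entries under the working prefix are passed straight through to the Kafka Streams configuration, ending up as ordinary settings such as num.stream.threads:

```java
import java.util.Properties;

public class BinderConfigSketch {
    // Illustrative: spring.cloud.stream.kafka.streams.binder.configuration.*
    // entries are handed to Kafka Streams as-is, so this is the effective result.
    public static Properties binderConfiguration() {
        Properties props = new Properties();
        props.setProperty("num.stream.threads", "2");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(binderConfiguration().getProperty("num.stream.threads")); // prints 2
    }
}
```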
* Allow retries in Kafka Streams binder
Provide applications with the capability to retry critical sections of their
business logic. This is accomplished through a new API that allows the
critical path to be wrapped inside a Callable.
Adding tests and docs.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/945
* Reworking the PR
Remove the binder API that was added before for retrying.
Reuse binding provided RetryTemplate.
Tests and docs
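A minimal, illustrative analogue of the retry behavior described above (the original proposal wrapped the critical path in a Callable; the reworked PR reuses the binding-provided RetryTemplate). The helper below is a sketch and not part of the binder's API:

```java
public class RetrySketch {
    // Retry a critical section until it succeeds or attempts are exhausted,
    // in the spirit of RetryTemplate.execute(...). Illustrative only.
    public static <T> T executeWithRetry(java.util.function.Supplier<T> critical, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return critical.get();
            }
            catch (RuntimeException ex) {
                last = ex; // remember the failure and retry
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String result = executeWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("transient failure");
            }
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```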
* Cleanup
* Addressing PR review comments
* Fix typo
* Custom DLQ Destination Resolver
Allow applications to provide a custom DLQ destination resolver
implementation through a new interface, DlqDestinationResolver, exposed
as part of the binder's public contract. This interface is a BiFunction
extension that gives applications more fine-grained control over where
to route records in error.
Adding test to verify.
Adding docs.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/966
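A sketch shaped after the contract described above: a BiFunction that takes the failed record and the exception and returns the DLQ topic name. The record type is simplified to String here so the example is self-contained; the real interface operates on Kafka consumer records:

```java
import java.util.function.BiFunction;

public class DlqResolverSketch {
    // Route records to different DLQ topics depending on the failure.
    // Topic names are made up for illustration.
    public static final BiFunction<String, Exception, String> RESOLVER = (record, ex) ->
            ex instanceof IllegalArgumentException ? "input-errors-dlq" : "generic-dlq";

    public static void main(String[] args) {
        System.out.println(RESOLVER.apply("record-1", new IllegalArgumentException("bad payload"))); // input-errors-dlq
        System.out.println(RESOLVER.apply("record-2", new RuntimeException("other failure")));       // generic-dlq
    }
}
```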
* Add DlqDestinationResolver to MessageChannel based binder.
Tests and docs
Spring Kafka -> 2.6.3-SNAPSHOT
Spring Integration Kafka -> 5.4.0-SNAPSHOT
Kafka version -> 2.6.0
Use Kafka_2.13 for tests
Unignore the JAAS security tests.
Unignore a few Kafka Streams binder tests.
If the user has not explicitly set the `StreamsConfig.REPLICATION_FACTOR_CONFIG`,
set it from the binder property.
This is used for infrastructure topics (change logs and repartition topics).
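The behavior above can be sketched as a conditional default (plain Java, illustrative only; the real code works against the Kafka Streams configuration map):

```java
import java.util.HashMap;
import java.util.Map;

public class ReplicationFactorSketch {
    // Apply the binder-level replication factor only when the user has not
    // already set replication.factor (StreamsConfig.REPLICATION_FACTOR_CONFIG).
    public static void applyBinderDefault(Map<String, Object> streamsConfig, int binderReplicationFactor) {
        streamsConfig.putIfAbsent("replication.factor", binderReplicationFactor);
    }

    public static void main(String[] args) {
        Map<String, Object> explicit = new HashMap<>();
        explicit.put("replication.factor", 3);
        applyBinderDefault(explicit, 1);
        System.out.println(explicit.get("replication.factor")); // user value retained: 3

        Map<String, Object> unset = new HashMap<>();
        applyBinderDefault(unset, 1);
        System.out.println(unset.get("replication.factor")); // binder value applied: 1
    }
}
```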
If both Boot and binder level config for bootstrap servers are present, the Boot
one currently always wins regardless of any binder settings, unless the Boot one
evaluates to the default (localhost:9092). This is especially a problem in a
multi-binder scenario. Address this issue by simplifying the evaluation and
always giving the binder config the highest precedence.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/967
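The simplified precedence rule can be sketched as follows (illustrative helper, not the binder's actual resolution code):

```java
public class BootstrapServersSketch {
    // A binder-level bootstrap-servers setting always wins;
    // otherwise fall back to the Boot-level value.
    public static String resolve(String bootLevel, String binderLevel) {
        return (binderLevel != null && !binderLevel.isEmpty()) ? binderLevel : bootLevel;
    }

    public static void main(String[] args) {
        System.out.println(resolve("localhost:9092", "broker-a:9092")); // broker-a:9092
        System.out.println(resolve("localhost:9092", null));            // localhost:9092
    }
}
```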
* Kafka binder metrics improvements
KafkaBinderMetrics has a blocking call in which it waits for the default timeout of
60 seconds if the Kafka broker is down. This happens for each topic within a consumer
group. Refactor this code so that the check is performed in a periodic task;
if the on-demand check fails to return within a smaller time window (5 seconds),
return immediately with the latest value from the periodic task results.
Periodic task for computing the lags is run every 60 seconds.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/809
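The pattern described above can be sketched in self-contained form (this is an analogue, not the binder's actual code): a periodic task refreshes a cached lag value, and the on-demand read falls back to the cache when the live computation does not finish within a short timeout:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class LagMetricsSketch {
    private final AtomicLong cachedLag = new AtomicLong();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

    public LagMetricsSketch(Callable<Long> lagComputation, long periodSeconds) {
        // Periodic task keeps the cache fresh (every 60 seconds in the binder).
        scheduler.scheduleAtFixedRate(() -> {
            try {
                cachedLag.set(lagComputation.call());
            }
            catch (Exception ignored) {
                // keep the previous cached value on failure
            }
        }, 0, periodSeconds, TimeUnit.SECONDS);
    }

    public long currentLag(Callable<Long> lagComputation, long timeoutMillis) {
        Future<Long> live = scheduler.submit(lagComputation);
        try {
            return live.get(timeoutMillis, TimeUnit.MILLISECONDS);
        }
        catch (Exception timeoutOrFailure) {
            live.cancel(true);
            return cachedLag.get(); // latest value from the periodic task
        }
    }

    public void shutdown() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) {
        LagMetricsSketch metrics = new LagMetricsSketch(() -> 42L, 60);
        // A fast live computation returns its own value within the timeout.
        System.out.println(metrics.currentLag(() -> 7L, 5000));
        metrics.shutdown();
    }
}
```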
* Addressing PR review
* Customizing producer/consumer factories
Adding hooks by providing Producer and Consumer config
customizers to perform advanced configuration on the producer
and consumer factories.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/960
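An analogue of the hook described above (the interface and method shape below are simplified for a self-contained example; the binder's real customizer interfaces receive additional context such as the binding name):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigCustomizerSketch {
    // A customizer callback invoked with the final configuration map
    // before the producer (or consumer) factory is created.
    @FunctionalInterface
    interface ProducerConfigCustomizer {
        void configure(Map<String, Object> producerProperties);
    }

    public static Map<String, Object> buildProducerConfig(ProducerConfigCustomizer customizer) {
        Map<String, Object> config = new HashMap<>();
        config.put("acks", "1");      // binder-computed defaults would go here
        customizer.configure(config); // the application gets the last word
        return config;
    }

    public static void main(String[] args) {
        Map<String, Object> config = buildProducerConfig(props -> props.put("acks", "all"));
        System.out.println(config.get("acks")); // all
    }
}
```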
* Addressing PR review comments
* Further PR updates
* Deprecate autoCommitOffset in favor of ackMode
Deprecate autoCommitOffset in favor of using a newly introduced consumer property ackMode.
If the consumer is not in batch mode and ackEachRecord is enabled, then the container
will use the RECORD ackMode. Otherwise, the ackMode provided through this property is used.
If neither applies, the container defers to its default setting of BATCH ackMode.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/877
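The resolution rules above can be sketched directly (illustrative; the real container uses Spring Kafka's ContainerProperties.AckMode, of which only a subset is shown):

```java
public class AckModeSketch {
    enum AckMode { RECORD, BATCH, MANUAL } // subset, for illustration

    // ackEachRecord on a non-batch consumer forces RECORD; otherwise the
    // configured ackMode is used; failing both, the container default of
    // BATCH applies.
    public static AckMode resolve(boolean batchMode, boolean ackEachRecord, AckMode configured) {
        if (!batchMode && ackEachRecord) {
            return AckMode.RECORD;
        }
        if (configured != null) {
            return configured;
        }
        return AckMode.BATCH;
    }

    public static void main(String[] args) {
        System.out.println(resolve(false, true, null));            // RECORD
        System.out.println(resolve(false, false, AckMode.MANUAL)); // MANUAL
        System.out.println(resolve(true, false, null));            // BATCH
    }
}
```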
* Address PR review comments
* Addressing PR review comments
* Fix health indicator to properly indicate partition failure
* Add new flag to control binder health indicator behavior
* Regardless of which consumer is reading from a partition, if the binder
detects that a partition of the topic is without a leader, mark the binder
health as DOWN (if the flag is set to true).
* Remove synchronize block since only one thread executes the block
* Add Docs for the new binder flag
* Fix checkstyle issues
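The leader check above can be sketched as follows (illustrative; the real indicator inspects Kafka partition metadata and Spring Boot health statuses):

```java
import java.util.Arrays;
import java.util.List;

public class PartitionHealthSketch {
    // When the new flag is enabled, any partition without a leader marks the
    // binder health as DOWN, regardless of which consumer reads from it.
    public static String health(List<Boolean> partitionHasLeader,
            boolean considerDownWhenAnyPartitionHasNoLeader) {
        boolean leaderless = partitionHasLeader.stream().anyMatch(hasLeader -> !hasLeader);
        return (considerDownWhenAnyPartitionHasNoLeader && leaderless) ? "DOWN" : "UP";
    }

    public static void main(String[] args) {
        System.out.println(health(Arrays.asList(true, false, true), true));  // DOWN
        System.out.println(health(Arrays.asList(true, false, true), false)); // UP
    }
}
```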
* Change default replication factor to -1
The binder now uses a default value of -1 for the replication factor, signaling the
broker to use its defaults. Users on Kafka brokers older than 2.4
need to set this to the previous default value of 1 used in the binder.
In either case, if there is an admin policy that requires replication factor > 1,
then that value must be used instead.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/808
* Addressing PR review comments
The following two properties are removed from the producer configs in the binder.
(ProducerConfig.RETRIES_CONFIG, 0)
(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432)
In the case of RETRIES, it was a bug to override the Kafka client default (MAX_INT), and
in the latter case, the override is unnecessary as it matches the client default.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/952
- In a multi-binder configuration, the MeterRegistry is loaded in the outer context,
hence the conditional check on the MeterRegistry bean is removed
- Fix checkstyle issues
* Kafka Streams metrics in the binder for Boot 2.2 users are streamlined to
reflect the Micrometer native support added with version 1.4.0 which is
available through Boot 2.3. While Boot 2.3 users will get this native support
from Micrometer, Boot 2.2 users will still rely on the custom implementation
in the binder. This commit aligns that custom implementation more closely with
the native implementation.
* Disable the custom Kafka Streams metrics bean which is mentioned above
(KafkaStreamsBinderMetrics) when the application is on Boot 2.3, as this
implementation is only applicable for Boot 2.2.x.
* Update docs
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/880
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/935
Spring Cloud Function now checks if a converter supports the mime type before
calling it. Previously, the converter supported no mime types, so it was never
called, breaking Kafka Tombstone record processing (outbound).
The converter must support all mime types so it can perform a no-op conversion,
retaining the `KafkaNull`.
The abstract converter will return null whenever the payload is not a `KafkaNull`,
which is a signal to spring-cloud-function to try the next converter.
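The converter contract described above can be sketched in simplified form (a shape sketch only; the real converter works with Spring Messaging types and KafkaNull.INSTANCE):

```java
public class TombstoneConverterSketch {
    // Stand-in for KafkaNull.INSTANCE, used here so the example is self-contained.
    static final Object KAFKA_NULL = new Object();

    // Supports every mime type; passes KafkaNull through unchanged (a no-op
    // conversion) and returns null otherwise, signalling spring-cloud-function
    // to try the next converter.
    public static Object convert(Object payload) {
        return payload == KAFKA_NULL ? KAFKA_NULL : null;
    }

    public static void main(String[] args) {
        System.out.println(convert(KAFKA_NULL) == KAFKA_NULL); // true
        System.out.println(convert("regular payload"));        // null
    }
}
```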
The Kafka Streams topology actuator endpoint had a conflict with the JMX exporter
and was causing some IDE issues. Renaming this endpoint to kafkastreamstopology.
Renaming the underlying methods in this actuator endpoint implementation.
Updating docs.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/895
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/914
Boot no longer uses the deprecated JMX MBean scraping provided by Micrometer.
Add configuration to add Micrometer Meters when Micrometer and spring-kafka 2.5.x are on
the class path.
Micrometer for Streams
- workaround until Spring Kafka 2.5.3