In the multi-binder scenario, make KafkaBinderConfigurationProperties conditional
so that the bean is only created in the corresponding multi-binder context.
In the normal case, the KafkaBinderConfigurationProperties bean is created by the main context.
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/815
There was an issue with Kafka Streams multi binders in which the properties were not scanned
properly: the last configuration always won, wiping out any previous environment properties.
Address this issue by keeping a KafkaBinderConfigurationProperties instance per multi-binder
environment and explicitly invoking Boot property binding on each of them.
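A minimal sketch of what explicitly invoking Boot property binding per binder environment can look like, using Spring Boot's Binder API; how the per-binder Environment is obtained is outside this sketch, and the structure is illustrative rather than the binder's exact internals.

```java
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.core.env.Environment;

class PerBinderPropertiesBinding {

    // Bind the isolated multi-binder Environment onto that binder's own
    // properties instance so one binder's configuration cannot overwrite
    // another's.
    static KafkaBinderConfigurationProperties bind(Environment binderEnvironment,
            KafkaBinderConfigurationProperties properties) {
        Binder.get(binderEnvironment)
                .bind("spring.cloud.stream.kafka.binder", Bindable.ofInstance(properties));
        return properties;
    }
}
```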
Add a test to verify.
* Kafka Streams - DLQ control per consumer binding
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/800
* Fine-grained DLQ control and deserialization exception handlers per input binding
(see the sketch after this list)
* Deprecate KafkaStreamsBinderConfigurationProperties.SerdeError in favor of the
new `KafkaStreamsBinderConfigurationProperties.DeserializationExceptionHandler`
enum-based properties
* Add tests and update docs
* Address PR review comments
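A hedged sketch of the new enum-based scheme; the constant names below are an assumption based on the deserialization exception handlers the binder supports, and the property path in the comment is illustrative.

```java
// Illustrative shape of the new enum; a per-input-binding property would then
// look something like (binding name illustrative):
//   spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer
//       .deserializationExceptionHandler=sendToDlq
public enum DeserializationExceptionHandler {
    logAndContinue,
    logAndFail,
    sendToDlq
}
```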
In the Kafka Streams binder, the StreamsBuilderFactoryBean customizer was being called
prematurely, before the object was created. Fix this issue.
Add a test to verify
When the same metric name is repeated, some registry implementations, such as the
Micrometer Prometheus registry, fail to register the duplicate entry. Fix this issue by
not registering duplicate metric names.
Also, address an issue with multiple processors and metrics in the same application
by prepending the application ID of the Kafka Streams processor to the metric name itself.
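A minimal sketch of that de-duplication, assuming Micrometer; the method shape and names are illustrative, not the binder's actual internals.

```java
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;

class KafkaStreamsMetricsSketch {

    // Prefix the metric with the processor's application ID so multiple
    // processors in one app do not collide, and skip names that are already
    // registered, since registries such as Prometheus reject duplicates.
    static void register(MeterRegistry registry, String applicationId,
            String metricName, double value) {
        String name = applicationId + "." + metricName;
        if (registry.find(name).gauge() == null) {
            Gauge.builder(name, () -> value).register(registry);
        }
    }
}
```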
Resolves #788
Spring Kafka provides a StreamsBuilderFactoryBeanCustomizer. Use this in the binder so that
applications can plug in such a bean to further customize the StreamsBuilderFactoryBean and KafkaStreams.
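For example, an application could provide a bean along these lines (the uncaught exception handler body is purely illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;

@Configuration
class FactoryBeanCustomization {

    // Customize the StreamsBuilderFactoryBean and, through its
    // KafkaStreamsCustomizer hook, the KafkaStreams object itself.
    @Bean
    public StreamsBuilderFactoryBeanCustomizer customizer() {
        return factoryBean -> factoryBean.setKafkaStreamsCustomizer(
                kafkaStreams -> kafkaStreams.setUncaughtExceptionHandler(
                        (thread, ex) -> System.err.println("Uncaught: " + ex)));
    }
}
```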
Resolves #784
In order to make Kafka Streams binder based function apps more consistent
with the wider functional support in ScSt, it should require the property
spring.cloud.stream.function.definition to signal which functions to activate.
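For example, an app with the function bean below would set spring.cloud.stream.function.definition=process to activate it; the names here are illustrative.

```java
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class UppercaseProcessorConfig {

    // Activated only when spring.cloud.stream.function.definition=process.
    @Bean
    public Function<KStream<Object, String>, KStream<Object, String>> process() {
        return input -> input.mapValues(String::toUpperCase);
    }
}
```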
Resolves #783
* Allow plugging in a custom StreamPartitioner on the Kafka Streams producer
(see the sketch after this list).
* Fix a bug where the overriding of native encoding/decoding settings by the binder was not
working properly. This fix is done by providing a custom ConfigurationPropertiesBindHandlerAdvisor.
* Add a test to verify
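A hedged sketch of such a StreamPartitioner bean; how the binder is pointed at it (for example, a producer property naming the bean) is an assumption here, not something this change spells out.

```java
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class PartitionerConfig {

    // Route records by key hash; null keys go to partition 0.
    @Bean
    public StreamPartitioner<String, String> streamPartitioner() {
        return (topic, key, value, numPartitions) ->
                key == null ? 0 : Math.abs(key.hashCode()) % numPartitions;
    }
}
```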
Resolves #782
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/628
Allow users to specify the number of partitions in the dead letter topic.
If the property is set to 1 and the `minPartitionCount` binder property is 1,
override the default behavior and always publish to partition 0.
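A minimal sketch of that partition-selection rule; the class, method, and parameter names are illustrative:

```java
class DlqPartitionRule {

    // With a single-partition DLQ and minPartitionCount=1, always publish to
    // partition 0; otherwise keep the failed record's original partition.
    static int dlqPartition(Integer dlqPartitions, int minPartitionCount, int sourcePartition) {
        return (dlqPartitions != null && dlqPartitions == 1 && minPartitionCount == 1)
                ? 0 : sourcePartition;
    }
}
```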
When more than one CompositeMessageConverter bean was defined in the same ApplicationContext,
the missing qualifiers on the injection points caused the application to fail during
instantiation due to bean conflicts. The injection points for CompositeMessageConverter
have been marked with the appropriate qualifier to inject the Spring Cloud CompositeMessageConverter.
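A minimal sketch of a qualified injection point of this kind, using plain Spring's @Qualifier; the bean name shown is illustrative, not the actual qualifier the fix uses.

```java
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.stereotype.Component;

@Component
class ConverterClient {

    private final CompositeMessageConverter converter;

    // The qualifier disambiguates between multiple CompositeMessageConverter
    // beans in the context; the bean name here is illustrative.
    ConverterClient(@Qualifier("compositeMessageConverter") CompositeMessageConverter converter) {
        this.converter = converter;
    }
}
```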
Resolves #775
When native encoding is disabled, the outbound conversion fails if the record
value is null. Handle this scenario more gracefully by skipping the conversion
and allowing the record to be sent downstream.
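A minimal sketch of that null-value guard, with illustrative names rather than the binder's internals:

```java
import java.util.function.Function;

class OutboundConversionGuard {

    // A null record value cannot be converted; pass it through unchanged so
    // the record can still be sent downstream.
    static Object outboundValue(Object value, Function<Object, Object> convert) {
        return (value == null) ? null : convert.apply(value);
    }
}
```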
Resolves #774
When both binders are present, there were ambiguities in the way the binders
reported health status. If one binder did not have any bindings, the
total health status was reported as DOWN. Fix these ambiguities as follows.
If both binders have bindings present and the Kafka broker is reachable, report
the status as UP along with the associated details. If one of the binders does not
have bindings but the Kafka broker can be reached, mark that particular binder's
status as UNKNOWN and report the overall status as UP.
If the Kafka broker is down, report both binders as DOWN and
mark the overall status as DOWN.
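A minimal sketch of that aggregation rule using Spring Boot actuator statuses; the method shapes are illustrative, not the indicator's actual code:

```java
import org.springframework.boot.actuate.health.Status;

class BinderHealthRule {

    // Broker unreachable: the binder is DOWN. Broker reachable: UP when the
    // binder has bindings, UNKNOWN when it has none.
    static Status binderStatus(boolean brokerReachable, boolean hasBindings) {
        if (!brokerReachable) {
            return Status.DOWN;
        }
        return hasBindings ? Status.UP : Status.UNKNOWN;
    }

    // The overall status only goes DOWN when the broker is unreachable.
    static Status overallStatus(boolean brokerReachable) {
        return brokerReachable ? Status.UP : Status.DOWN;
    }
}
```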
Resolves #552
When multiple Kafka Streams processors are present, the health indicator
overwrites the previous processor's health info. Address this issue.
Resolves #771