Instead of relying on a property based approach, use proper
multiple output bindings to support the branching feature.
This is accomplished through a combination of using `SendTo`
annotation with multiple outputs and overriding the default
StreamListener method setup orchestration in the binder.
Resolves #265
- Ensure that branching can be done and that message conversion is applied
to all outbound data sent to multiple topics
- Refine MessageConversion logic in KStream binder
- Refactor message conversion code into a specific class
- Cleanup and refactoring
- Test to verify branching support
- More refinements in the way nativeEncoding/nativeDecoding logic works in KStream binder
- Doc changes for KStream binder
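The branching wiring described above can be sketched as follows. This is an illustrative example, not the binder's internal code: the binding interface, binding names, and predicates are hypothetical, and the array returned by `KStream.branch` is mapped positionally onto the `@SendTo` output order.

```java
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

// Hypothetical binding interface: one KStream input, two named outputs.
interface BranchBindings {

    @Input("input")
    KStream<String, String> input();

    @Output("evenOutput")
    KStream<String, String> evenOutput();

    @Output("oddOutput")
    KStream<String, String> oddOutput();
}

@EnableBinding(BranchBindings.class)
class BranchingProcessor {

    @StreamListener("input")
    @SendTo({"evenOutput", "oddOutput"})
    public KStream<String, String>[] process(KStream<String, String> input) {
        // branch() routes each record to the first predicate that matches;
        // the resulting streams are bound, in order, to the @SendTo outputs.
        return input.branch(
                (key, value) -> value.length() % 2 == 0,
                (key, value) -> true);
    }
}
```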
Aligning semantics of native encoding with core spring cloud stream.
With this change, if nativeEncoding is set, any message conversion is
skipped by the framework. It is up to the application to ensure that
the data sent on the outbound is of type byte[]; otherwise, the send fails.
If the nativeEncoding is false, which is the default, the binder does
the message conversion before sending the data on the outbound.
Corresponding changes in tests
Resolves #267
Resolves #259
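For example, the behavior above is driven by the standard per-binding producer property from core Spring Cloud Stream (binding name `output` is a placeholder):

```properties
# Skip framework message conversion on this output binding; the application
# must put byte[] (or a type the configured native serializer handles)
# on the outbound channel, otherwise the send fails.
spring.cloud.stream.bindings.output.producer.useNativeEncoding=true

# Default (false): the binder converts the payload before sending.
# spring.cloud.stream.bindings.output.producer.useNativeEncoding=false
```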
- Remove the kafka server dependency from build/binder artifact
- Remove AdminUtilOperation and KafkaAdminUtilOperation
- Rely on AdminClient for provisioning operations
- Update KStream components with the new changes
- Test updates
- Polishing
Add timeout to all the blocking AdminClient calls
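A minimal sketch of the bounded `AdminClient` call pattern; the topic name, partition/replica counts, timeout, and connection properties are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

class TopicProvisioner {

    void provision() throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Bound the otherwise-indefinitely-blocking future so an
            // unreachable broker cannot hang provisioning at startup.
            admin.createTopics(Collections.singletonList(
                    new NewTopic("example-topic", 8, (short) 1)))
                 .all()
                 .get(30, TimeUnit.SECONDS);
        }
    }
}
```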
Addressing PR review comments
Addressing PR review comments
Addressing PR review
Update SK, SIK to 2.1.0.RELEASE and 3.0.0.RELEASE respectively
Update for Kafka Streams class name changes
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/251
When using native decoding, the message emitted by the binder has no `id` or `timestamp`
headers. This can cause problems if the first SI endpoint uses a JDBC message store,
for example.
Add the ability to configure whether these headers are populated as well as the ability
to inject a custom message converter.
Fixes #58
Resolves #257
- add the ability to configure the dlq producer properties
- new property on KafkaConsumerProperties for dlqProducerProperties
- dlq sender refactoring in Kafka binder
- make dlq type raw so that non byte[] key/payloads can be sent
- add new tests for verifying dlq producer properties
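The new property nests standard Kafka producer configuration under the consumer binding; the exact keys below are assumptions based on the description above:

```properties
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
# Assumed nested producer overrides for the DLQ publisher introduced here:
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.compression.type=gzip
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.acks=all
```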
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/236
- Restore the mapped headers configuration for embedding headers
- Configure the header mapper based on the header mode
- On the outbound side, set the mapper to `null` unless the header mode is set and is not `headers`;
this prevents the channel adapter from setting up native record headers, for compatibility with pre-0.11 brokers
- On the inbound side, set the `BinderHeaders.NATIVE_HEADERS_PRESENT` header if native headers found
- Add a configuration capability allowing the user to provide his/her own header mapper
- Add a test case with 3 producers (embedded, native, and no headers) and a consumer to verify it can
consume all 3
Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1107
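Supplying a custom header mapper could look like the sketch below; the assumption is that the binder picks up a `KafkaHeaderMapper` bean, and the bean name and customization shown are illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.DefaultKafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaderMapper;

@Configuration
class HeaderMapperConfig {

    // Illustrative: a customized mapper for the binder to use
    // in place of its default.
    @Bean
    public KafkaHeaderMapper kafkaBinderHeaderMapper() {
        DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
        // Map outbound String-valued headers directly rather than as JSON.
        mapper.setMapAllStringsOut(true);
        return mapper;
    }
}
```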
Polishing - fix else clause when configuring the header mapper.
Remove the separate test module introduced in 1.x to test different versions of Kafka.
In 2.0, there is a single Kafka version that needs to be tested.
Move all the tests from the test module to the main binder module.
Remove the confluent schema registry integration test from the binder tests, as it
will be ported as a sample application. This test currently does the serialization/deserialization twice.
Fix #200
fixes spring-cloud/spring-cloud-stream-binder-kafka#193
Integrate missed commits and provide some polishing, improvements, and fixes
Remove `resetOffsets` option
Fix #170
Use parent version for spring-cloud-build-tools
Add update version script
Fixes for consumer and producer property propagation
Fix #142, #129, #156, #162
- Remove conditional configuration for Boot 1.4 support
- Filter properties before creating consumer and producer property sets
- Restore `configuration` as Map<String,String> for fixing Boot binding
- Remove 0.9 tests
SCSt-GH-913: Error Handling via ErrorChannel
Relates to spring-cloud/spring-cloud-stream#913
Fixes #162
- configure an ErrorMessageSendingRecoverer to send errors to an error channel, whether or not retry is enabled.
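An application can then consume those errors from the binding's error channel, whose name follows the `<destination>.<group>.errors` convention (destination and group names below are placeholders):

```java
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.stereotype.Component;

@Component
class ErrorHandlers {

    // "myTopic.myGroup.errors" follows the destination.group.errors naming
    // convention; the ErrorMessageSendingRecoverer publishes here whether
    // or not retry is enabled.
    @ServiceActivator(inputChannel = "myTopic.myGroup.errors")
    public void handle(ErrorMessage message) {
        // Inspect message.getPayload() (the Throwable) and the failed message.
    }
}
```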
Change Test Binder to use a Fully Wired Integration Context
- logging handler subscribed to errorChannel
Rebase; revert s-k to 1.1.x, Kafka to 0.10.1.1
Remove dependency overrides.
POM structure corrections
- move all intra-project deps to dependency management
- remove redundant overrides of Spring Integration Kafka
Remove reference to deleted module
- `spring-cloud-stream-binder-kafka-test-support` was previously
removed, but it was still added as an unused dependency to the
project
Remove duplicate debug statement.
unless you really really want to make sure users see this :)
GH-144: Add Kafka Streams Binder
Fix spring-cloud/spring-cloud-stream-binder-kafka#144
Addressing some PR reviews
Remove Java 8 lambda expressions from KStreamBoundElementFactory
Initial - add support for serdes per binding
Fixing checkstyle issues
test source 1.8
Convert integration tests to use Java 7
Internal refactoring
Remove payload serde code in KStreamBoundElementFactory and reuse it from core
Addressing PR comments
cleanup around payload deserialization
Update to latest serialization logic
Extract common properties class for KStream producer/consumer
Addressing PR review comments
* Remove redundant dependencies for KStream Binder
Documentation for KStream binder
* Documentation for KStream binder
Fix #160
* Addressing PR review comments
* Addressing PR review comments
* Addressing PR review comments
Fixes#181
SCSt-GH-916: Configure Producer Error Channel
Requires: https://github.com/spring-cloud/spring-cloud-stream/pull/1039
Publish send failures to the error channel.
Add docs
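Enabling the producer error channel is a standard binding property; async send failures are then published to the binding's error channel (binding name `output` is a placeholder):

```properties
# Publish async send failures for this output binding to its error channel.
spring.cloud.stream.bindings.output.producer.errorChannelEnabled=true
```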
Revert to Spring Kafka 1.1.6
GH-62: Remove Tuple Kryo Registrar Wrapper
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/62
No longer needed.
GH-169: Use the Actual Partition Count (Producer)
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/169
If the configured `partitionCount` is less than the physical partition count on an existing
topic, the binder emits this message:
The `partitionCount` of the producer for topic partJ.0 is 3, smaller than the actual partition count of 8 of the topic.
The larger number will be used instead.
However, that is not true; the configured partition count is used.
Override the configured partition count with the actual partition count.
0.11 Binder
Initial Commit
- Transactional Binder
Version Updates
- Headers support
KStreams and 0.11
GH-188: KStream Binder Properties
KStream binder: support class for application level properties
Provide commonly used KStream application properties for convenient access at runtime
Fix #188
Since windowing operations are common in KStream applications, make the TimeWindows object
available as a first-class bean (using auto-configuration). This bean is only created if the
relevant properties are provided by the user.
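The auto-configured `TimeWindows` bean is driven by binder-level properties; the property names below are assumptions based on the description, with values in milliseconds:

```properties
# A hopping window of 60s advancing every 10s; the TimeWindows bean is
# only created when these properties are present.
spring.cloud.stream.kstream.timeWindow.length=60000
spring.cloud.stream.kstream.timeWindow.advanceBy=10000
```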
KStream binder: producer default Serde changes
Change the way the default Serde classes are selected for key and value
in producer when only one of those is provided by the user.
Fix #190
KStream binder cleanup
merge cleanup
re-update kafka version
2.0 related changes
Fix tests
Upgrade KStream tests
converting anonymous classes to lambda expressions
Renaming Kafka-11 qualifier from test module
Refactoring test class names
cleanup
adding .jdk8 files
Fix KafkaBinderMetrics in 2.0
Fix #199
Addressing PR review comments
Addressing PR review comments