Compare commits


113 Commits
3.1.x ... main

Author SHA1 Message Date
Soby Chacko
31ea106834 Update issue templates 2022-05-11 11:57:10 -04:00
Soby Chacko
02b30bf430 Update issue templates 2022-05-11 11:55:12 -04:00
Soby Chacko
9aac07957c Update issue templates 2022-05-11 11:53:09 -04:00
Soby Chacko
63a7da1b9e Update issue templates 2022-05-11 11:51:30 -04:00
Soby Chacko
2f986495fa Update README.adoc 2022-03-30 11:57:09 -04:00
Soby Chacko
382a3c1c81 Update README.adoc 2022-03-30 11:54:30 -04:00
Soby Chacko
3045019398 Version downgrades
Temporarily downgrade SK to 3.0.0-M1 and Kafka client to 3.0.0
2022-02-22 17:47:26 -05:00
Soby Chacko
e848ec8051 Remove call to deprecated method in Spring Kafka 2022-02-15 11:42:46 -05:00
Gary Russell
c472b185be GH-1195: Fix Pause/Resume Documentation
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1195

Remove obsolete documentation.

**cherry-pick to 3.2.x**
2022-02-07 10:49:19 -05:00
Soby Chacko
235146e29a Revert "Revert "Update documented version of kafka-clients""
This reverts commit 65960e101f.
2022-01-27 10:53:54 -05:00
Soby Chacko
8eecf03827 Revert "Revert "Update Kafka client version to 3.1.0""
This reverts commit 01dbb49313.
2022-01-27 10:53:27 -05:00
Soby Chacko
3f1f7dbbe8 Revert "Revert "Fixed invalid java code snippet""
This reverts commit d2c12873d0.
2022-01-27 10:52:22 -05:00
buildmaster
45245f4b92 Going back to snapshots 2022-01-27 15:39:18 +00:00
buildmaster
9922922036 Update SNAPSHOT to 4.0.0-M1 2022-01-27 15:37:22 +00:00
Oleg Zhurakousky
c3b6610c9f Update SK and SIK version 2022-01-27 16:24:05 +01:00
Oleg Zhurakousky
d2c12873d0 Revert "Fixed invalid java code snippet"
This reverts commit 4cbcb4049b.
2022-01-27 16:16:26 +01:00
Oleg Zhurakousky
01dbb49313 Revert "Update Kafka client version to 3.1.0"
This reverts commit a25e2ea0b3.
2022-01-27 16:16:10 +01:00
Oleg Zhurakousky
65960e101f Revert "Update documented version of kafka-clients"
This reverts commit 69e377e13b.
2022-01-27 16:16:02 +01:00
Jay Lindquist
69e377e13b Update documented version of kafka-clients
It looks like this has been incorrect for a few versions. The main branch is currently pulling in 3.1.0

https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/blob/main/pom.xml#L26
2022-01-26 11:53:37 -05:00
Soby Chacko
a25e2ea0b3 Update Kafka client version to 3.1.0
Test changes
2022-01-25 17:18:04 -05:00
Rex Ijiekhuamen
4cbcb4049b Fixed invalid java code snippet 2022-01-24 10:22:32 -05:00
Soby Chacko
310683987a Temporarily disabling a test 2022-01-19 12:46:23 -05:00
Soby Chacko
cd02be57e5 Test package changes 2022-01-18 15:42:47 -05:00
Soby Chacko
ee888a15ba Fixing test 2022-01-18 12:59:47 -05:00
Soby Chacko
417665773c Merge branch '4.x' into main 2022-01-18 11:50:45 -05:00
Soby Chacko
d345ac88b1 Enable custom binder health check implementation
Currently, KafkaBinderHealthIndicator is not customizable and is included by default
when Spring Boot actuator is on the classpath. Fix this by allowing the application
to provide a custom implementation. A new marker interface called KafkaBinderHealth
can be used by the application to provide a custom HealthIndicator implementation, in
which case the binder's default implementation will be excluded.

Tests and docs changes.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1180
2022-01-13 12:06:37 -05:00
Soby Chacko
577ffbb67f Enable custom binder health check implementation
Currently, KafkaBinderHealthIndicator is not customizable and is included by default
when Spring Boot actuator is on the classpath. Fix this by allowing the application
to provide a custom implementation. A new marker interface called KafkaBinderHealth
can be used by the application to provide a custom HealthIndicator implementation, in
which case the binder's default implementation will be excluded.

Tests and docs changes.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1180
2022-01-13 09:56:32 -05:00
Soby Chacko
31b91f47e4 Fixing InteractiveQueryService test.
Fixing checkstyle issues.
2022-01-12 13:03:12 -05:00
Soby Chacko
df9d04fd12 Retries for HostInfo in InteractiveQueryService
InteractiveQueryService methods for finding the host info for Kafka Streams
currently throw exceptions if the underlying KafkaStreams are not ready yet.
Introduce a retry mechanism so that the users can control the behaviour of
these methods by providing the following properties.

spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts (default 1)
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod (default 1000 ms).

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1185
2022-01-11 18:57:00 -05:00
Soby Chacko
3770db7844 Retries for HostInfo in InteractiveQueryService
InteractiveQueryService methods for finding the host info for Kafka Streams
currently throw exceptions if the underlying KafkaStreams are not ready yet.
Introduce a retry mechanism so that the users can control the behaviour of
these methods by providing the following properties.

spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts (default 1)
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod (default 1000 ms).

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1185
2022-01-11 18:49:23 -05:00
Eduard Domínguez
e512b7a2c6 Fix: KeySerde setup not using expected key type headers
checkstyle fixes
2022-01-11 14:54:33 -05:00
Eduard Domínguez
406e20f19c Fix: KeySerde setup not using expected key type headers
checkstyle fixes
2022-01-11 14:37:20 -05:00
Soby Chacko
da9bc354e4 StreamListener docs cleanup.
Fixing Kafka Streams composition tests due to an application.id issue.
2022-01-07 20:04:55 -05:00
Soby Chacko
3cc3680f63 Event type routing improvements (Kafka Streams)
When routing by event types, the deserializer omits the
topic and header information. Fixing this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1186
2022-01-05 19:33:35 -05:00
Soby Chacko
648188fc6b Event type routing improvements (Kafka Streams)
When routing by event types, the deserializer omits the
topic and header information. Fixing this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1186
2022-01-05 19:30:36 -05:00
Soby Chacko
d1a9eab14b Default maven-antrun version 2022-01-04 14:53:40 -05:00
Soby Chacko
1cdfb962c9 checkstyle fix 2022-01-03 19:00:19 -05:00
Soby Chacko
fb03d2ae8e Version upgrades
4.0.0-SNAPSHOT
  Spring Kafka - 3.0.0-SNAPSHOT
  Spring Integration Kafka - 6.0.0-SNAPSHOT
  Spring Cloud Stream - 4.0.0-SNAPSHOT

Code changes for Jakarta
2022-01-03 19:00:12 -05:00
Eduard Domínguez
921b47d1e4 GH-1176: KeyValueSerdeResolver improvements
Use extended properties when initializing Consumer and Producer Serdes.

Updated copyright years and authors.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1176
2022-01-03 19:00:05 -05:00
Soby Chacko
be474f643a KafkaStreams binder health check improvements
Allow health checks on KafkaStreams processors that are currently stopped through
actuator bindings endpoint. Add this only as an opt-in feature through a new binder
level property - includeStoppedProcessorsForHealthCheck which is false by default
to preserve the current health indicator behavior.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
Resolves #1175
2022-01-03 18:59:57 -05:00
Soby Chacko
0be87c3666 New tips-tricks-recipes section in docs
Migrate the recipe section in the Spring Cloud Stream Samples
repository as Tips, Tricks and Recipes in the Kafka binder main docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1173
Resolves #1174
2022-01-03 18:59:38 -05:00
Soby Chacko
e7bf404fce Fix PartitioningInterceptor CCE
The newly added DefaultPartitioningInterceptor must be explicitly
checked in order to avoid a CCE.

Related to resolving https://github.com/spring-cloud/spring-cloud-stream/issues/2245

Specifically for this: https://github.com/spring-cloud/spring-cloud-stream/issues/2245#issuecomment-977663452
2022-01-03 18:59:30 -05:00
Soby Chacko
e9a8b4af7e GH-1170: Schema registry certificates
Move classpath: resources provided as schema registry certificates
into a local file system location.

Adding test and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1170
2022-01-03 18:59:23 -05:00
Pommerening, Nico
f9dfbe09f7 GH-1161: InteractiveQueryService improvements
This PR safeguards state store instances in case there are multiple KafkaStreams
instances present that have distinct application IDs but share State Store Names.

Change is backwards compatible: In case no KafkaStreams association of the thread
can be found, all local state stores are queried as before.

In case an associated KafkaStreams Instance is found, but required StateStore is
not found in this instance, a warning is issued but backwards compatibility is
preserved by looking up all state stores.

Store within KafkaStreams instance of thread is preferred over "foreign" store with same name.

Warning is issued if requested store is not found within KafkaStreams instance of thread.

The main benefit here is to get rid of randomly selecting stores across all KafkaStreams instances
in case a store is contained within multiple streams instances with same name.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1161
2022-01-03 18:59:15 -05:00
Eduard Domínguez
63b306d34c GH-1176: KeyValueSerdeResolver improvements
Use extended properties when initializing Consumer and Producer Serdes.

Updated copyright years and authors.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1176
2021-12-10 14:02:29 -05:00
buildmaster
5cd8e06ec6 Bumping versions to 3.2.2-SNAPSHOT after release 2021-12-01 16:58:07 +00:00
buildmaster
79be11c9e9 Going back to snapshots 2021-12-01 16:58:07 +00:00
buildmaster
fc4358ba10 Update SNAPSHOT to 3.2.1 2021-12-01 16:55:37 +00:00
buildmaster
f3d2287b70 Bumping versions to 3.2.1-SNAPSHOT after release 2021-12-01 13:16:23 +00:00
buildmaster
220ae98bcc Going back to snapshots 2021-12-01 13:16:23 +00:00
buildmaster
bd3eebd897 Update SNAPSHOT to 3.2.0 2021-12-01 13:14:14 +00:00
Soby Chacko
ed8683dcc2 KafkaStreams binder health check improvements
Allow health checks on KafkaStreams processors that are currently stopped through
actuator bindings endpoint. Add this only as an opt-in feature through a new binder
level property - includeStoppedProcessorsForHealthCheck which is false by default
to preserve the current health indicator behavior.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
Resolves #1175
2021-12-01 10:51:39 +01:00
Soby Chacko
60b6604988 New tips-tricks-recipes section in docs
Migrate the recipe section in the Spring Cloud Stream Samples
repository as Tips, Tricks and Recipes in the Kafka binder main docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1173
Resolves #1174
2021-12-01 10:36:31 +01:00
Oleg Zhurakousky
a3e76282b4 Merge pull request #1172 from sobychacko/fix-partitioning-interceptor
Fix PartitioningInterceptor CCE
2021-11-29 18:12:59 +01:00
Soby Chacko
c9687189b7 Fix PartitioningInterceptor CCE
The newly added DefaultPartitioningInterceptor must be explicitly
checked in order to avoid a CCE.

Related to resolving https://github.com/spring-cloud/spring-cloud-stream/issues/2245

Specifically for this: https://github.com/spring-cloud/spring-cloud-stream/issues/2245#issuecomment-977663452
2021-11-24 13:16:00 -05:00
Soby Chacko
5fcdf28776 GH-1170: Schema registry certificates
Move classpath: resources provided as schema registry certificates
into a local file system location.

Adding test and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1170
2021-11-23 18:23:55 -05:00
Soby Chacko
d334359cd4 Update spring-kafka to 2.8.0 Release 2021-11-23 10:22:18 -05:00
Pommerening, Nico
aff0dc00ef GH-1161: InteractiveQueryService improvements
This PR safeguards state store instances in case there are multiple KafkaStreams
instances present that have distinct application IDs but share State Store Names.

Change is backwards compatible: In case no KafkaStreams association of the thread
can be found, all local state stores are queried as before.

In case an associated KafkaStreams Instance is found, but required StateStore is
not found in this instance, a warning is issued but backwards compatibility is
preserved by looking up all state stores.

Store within KafkaStreams instance of thread is preferred over "foreign" store with same name.

Warning is issued if requested store is not found within KafkaStreams instance of thread.

The main benefit here is to get rid of randomly selecting stores across all KafkaStreams instances
in case a store is contained within multiple streams instances with same name.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1161
2021-11-18 14:08:22 -05:00
Oleg Zhurakousky
ba122bd39d Changes related to GH-2245 from core 2021-11-16 14:52:58 +01:00
Oleg Zhurakousky
7840decc86 Changes related to GH-2245 from core 2021-11-16 14:43:43 +01:00
Soby Chacko
486469da51 Kafka binder test migration
- EnableBinding to functional
2021-11-11 16:34:22 -05:00
Soby Chacko
ed98f1129d Kafka Streams binder deprecated component removals 2021-11-09 14:07:23 -05:00
Soby Chacko
c32be995f6 4.0.x changes for Kafka Streams tests
Migrating StreamListener-based Kafka Streams binder tests to
use the functional model
2021-11-08 19:21:49 -05:00
buildmaster
d37cbc79d7 Going back to snapshots 2021-11-03 09:32:29 +00:00
buildmaster
7a03eeed02 Update SNAPSHOT to 3.2.0-RC1 2021-11-03 09:31:02 +00:00
Oleg Zhurakousky
3a88839a5f Disable 'testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest' temporarily 2021-11-03 10:17:03 +01:00
Soby Chacko
c9a07729dd Version updates and compile fixes
Spring Kafka: 2.8.0-RC1
Spring Integration: 5.5.5
Kafka Clients: 3.0.0

Remove schema registry references.
Updates for removed classes/deprecations in Kafka Streams client.
2021-11-01 22:11:06 -04:00
Soby Chacko
07f10f6eb5 Cleaning up
* Disconnect spring-cloud-schema-registry-client from Kafka Streams binder
  that is used for testing
* Deprecate MessageConverterDelegateSerde
* Remove MessageConverterDelegateSerdeTest
2021-10-28 19:46:46 -04:00
Gary Russell
6fdc663349 GH-1031: Retry/DLQ in Binder or Container
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1031

Provide a mechanism to provide retry/dlq configuration when adding a custom error handler,
effectively moving this functionality from the binder to the container.

Address PR comments; fix test to use embedded broker.
2021-10-25 15:10:21 -04:00
buildmaster
f3a954fad7 Going back to snapshots 2021-10-19 09:29:32 +00:00
buildmaster
7be0f1be23 Update SNAPSHOT to 3.2.0-M3 2021-10-19 09:28:11 +00:00
Oleg Zhurakousky
0b687ad0ab Update SK to 2.8.0-RC1 2021-10-19 11:16:55 +02:00
Soby Chacko
c0bece64bd README cleanup
The overview doc gets unnecessarily copied to the GitHub
repository root README. This only needs to reside in the
reference manual docs. Cleaning up the process that generates
the root README for the repository.
2021-10-12 17:20:33 -04:00
Gary Russell
2efd29fb27 GH-1138: HealthIndicator Improvements
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1138

Don't report DOWN if a container is stopped normally.
This is a valid state when containers are not auto-startup or are stopped while
the app remains running.

Containers are stopped abnormally when
- a listener throws an `Error`
- a `CommonContainerStoppingErrorHandler` (or similar) is configured to stop the
  container after an error.
2021-10-04 17:13:01 -04:00
Soby Chacko
82a3306cb9 GH-1157: Issues with Kafka Streams and Kotlin
Kafka Streams binder erroneously tries to parse regular,
non-Kafka-Streams Kotlin function registrations. Ignore
function beans ending in _registration in the Kafka Streams binder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1157
2021-10-01 11:09:58 -04:00
buildmaster
af0485e241 Going back to snapshots 2021-10-01 14:13:49 +00:00
buildmaster
ea8912b011 Update SNAPSHOT to 3.2.0-M2 2021-10-01 14:12:23 +00:00
Oleg Zhurakousky
d76d916970 Upgrade spring-kafka 2021-10-01 13:48:08 +02:00
Soby Chacko
ac0e462ed2 Doc clarification for Kafka Streams binder prefix
Polishing the docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1134
2021-09-28 18:07:52 -04:00
Soby Chacko
bd1b49222c GH-1156: Kafka Streams binder composition issues
When both regular Kafka and Kafka Streams functions are present,
the code that was added recently for function composition in the
Kafka Streams binder was accidentally creating a bindable proxy
factory bean for non-Kafka-Streams functions. Resolving this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1156
2021-09-28 15:43:06 -04:00
Soby Chacko
9fd16416d6 GH-1149: Kafka Streams global config issues
When there are multiple functions, streamConfigGlobalProperties are overridden
for subsequent functions after a binding-specific config takes effect in a function.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1149
2021-09-24 10:01:26 -04:00
bono007
a4c38b3453 GH-1152: Property binding in Kafka Streams binder
Add default mappings provider for Kafka Streams (move kafka streams default mapping to new provider)

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1152
2021-09-23 13:19:35 -04:00
Soby Chacko
a8c948a6b2 GH-1148: Native changes required
KafkaBinderConfiguration class needs more pruning for native compilation

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1148
2021-09-21 18:22:33 -04:00
Soby Chacko
d57091d791 GH-1145: Remove destroying producer factory
Remove the unnecessary call to destroy the producer when checking for partitions.
This way, the producer is cached and reused the first time data is produced.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1145
2021-09-16 14:46:37 -04:00
Gary Russell
2b7f6ecb96 Fix Doc Typos for KafkaBindingRebalanceListener 2021-09-15 17:35:17 -04:00
Soby Chacko
53d32c2332 GH-1140: CommonErrorHandler per consumer binding (#1143)
* GH-1140: CommonErrorHandler per consumer binding

Setting CommonErrorHandler on consumer binding through its bean name.
If present, binder will resolve this bean and assign it on the listener
container.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1140

* Addressing PR review
2021-09-14 10:17:32 -04:00
Soby Chacko
b7ebc185e7 GH-1141: Streams cleanup docs
Clarify streams cleanup default in the docs.
Effective from Spring Kafka 2.7, no cleanup will be performed on shutdown.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1141
2021-09-13 16:35:36 -04:00
Soby Chacko
56e25383f8 Fix wrong property in docs
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1137
2021-09-03 15:38:50 -04:00
Soby Chacko
b4c7c36229 GH-1135: Disable container retries when no DLQ set (#1136)
* GH-1135: Disable container retries when no DLQ set

Disable default container retries when binding retries are enabled
(maxAttempts > 1) and no DLQ set.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1135

* Address PR review comments

* Addressing PR review
2021-09-01 13:29:36 -04:00
Soby Chacko
6eed115cc9 Kafka Streams binder tests cleanup 2021-08-27 20:03:56 -04:00
Soby Chacko
8e6d07cc7b Update Kafka Streams branching docs/tests
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1133
2021-08-27 18:16:42 -04:00
Soby Chacko
adde49aab3 Refactoring Kafka Streams join tests
Coalesce the stream join tests in Kafka Streams binder around the functional model.
Remove duplicating the same join tests using the StreamListener model.
2021-08-25 15:45:38 -04:00
Soby Chacko
e500138486 Consumer/Producer prefix in Kafka Streams binder (#1131)
* Consumer/Producer prefix in Kafka Streams binder

Kafka Streams allows the applications to separate the consumer and producer
properties using consumer/producer prefixes. Add these prefixes automatically
if they are missing from the application.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1065

* Addressing PR review comments
2021-08-25 13:57:42 -04:00
Soby Chacko
970bac41bb Update pom.xml
Make the Maven configuration consistent with core Spring Cloud Stream
2021-08-24 16:54:19 -04:00
Soby Chacko
afe39bf78a Kafka Streams binding lifecycle changes
Start Kafka Streams bindings only when they are not running.
Similarly, stop them only if they are running.

Without these guards in the bindings for KStream, KTable, and GlobalKTable,
NPEs may occur because the backing concurrent collections in KafkaStreamsRegistry
cannot find the proper KafkaStreams object, especially when the StreamsBuilderFactory
bean is already stopped through the binder-provided manager.
2021-08-24 16:53:23 -04:00
Derek Eskens
c1ad3006e9 Fixing consumer config typo in overview.adoc
ConsumerConfigCustomizer is the correct name of the interface.
2021-08-23 19:33:15 -04:00
Soby Chacko
ba2c3a05c9 GH-1129: Kafka Binder Metrics Improvements
Avoid blocking committed() call in KafkaBinderMetrics in a loop for
each topic partition.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1129
2021-08-23 16:14:45 -04:00
Soby Chacko
739b499966 Honoring auto startup in Kafka Streams binder (#1127)
* Honoring auto startup in Kafka Streams binder

When using the Kafka Streams binder, the processors are started
unconditionally, i.e. autoStartup is always true by default.
If spring.kafka.streams.auto-startup is set, then honor that
as the auto-startup flag in the binder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1126

* Addressing PR review comments

Auto startup flag is honored per individual consumer bindings as well.

* Addressing PR review comments

* Making KafkaStreamsRegistry#kafkaStreams set to use a concurrent Set.
2021-08-23 14:18:05 -04:00
Soby Chacko
1b26f5d629 GH-1112: Fix OOM in DlqSender.sendToDlq
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1112
2021-08-20 14:57:15 -04:00
Soby Chacko
99c323e314 Document Kafka Streams and Sleuth integration
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1102
2021-08-16 20:45:08 -04:00
Soby Chacko
f4d3715317 Recycle KafkaStreams Objects
In the event Kafka Streams bindings are restarted (stop/start)
using the actuator bindings endpoints, the underlying KafkaStreams
objects are not recycled. After restarting, it still sees the previous
KafkaStreams object. Addressing this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1119
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1120
2021-08-12 15:47:40 -04:00
Soby Chacko
d7907bbdcc Docs updates for Kafka Streams skipAndContinue 2021-08-12 14:43:22 -04:00
buildmaster
9b5e735f74 Going back to snapshots 2021-07-30 15:12:01 +00:00
buildmaster
201668542b Update SNAPSHOT to 3.2.0-M1 2021-07-30 15:10:38 +00:00
Soby Chacko
a5f01f9d6f GH-1110: Kafka Streams state machine changes
Restore Kafka Streams error state behavior in the binder, equivalent
to Kafka Streams prior to 2.8. Starting with 2.8, users can customize
the way uncaught errors are interpreted via an UncaughtExceptionHandler
in the application. The binder now sets a default handler that shuts down
the Kafka Streams client if there is an uncaught error.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
2021-07-29 13:35:08 -04:00
Soby Chacko
912c47e3ac Fix failing tests in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1109
2021-07-28 18:41:22 -04:00
Soby Chacko
001882de4e Upgrade versions
Spring Kafka: 2.8.0-M1
Spring Integration Kafka: 5.5.2
Kafka: 2.8.0

Ignore a few Kafka Streams binder tests temporarily.
2021-07-27 20:01:08 -04:00
Soby Chacko
54ac274ea3 GH-1085: Allow KTable binding on the outbound
At the moment, Kafka Streams binder only allows KStream bindings on the outbound.
There is a delegation mechanism in which we can still use KStream for the output binding
while allowing the applications to provide a KTable type as the function return type.

Update docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1085

Resolves #1105
2021-07-16 09:46:04 +02:00
Soby Chacko
80b707e5e9 Fixing Kafka binder tests.
* Adding a delay in testDlqWithNativeSerializationEnabledOnDlqProducer to avoid a race condition.
  Awaitility is used to wait for the proper condition in this test.

* In the Micrometer registry tests, properly wait for the first message to arrive on the outbound
  so that the producer factory gets a chance to add the Micrometer producer listener completely
  before the test assertions start.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1103
2021-07-14 18:54:16 -04:00
Soby Chacko
13474bdafb Fixing Kafka Binder tests 2021-07-13 19:39:04 -04:00
Soby Chacko
d0b4bdf438 Revert "Temporarily disabling KafkaBinderTests"
This reverts commit 0bedc606ce.
2021-07-13 15:44:28 -04:00
Soby Chacko
0bedc606ce Temporarily disabling KafkaBinderTests 2021-07-13 15:03:40 -04:00
Oleg Zhurakousky
b5cb32767b Fix POMs for 3.2 2021-07-13 16:50:01 +02:00
75 changed files with 2602 additions and 5628 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,10 @@
---
name: Bug report
about: Create a report to help us improve
title: Please create new issues in https://github.com/spring-cloud/spring-cloud-stream/issues
labels: ''
assignees: ''
---
Please create all new issues in https://github.com/spring-cloud/spring-cloud-stream/issues. The Kafka binder repository has been relocated to the core Spring Cloud Stream repo.

README.adoc

@@ -14,7 +14,7 @@ Edit the files in the src/main/asciidoc/ directory instead.
image::https://circleci.com/gh/spring-cloud/spring-cloud-stream-binder-kafka.svg?style=svg["CircleCI", link="https://circleci.com/gh/spring-cloud/spring-cloud-stream-binder-kafka"]
image::https://codecov.io/gh/spring-cloud/spring-cloud-stream-binder-kafka/branch/{github-tag}/graph/badge.svg["codecov", link="https://codecov.io/gh/spring-cloud/spring-cloud-stream-binder-kafka"]
image::https://badges.gitter.im/spring-cloud/spring-cloud-stream-binder-kafka.svg[Gitter, link="https://gitter.im/spring-cloud/spring-cloud-stream-binder-kafka?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge"]
image::https://badges.gitter.im/spring-cloud/spring-cloud-stream.svg[Gitter, link="https://gitter.im/spring-cloud/spring-cloud-stream?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge"]
// ======================================================================================
@@ -26,6 +26,12 @@ It contains information about its design, usage, and configuration options, as w
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== ANNOUNCEMENT
**IMPORTANT: This repository has been migrated into core Spring Cloud Stream - https://github.com/spring-cloud/spring-cloud-stream.
Please create new issues over at the core repository.**
== Apache Kafka Binder
=== Usage
@@ -50,812 +56,20 @@ Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown
</dependency>
----
=== Overview
== Apache Kafka Streams Binder
The following image shows a simplified diagram of how the Apache Kafka binder operates:
=== Usage
.Kafka Binder
image::{github-raw}/docs/src/main/asciidoc/images/kafka-binder.png[width=300,scaledwidth="50%"]
To use Apache Kafka Streams binder, you need to add `spring-cloud-stream-binder-kafka-streams` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
=== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to the binder, see the https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#binding-properties[binding properties] in core documentation.
==== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (for example, `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.
spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (both producers and consumers) passed to all clients created by the binder.
Because these properties are used by both producers and consumers, usage should be restricted to common properties -- for example, security settings.
Unknown Kafka producer or consumer properties provided through this configuration are filtered out and not allowed to propagate.
Properties here supersede any properties set in boot.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.consumerProperties::
Key/Value map of arbitrary Kafka client consumer properties.
In addition to supporting known Kafka consumer properties, unknown consumer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.headers::
The list of custom headers that are transported by the binder.
Only required when communicating with older applications (<= 1.3.x) with a `kafka-clients` version < 0.11.0.0. Newer versions support headers natively.
+
Default: empty.
spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.
spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
See the Kafka documentation for the producer `acks` property.
+
Default: `1`.
spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder configures on topics on which it produces or consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount * concurrency` settings of the producer (if either is larger).
+
Default: `1`.
spring.cloud.stream.kafka.binder.producerProperties::
Key/Value map of arbitrary Kafka client producer properties.
In addition to supporting known Kafka producer properties, unknown producer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
NOTE: If you are using Kafka broker versions prior to 2.4, then this value should be set to at least `1`.
Starting with version 3.0.8, the binder uses `-1` as the default value, which indicates that the broker 'default.replication.factor' property will be used to determine the number of replicas.
Check with your Kafka broker admins to see if there is a policy in place that requires a minimum replication factor. If that is the case, then, typically, `default.replication.factor` will match that value, and `-1` should be used unless you need a replication factor greater than the minimum.
+
Default: `-1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder creates new partitions if required.
If set to `false`, the binder relies on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder fails to start.
+
Default: `false`.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix::
Enables transactions in the binder. See `transaction.id` in the Kafka documentation and https://docs.spring.io/spring-kafka/reference/html/_reference.html#transactions[Transactions] in the `spring-kafka` documentation.
When transactions are enabled, individual `producer` properties are ignored and all producers use the `spring.cloud.stream.kafka.binder.transaction.producer.*` properties.
+
Default: `null` (no transactions).
spring.cloud.stream.kafka.binder.transaction.producer.*::
Global producer properties for producers in a transactional binder.
See `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` and <<kafka-producer-properties>> and the general producer properties supported by all binders.
+
Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
spring.cloud.stream.kafka.binder.considerDownWhenAnyPartitionHasNoLeader::
Flag to set the binder health as `down` when any partition on the topic, regardless of the consumer that is receiving data from it, is found without a leader.
+
Default: `false`.
spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
The file will be moved to the location specified as the value for this property, which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it will be moved to the system's temp directory, as returned by `System.getProperty("java.io.tmpdir")`.
This is also true if this value is present but the directory cannot be found on the filesystem or is not writable.
+
Default: none.
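
As a quick illustration of the transaction-related binder properties above, the following is a minimal sketch; the `tx-` prefix is an arbitrary example value, and the nested `configuration` path under `transaction.producer` is assumed to mirror the regular producer `configuration` property:

[source]
----
# enables transactions in the binder; the prefix value itself is illustrative
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
# producer properties applied to all transactional producers (path assumed)
spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all
----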
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
When `false`, each consumer is assigned a fixed set of partitions based on `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex`.
This requires both the `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex` properties to be set appropriately on each launched instance.
The value of the `spring.cloud.stream.instanceCount` property must typically be greater than 1 in this case.
+
Default: `true`.
ackEachRecord::
When `autoCommitOffset` is `true`, this setting dictates whether to commit the offset after each record is processed.
By default, offsets are committed after all records in the batch of records returned by `consumer.poll()` have been processed.
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
This property is deprecated as of 3.1 in favor of using `ackMode`.
If the `ackMode` is not set and batch mode is not enabled, `RECORD` ackMode will be used.
+
Default: `false`.
autoCommitOffset::
Starting with version 3.1, this property is deprecated.
See `ackMode` for more details on alternatives.
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` of type `org.springframework.kafka.support.Acknowledgment` is present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
+
Default: `true`.
ackMode::
Specify the container ack mode.
This is based on the AckMode enumeration defined in Spring Kafka.
If the `ackEachRecord` property is set to `true` and the consumer is not in batch mode, the `RECORD` ack mode is used; otherwise, the ack mode provided through this property is used.
autoCommitOnError::
In pollable consumers, if set to `true`, it always auto commits on error.
If not set (the default) or set to `false`, it does not auto commit in pollable consumers.
Note that this property is only applicable for pollable consumers.
+
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
See <<reset-offsets>> for more information about this property.
+
Default: `false`.
startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
See <<reset-offsets>> for more information about this property.
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configured by setting the `dlqName` property or by defining a `@Bean` of type `DlqDestinationResolver`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to having Kafka consumer properties, other configuration properties can be passed here.
For example, some properties needed by the application, such as `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through Kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (i.e., `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
This must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
Indicates which standard headers are populated by the inbound channel adapter.
Allowed values: `none`, `id`, `timestamp`, or `both`.
Useful if using native deserialization and the first component to receive a message needs an `id` (such as an aggregator that is configured to use a JDBC message store).
+
Default: `none`
converterBeanName::
The name of a bean that implements `RecordMessageConverter`. Used in the inbound channel adapter to replace the default `MessagingMessageConverter`.
+
Default: `null`
idleEventInterval::
The interval, in milliseconds, between events indicating that no messages have recently been received.
Use an `ApplicationListener<ListenerContainerIdleEvent>` to receive these events.
See <<pause-resume>> for a usage example.
+
Default: `30000`
destinationIsPattern::
When true, the destination is treated as a regular expression `Pattern` used to match topic names by the broker.
When true, topics are not provisioned, and `enableDlq` is not allowed, because the binder does not know the topic names during the provisioning phase.
Note, the time taken to detect new topics that match the pattern is controlled by the consumer property `metadata.max.age.ms`, which (at the time of writing) defaults to 300,000ms (5 minutes).
This can be configured using the `configuration` property above.
+
Default: `false`
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
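
Tying several of the DLQ-related consumer properties above together, a minimal sketch for an assumed binding named `input` with native decoding enabled might look like this; the DLQ topic name and serializer classes are illustrative choices:

[source]
----
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
# optional; without it, errors go to error.<destination>.<group>
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=input-dead-letters
# required when useNativeDecoding is true, per the dlqProducerProperties description above
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.key.serializer=org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.value.serializer=org.apache.kafka.common.serialization.StringSerializer
----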
[[reset-offsets]]
==== Resetting Offsets
When an application starts, the initial position in each assigned partition depends on two properties: `startOffset` and `resetOffsets`.
If `resetOffsets` is `false`, normal Kafka consumer https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset[`auto.offset.reset`] semantics apply.
That is, if there is no committed offset for a partition for the binding's consumer group, the position is `earliest` or `latest`.
By default, bindings with an explicit `group` use `earliest`, and anonymous bindings (with no `group`) use `latest`.
These defaults can be overridden by setting the `startOffset` binding property.
There will be no committed offset(s) the first time the binding is started with a particular `group`.
The other condition where no committed offset exists is if the offset has been expired.
With modern brokers (since 2.1), and default broker properties, the offsets are expired 7 days after the last member leaves the group.
See the https://kafka.apache.org/documentation/#brokerconfigs_offsets.retention.minutes[`offsets.retention.minutes`] broker property for more information.
When `resetOffsets` is `true`, the binder applies similar semantics to those that apply when there is no committed offset on the broker, as if this binding has never consumed from the topic; i.e. any current committed offset is ignored.
Following are two use cases when this might be used.
1. Consuming from a compacted topic containing key/value pairs.
Set `resetOffsets` to `true` and `startOffset` to `earliest`; the binding will perform a `seekToBeginning` on all newly assigned partitions.
2. Consuming from a topic containing events, where you are only interested in events that occur while this binding is running.
Set `resetOffsets` to `true` and `startOffset` to `latest`; the binding will perform a `seekToEnd` on all newly assigned partitions.
IMPORTANT: If a rebalance occurs after the initial assignment, the seeks will only be performed on any newly assigned partitions that were not assigned during the initial assignment.
For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets: true` is ignored.
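
For the first use case above (replaying a compacted topic from the beginning), a minimal property sketch for an assumed binding named `input` might be:

[source]
----
spring.cloud.stream.bindings.input.group=replayGroup
spring.cloud.stream.kafka.bindings.input.consumer.resetOffsets=true
spring.cloud.stream.kafka.bindings.input.consumer.startOffset=earliest
----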
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.bindings.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
Bear in mind that batch mode is not supported with `@StreamListener` - it only works with the newer functional programming model.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
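
As a rough sketch of batch consumption with the functional model, assuming a `Consumer` bean named `input` (so the binding name is `input-in-0`) and `spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true` set:

[source,java]
----
@SpringBootApplication
public class BatchConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchConsumerApplication.class, args);
    }

    @Bean
    public Consumer<List<String>> input() {
        // each invocation receives the whole batch returned by the underlying consumer poll
        return records -> records.forEach(record -> System.out.println("Received: " + record));
    }
}
----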
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
+
Default: `16384`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
In the case of a regular processor (`Function<String, String>` or `Function<Message<?>, Message<?>>`), if the produced key needs to be the same as the incoming key from the topic, this property can be set as follows:
`spring.cloud.stream.kafka.bindings.<output-binding-name>.producer.messageKeyExpression: headers['kafka_receivedMessageKey']`
There is an important caveat to keep in mind for reactive functions.
In that case, it is up to the application to manually copy the headers from the incoming messages to outbound messages.
You can set the header (e.g. `myKey`) and use `headers['myKey']` as suggested above or, for convenience, simply set the `KafkaHeaders.MESSAGE_KEY` header, in which case you do not need to set this property at all.
+
Default: `none`.
headerPatterns::
A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka `Headers` in the `ProducerRecord`.
Patterns can begin or end with the wildcard character (asterisk).
Patterns can be negated by prefixing with `!`.
Matching stops after the first match (positive or negative).
For example `!ask,as*` will pass `ash` but not `ask`.
`id` and `timestamp` are never mapped.
+
Default: `*` (all headers - except the `id` and `timestamp`)
configuration::
Map with a key/value pair containing generic Kafka producer properties.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), the binder fails to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy`, `lz4` and `zstd`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
closeTimeout::
Timeout, in seconds, to wait when closing the producer.
+
Default: `30`
allowNonTransactional::
Normally, all output bindings associated with a transactional binder will publish in a new transaction, if one is not already in process.
This property allows you to override that behavior.
If set to true, records published to this output binding will not be run in a transaction, unless one is already in process.
+
Default: `false`
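The following is a minimal sketch of consuming the results sent to the `recordMetadataChannel` described above; the channel bean name (`sendResults`) is only an assumption for illustration and must match the value you configure for `recordMetadataChannel`.
[source, java]
----
@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sent) {
    // The RECORD_METADATA header carries the partition and offset of the written record.
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    System.out.println("Record written to " + meta.topic() + "-" + meta.partition() + "@" + meta.offset());
}
----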
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
Use the corresponding input channel name for your example.
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
----
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
====== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
[source,bash]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
--spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
====== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
The following properties can be used to configure the login context of the Kafka client:
spring.cloud.stream.kafka.binder.jaas.loginModule::
The login module name. It does not need to be set in normal cases.
+
Default: `com.sun.security.auth.module.Krb5LoginModule`.
spring.cloud.stream.kafka.binder.jaas.controlFlag::
The control flag of the login module.
+
Default: `required`.
spring.cloud.stream.kafka.binder.jaas.options::
A map of key/value pairs containing the login module options.
+
Default: Empty map.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:
[source,bash]
----
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.autoCreateTopics=false \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
--spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
--spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
--spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
--spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----
The preceding example represents the equivalent of the following JAAS file:
[source]
----
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="kafka-client-1@EXAMPLE.COM";
};
----
If the topics required already exist on the broker or will be created by an administrator, autocreation can be turned off and only client JAAS properties need to be sent.
NOTE: Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the `-Djava.security.auth.login.config` system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
NOTE: Be careful when using the `autoCreateTopics` and `autoAddPartitions` properties with Kerberos.
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.
The following simple application shows how to pause and resume:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }

}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions, because the retries run in the original transaction, which may be rolled back, in which case any published records are rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transaction (e.g. `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then you would use normal Spring transaction support, e.g. `TransactionTemplate` or `@Transactional`, for example:
====
[source, java]
----
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }

}
----
====
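If you prefer `TransactionTemplate` over `@Transactional`, a minimal sketch (reusing the transaction manager bean defined earlier; the class and method names here are illustrative) could look like this:
====
[source, java]
----
public static class TemplateSender {

    private final TransactionTemplate transactionTemplate;

    public TemplateSender(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        this.transactionTemplate.executeWithoutResult(status ->
                stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff))));
    }

}
----
====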
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling[this section on error handling] for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
* `failedMessage`: The Spring Messaging `Message<?>` that failed to be sent.
* `record`: The raw `ProducerRecord` that was created from the `failedMessage`.
There is no automatic handling of producer exceptions (such as sending to a <<kafka-dlq-processing, Dead-Letter queue>>).
You can consume these exceptions with your own Spring Integration flow.
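For instance, a minimal sketch of such a handler is shown below; the error channel name used here (`myTopic.errors`) is only an assumption for illustration and depends on your destination and error-channel configuration.
====
[source, java]
----
@ServiceActivator(inputChannel = "myTopic.errors")
public void handleSendFailure(ErrorMessage errorMessage) {
    // The payload for a send failure is a KafkaSendFailureException.
    if (errorMessage.getPayload() instanceof KafkaSendFailureException) {
        KafkaSendFailureException failure = (KafkaSendFailureException) errorMessage.getPayload();
        System.out.println("Failed to send " + failure.getRecord() + ": " + failure.getCause());
    }
}
----
====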
[[kafka-metrics]]
=== Kafka Metrics
The Kafka binder module exposes the following metric:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not been yet consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer library.
The binder creates the `KafkaBinderMetrics` bean if Micrometer is on the classpath and no other such bean is provided by the application.
The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
You can prevent `KafkaBinderMetrics` from creating the necessary infrastructure (such as consumers) and reporting the metrics by providing the following component in the application:
```
@Component
class NoOpBindingMeters {

    NoOpBindingMeters(MeterRegistry registry) {
        registry.config().meterFilter(
                MeterFilter.denyNameStartsWith(KafkaBinderMetrics.OFFSET_LAG_METRIC_NAME));
    }

}
```
More details on how to suppress meters selectively can be found https://micrometer.io/docs/concepts#_meter_filters[here].
[[kafka-tombstones]]
=== Tombstone Records (null record values)
When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
====
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
        @Payload(required = false) Customer customer) {
    // customer is null if a tombstone record
    ...
}
----
====
[[rebalance-listener]]
=== Using a KafkaRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
----
public interface KafkaBindingRebalanceListener {

    /**
     * Invoked by the container before any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked by the container after any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked when partitions are initially assigned or after a rebalance.
     * Applications might only want to perform seek operations on an initial assignment.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     * @param initial true if this is the initial assignment.
     */
    default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions, boolean initial) {
    }

}
----
====
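For example, here is a minimal sketch of a listener bean that seeks each assigned partition to the beginning, but only on the initial assignment (the seek logic is illustrative):
====
[source, java]
----
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {

            // Only seek on the very first assignment after startup.
            if (initial) {
                consumer.seekToBeginning(partitions);
            }
        }

    };
}
----
====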
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration
If you want advanced customization of consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka,
you can implement the following customizers.
* `ConsumerConfigCustomizer`
* `ProducerConfigCustomizer`
Both of these interfaces provide a way to configure the config map used for consumer and producer properties.
For example, if you want to gain access to a bean that is defined at the application level, you can inject that in the implementation of the `configure` method.
When the binder discovers that these customizers are available as beans, it will invoke the `configure` method right before creating the consumer and producer factories.
Both of these interfaces also provide access to both the binding and destination names so that they can be accessed while customizing producer and consumer properties.
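As an illustration, here is a sketch of such a customizer, assuming the `configure` method receives the configuration map along with the binding and destination names; the binding name and property value are only examples.
====
[source, java]
----
@Bean
public ConsumerConfigCustomizer consumerConfigCustomizer() {
    return (consumerProperties, bindingName, destination) -> {
        // Illustrative only: raise max.poll.records for one specific binding.
        if ("process-in-0".equals(bindingName)) {
            consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
        }
    };
}
----
====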
[[admin-client-config-customization]]
=== Customizing AdminClient Configuration
As with consumer and producer config customization above, applications can also customize the configuration for admin clients by providing an `AdminClientConfigCustomizer`.
The `configure` method of `AdminClientConfigCustomizer` provides access to the admin client properties, which you can use to apply further customization.
The binder's Kafka topic provisioner gives the properties provided through this customizer the highest precedence.
Here is an example of providing this customizer bean:
```
@Bean
public AdminClientConfigCustomizer adminClientConfigCustomizer() {
    return props -> {
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
    };
}
```
= Appendices
[appendix]
[[building]]

View File

@@ -7,7 +7,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.7-SNAPSHOT</version>
<version>4.0.0-SNAPSHOT</version>
</parent>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>

View File

@@ -11,8 +11,43 @@ image::https://badges.gitter.im/spring-cloud/spring-cloud-stream-binder-kafka.sv
// ======================================================================================
//= Overview
include::overview.adoc[]
== Apache Kafka Binder
=== Usage
To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----
== Apache Kafka Streams Binder
=== Usage
To use Apache Kafka Streams binder, you need to add `spring-cloud-stream-binder-kafka-streams` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
----
= Appendices
[appendix]

View File

@@ -277,7 +277,7 @@ public Function<KTable<String, String>, KStream<String, String>> bar() {
===== Multiple Output Bindings
Kafka Streams allows to write outbound data into multiple topics. This feature is known as branching in Kafka Streams.
Kafka Streams allows writing outbound data into multiple topics. This feature is known as branching in Kafka Streams.
When using multiple output bindings, you need to provide an array of KStream (`KStream[]`) as the outbound return type.
Here is an example:
@@ -291,21 +291,30 @@ public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
return input -> {
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.split()
.branch(isEnglish)
.branch(isFrench)
.branch(isSpanish)
.noDefaultBranch();
return stringKStreamMap.values().toArray(new KStream[0]);
};
}
----
The programming model remains the same, however the outbound parameterized type is `KStream[]`.
The default output binding names are `process-out-0`, `process-out-1`, `process-out-2` respectively.
The reason why the binder generates three output bindings is because it detects the length of the returned `KStream` array.
The default output binding names are `process-out-0`, `process-out-1`, `process-out-2` respectively for the function above.
The reason why the binder generates three output bindings is because it detects the length of the returned `KStream` array as three.
Note that in this example, we provide a `noDefaultBranch()`; if we have used `defaultBranch()` instead, that would have required an extra output binding, essentially returning a `KStream` array of length four.
===== Summary of Function based Programming Styles for Kafka Streams
@@ -421,213 +430,6 @@ public Function<KTable<String, String>, KStream<String, String>> bar() {
You can compose them as `foo|bar`, but keep in mind that the second function (`bar` in this case) must have a `KTable` as input since the first function (`foo`) has `KTable` as output.
==== Imperative programming model.
Starting with `3.1.0` version of the binder, we recommend using the functional programming model described above for Kafka Streams binder based applications.
The support for `StreamListener` is deprecated starting with `3.1.0` of Spring Cloud Stream.
Below, we are providing some details on the `StreamListener` based Kafka Streams processors as a reference.
Following is the equivalent of the Word count example using `StreamListener`.
[source]
----
@SpringBootApplication
@EnableBinding(KafkaStreamsProcessor.class)
public class WordCountProcessorApplication {
@StreamListener("input")
@SendTo("output")
public KStream<?, WordCount> process(KStream<?, String> input) {
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-multi"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));
}
public static void main(String[] args) {
SpringApplication.run(WordCountProcessorApplication.class, args);
}
----
As you can see, this is a bit more verbose since you need to provide `EnableBinding` and the other extra annotations like `StreamListener` and `SendTo` to make it a complete application.
`EnableBinding` is where you specify your binding interface that contains your bindings.
In this case, we are using the stock `KafkaStreamsProcessor` binding interface that has the following contracts.
[source]
----
public interface KafkaStreamsProcessor {
@Input("input")
KStream<?, ?> input();
@Output("output")
KStream<?, ?> output();
}
----
Binder will create bindings for the input `KStream` and output `KStream` since you are using a binding interface that contains those declarations.
In addition to the obvious differences in the programming model offered in the functional style, one particular thing that needs to be mentioned here is that the binding names are what you specify in the binding interface.
For example, in the above application, since we are using `KafkaStreamsProcessor`, the binding names are `input` and `output`.
Binding properties need to use those names. For instance `spring.cloud.stream.bindings.input.destination`, `spring.cloud.stream.bindings.output.destination` etc.
Keep in mind that this is fundamentally different from the functional style since there the binder generates binding names for the application.
This is because the application does not provide any binding interfaces in the functional model using `EnableBinding`.
Here is another example of a sink where we have two inputs.
[source]
----
@EnableBinding(KStreamKTableBinding.class)
.....
.....
@StreamListener
public void process(@Input("inputStream") KStream<String, PlayEvent> playEvents,
@Input("inputTable") KTable<Long, Song> songTable) {
....
....
}
interface KStreamKTableBinding {
@Input("inputStream")
KStream<?, ?> inputStream();
@Input("inputTable")
KTable<?, ?> inputTable();
}
----
Following is the `StreamListener` equivalent of the same `BiFunction` based processor that we saw above.
[source]
----
@EnableBinding(KStreamKTableBinding.class)
....
....
@StreamListener
@SendTo("output")
public KStream<String, Long> process(@Input("input") KStream<String, Long> userClicksStream,
@Input("inputTable") KTable<String, String> userRegionsTable) {
....
....
}
interface KStreamKTableBinding extends KafkaStreamsProcessor {
@Input("inputX")
KTable<?, ?> inputTable();
}
----
Finally, here is the `StreamListener` equivalent of the application with three inputs and curried functions.
[source]
----
@EnableBinding(CustomGlobalKTableProcessor.class)
...
...
@StreamListener
@SendTo("output")
public KStream<Long, EnrichedOrder> process(
@Input("input-1") KStream<Long, Order> ordersStream,
@Input("input-2") GlobalKTable<Long, Customer> customers,
@Input("input-3") GlobalKTable<Long, Product> products) {
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(
customers, (orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order));
return customerOrdersStream.join(products,
(orderId, customerOrder) -> customerOrder.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
});
}
interface CustomGlobalKTableProcessor {
@Input("input-1")
KStream<?, ?> input1();
@Input("input-2")
GlobalKTable<?, ?> input2();
@Input("input-3")
GlobalKTable<?, ?> input3();
@Output("output")
KStream<?, ?> output();
}
----
You might notice that the above two examples are even more verbose since in addition to provide `EnableBinding`, you also need to write your own custom binding interface as well.
Using the functional model, you can avoid all those ceremonial details.
Before we move on from looking at the general programming model offered by Kafka Streams binder, here is the `StreamListener` version of multiple output bindings.
[source]
----
EnableBinding(KStreamProcessorWithBranches.class)
public static class WordCountProcessorApplication {
@Autowired
private TimeWindows timeWindows;
@StreamListener("input")
@SendTo({"output1","output2","output3"})
public KStream<?, WordCount>[] process(KStream<Object, String> input) {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(timeWindows)
.count(Materialized.as("WordCounts-1"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
interface KStreamProcessorWithBranches {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
}
----
To recap, we have reviewed the various programming model choices when using the Kafka Streams binder.
The binder provides binding capabilities for `KStream`, `KTable` and `GlobalKTable` on the input.
`KTable` and `GlobalKTable` bindings are only available on the input.
Binder supports both input and output bindings for `KStream`.
The upshot of the programming model of Kafka Streams binder is that the binder provides you the flexibility of going with a fully functional programming model or using the `StreamListener` based imperative approach.
=== Ancillaries to the programming model
==== Multiple Kafka Streams processors within a single application
@@ -668,7 +470,7 @@ This is also true when you have a single Kafka Streams processor and other types
Application id is a mandatory property that you need to provide for a Kafka Streams application.
Spring Cloud Stream Kafka Streams binder allows you to configure this application id in multiple ways.
If you only have one single processor or `StreamListener` in the application, then you can set this at the binder level using the following property:
If you only have one single processor in the application, then you can set this at the binder level using the following property:
`spring.cloud.stream.kafka.streams.binder.applicationId`.
@@ -703,33 +505,6 @@ and
`spring.cloud.stream.kafka.streams.binder.functions.anotherProcess.applicationId`
In the case of `StreamListener`, you need to set this on the first input binding on the processor.
For e.g. imagine that you have the following two `StreamListener` based processors.
```
@StreamListener
@SendTo("output")
public KStream<String, String> process(@Input("input") <KStream<Object, String>> input) {
...
}
@StreamListener
@SendTo("anotherOutput")
public KStream<String, String> anotherProcess(@Input("anotherInput") <KStream<Object, String>> input) {
...
}
```
Then you must set the application id for this using the following binding property.
`spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId`
and
`spring.cloud.stream.kafka.streams.bindings.anotherInput.consumer.applicationId`
For function based model also, this approach of setting application id at the binding level will work.
However, setting per function at the binder level as we have seen above is much easier if you are using the functional model.
@@ -740,14 +515,12 @@ If the application does not provide an application ID, then in that case the bin
This is convenient in development scenarios as it avoids the need for explicitly providing the application ID.
The generated application ID in this manner will be static over application restarts.
In the case of functional model, the generated application ID will be the function bean name followed by the literal `applicationID`, for e.g `process-applicationID` if `process` is the function bean name.
In the case of `StreamListener`, instead of using the function bean name, the generated application ID will be use the containing class name followed by the method name followed by the literal `applicationId`.
====== Summary of setting Application ID
* By default, binder will auto generate the application ID per function or `StreamListener` methods.
* By default, binder will auto generate the application ID per function methods.
* If you have a single processor, then you can use `spring.kafka.streams.applicationId`, `spring.application.name` or `spring.cloud.stream.kafka.streams.binder.applicationId`.
* If you have multiple processors, then application ID can be set per function using the property - `spring.cloud.stream.kafka.streams.binder.functions.<function-name>.applicationId`.
In the case of `StreamListener`, this can be done using `spring.cloud.stream.kafka.streams.bindings.input.applicationId`, assuming that the input binding name is `input`.
==== Overriding the default binding names generated by the binder with the functional style
@@ -807,7 +580,7 @@ Keys are always deserialized using native Serdes.
For values, by default, deserialization on the inbound is natively performed by Kafka.
Please note that this is a major change on default behavior from previous versions of Kafka Streams binder where the deserialization was done by the framework.
Kafka Streams binder will try to infer matching `Serde` types by looking at the type signature of `java.util.function.Function|Consumer` or `StreamListener`.
Kafka Streams binder will try to infer matching `Serde` types by looking at the type signature of `java.util.function.Function|Consumer`.
Here is the order that it matches the Serdes.
* If the application provides a bean of type `Serde` and if the return type is parameterized with the actual type of the incoming key or value type, then it will use that `Serde` for inbound deserialization.
@@ -1007,7 +780,7 @@ It is always recommended to explicitly create a DLQ topic for each input binding
==== DLQ per input consumer binding
The property `spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler` is applicable for the entire application.
This implies that if there are multiple functions or `StreamListener` methods in the same application, this property is applied to all of them.
This implies that if there are multiple functions in the same application, this property is applied to all of them.
However, if you have multiple processors or multiple input bindings within a single processor, then you can use the finer-grained DLQ control that the binder provides per input consumer binding.
If you have the following processor,
@@ -1052,7 +825,7 @@ If you set a consumer binding's `dlqPartitions` property to a value greater than
A couple of things to keep in mind when using the exception handling feature in Kafka Streams binder.
* The property `spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler` is applicable for the entire application.
This implies that if there are multiple functions or `StreamListener` methods in the same application, this property is applied to all of them.
This implies that if there are multiple functions in the same application, this property is applied to all of them.
* The exception handling for deserialization works consistently with native deserialization and framework provided message conversion.
==== Handling Production Exceptions in the Binder
@@ -1713,8 +1486,9 @@ spring.cloud.stream.bindings.enrichOrder-out-0.binder=kafka1 #kstream
=== State Cleanup
By default, the `Kafkastreams.cleanup()` method is called when the binding is stopped.
See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
By default, no local state is cleaned up when the binding is stopped.
This is the same behavior effective from Spring Kafka version 2.7.
See https://docs.spring.io/spring-kafka/reference/html/#streams-config[Spring Kafka documentation] for more details.
To modify this behavior simply add a single `CleanupConfig` `@Bean` (configured to clean up on start, stop, or neither) to the application context; the bean will be detected and wired into the factory bean.
=== Kafka Streams topology visualization
@@ -1874,6 +1648,102 @@ When there are multiple bindings present on a single function, invoking these op
This is because all the bindings on a single function are backed by the same `StreamsBuilderFactoryBean`.
Therefore, for the function above, either `function-in-0` or `function-out-0` will work.
=== Manually starting Kafka Streams processors
Spring Cloud Stream Kafka Streams binder offers an abstraction called `StreamsBuilderFactoryManager` on top of the `StreamsBuilderFactoryBean` from Spring for Apache Kafka.
This manager API is used for controlling the multiple `StreamsBuilderFactoryBean` per processor in a binder based application.
Therefore, when using the binder, if you manually want to control the auto starting of the various `StreamsBuilderFactoryBean` objects in the application, you need to use `StreamsBuilderFactoryManager`.
You can use the property `spring.kafka.streams.auto-startup` and set this to `false` in order to turn off auto starting of the processors.
Then, in the application, you can use something as below to start the processors using `StreamsBuilderFactoryManager`.
```
@Bean
public ApplicationRunner runner(StreamsBuilderFactoryManager sbfm) {
return args -> {
sbfm.start();
};
}
```
This feature is handy, when you want your application to start in the main thread and let Kafka Streams processors start separately.
For example, when you have a large state store that needs to be restored, if the processors are started normally, as is the default case, this may block your application from starting.
If you are using some sort of liveness probe mechanism (for example on Kubernetes), it may think that the application is down and attempt a restart.
In order to correct this, you can set `spring.kafka.streams.auto-startup` to `false` and follow the approach above.
Keep in mind that, when using the Spring Cloud Stream binder, you are not directly dealing with `StreamsBuilderFactoryBean` from Spring for Apache Kafka, rather `StreamsBuilderFactoryManager`, as the `StreamsBuilderFactoryBean` objects are internally managed by the binder.
=== Manually starting Kafka Streams processors selectively
While the approach laid out above will unconditionally apply auto start `false` to all the Kafka Streams processors in the application through `StreamsBuilderFactoryManager`, it is often desirable that only individually selected Kafka Streams processors are not auto started.
For instance, let us assume that you have three different functions (processors) in your application and for one of the processors, you do not want to start it as part of the application startup.
Here is an example of such a situation.
```
@Bean
public Function<KStream<?, ?>, KStream<?, ?>> process1() {
}
@Bean
public Consumer<KStream<?, ?>> process2() {
}
@Bean
public BiFunction<KStream<?, ?>, KTable<?, ?>, KStream<?, ?>> process3() {
}
```
In this scenario above, if you set `spring.kafka.streams.auto-startup` to `false`, then none of the processors will auto start during the application startup.
In that case, you have to programmatically start them as described above by calling `start()` on the underlying `StreamsBuilderFactoryManager`.
However, if we have a use case to selectively disable only one processor, then you have to set `auto-startup` on the individual binding for that processor.
Let us assume that we don't want our `process3` function to auto start.
This is a `BiFunction` with two input bindings - `process3-in-0` and `process3-in-1`.
In order to avoid auto start for this processor, you can pick any of these input bindings and set `auto-startup` on them.
It does not matter which binding you pick; because both bindings share the same factory bean, setting it on one is sufficient, although setting it on both may make the intent clearer.
Here is the Spring Cloud Stream property that you can use to disable auto startup for this processor.
```
spring.cloud.stream.bindings.process3-in-0.consumer.auto-startup: false
```
or
```
spring.cloud.stream.bindings.process3-in-1.consumer.auto-startup: false
```
Then, you can manually start the processor either using the REST endpoint or using the `BindingsEndpoint` API as shown below.
For this, you need to ensure that you have the Spring Boot actuator dependency on the classpath.
```
curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/process3-in-0
```
or
```
@Autowired
BindingsEndpoint endpoint;
@Bean
public ApplicationRunner runner() {
return args -> {
endpoint.changeState("process3-in-0", State.STARTED);
};
}
```
See https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#binding_visualization_control[this section] from the reference docs for more details on this mechanism.
NOTE: When controlling the bindings by disabling `auto-startup` as described in this section, please note that this is only available for consumer bindings.
In other words, if you use the producer binding, `process3-out-0`, that does not have any effect in terms of disabling the auto starting of the processor, although this producer binding uses the same `StreamsBuilderFactoryBean` as the consumer bindings.
=== Tracing using Spring Cloud Sleuth
When Spring Cloud Sleuth is on the classpath of a Spring Cloud Stream Kafka Streams binder based application, both its consumer and producer are automatically instrumented with tracing information.
@@ -1998,7 +1868,7 @@ Default: `logAndFail`
applicationId::
Convenient way to set the application.id for the Kafka Streams application globally at the binder level.
If the application contains multiple functions or `StreamListener` methods, then the application id should be set differently.
If the application contains multiple functions, then the application id should be set differently.
See above where setting the application id is discussed in detail.
+
Default: application will generate a static application ID. See the application ID section for more details.
@@ -2062,7 +1932,7 @@ The following properties are available for Kafka Streams consumers and must be p
For convenience, if there are multiple input bindings and they all require a common value, that can be configured by using the prefix `spring.cloud.stream.kafka.streams.default.consumer.`.
applicationId::
Setting application.id per input binding. This is only preferred for `StreamListener` based processors, for function based processors see other approaches outlined above.
Setting application.id per input binding.
+
Default: See above.
@@ -2136,7 +2006,7 @@ In Kafka Streams, you can control of the number of threads a processor can creat
This, you can do using the various `configuration` options described above under binder, functions, producer or consumer level.
You can also use the `concurrency` property that core Spring Cloud Stream provides for this purpose.
When using this, you need to use it on the consumer.
When you have more than one input bindings either in a function or `StreamListener`, set this on the first input binding.
When you have more than one input binding, set this on the first input binding.
For e.g. when setting `spring.cloud.stream.bindings.process-in-0.consumer.concurrency`, it will be translated as `num.stream.threads` by the binder.
If you have multiple processors and one processor defines binding level concurrency, but not the others, those ones with no binding level concurrency will default back to the binder wide property specified through
`spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads`.

View File

@@ -40,7 +40,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
The binder currently uses the Apache Kafka `kafka-clients` version `3.1.0`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -324,6 +324,12 @@ When using a transactional binder, the offset of a recovered record (e.g. when r
Setting this property to `false` suppresses committing the offset of recovered record.
+
Default: true.
commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to express error handlers, if the application does not want to use a `ListenerContainerCustomizer` and then check the destination/group combination to set an error handler.
+
Default: none.
[[reset-offsets]]
==== Resetting Offsets
@@ -358,8 +364,6 @@ Starting with version 3.0, when `spring.cloud.stream.binding.<name>.consumer.bat
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
Bear in mind that batch mode is not supported with `@StreamListener` - it only works with the newer functional programming model.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Ackowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
@@ -654,41 +658,10 @@ See this https://github.com/spring-cloud/spring-cloud-stream-samples/tree/main/m
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
This is facilitated by managing the binding lifecycle as shown in **Binding visualization and control** in the Spring Cloud Stream documentation, using `State.PAUSED` and `State.RESUMED`.
To resume, you can use an `ApplicationListener` (or `@EventListener` method) to receive `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.
The following simple application shows how to pause and resume:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
@StreamListener(Sink.INPUT)
public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
System.out.println(in);
consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
}
@Bean
public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
return event -> {
System.out.println(event);
if (event.getConsumer().paused().size() > 0) {
event.getConsumer().resume(event.getConsumer().paused());
}
};
}
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
@@ -852,6 +825,88 @@ public interface KafkaBindingRebalanceListener {
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
[[retry-and-dlq-processing]]
=== Retry and Dead Letter Processing
By default, when you configure retry (e.g. `maxAttempts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
There are situations where it is preferable to move this functionality to the listener container, such as:
* The aggregate of retries and delays will exceed the consumer's `max.poll.interval.ms` property, potentially causing a partition rebalance.
* You wish to publish the dead letter to a different Kafka cluster.
* You wish to add retry listeners to the error handler.
* ...
To configure moving this functionality from the binder to the container, define a `@Bean` of type `ListenerContainerWithDlqAndRetryCustomizer`.
This interface has the following methods:
====
[source, java]
----
/**
* Configure the container.
* @param container the container.
* @param destinationName the destination name.
* @param group the group.
* @param dlqDestinationResolver a destination resolver for the dead letter topic (if
* enableDlq).
* @param backOff the backOff using retry properties (if configured).
* @see #retryAndDlqInBinding(String, String)
*/
void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group,
@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff);
/**
* Return false to move retries and DLQ from the binding to a customized error handler
* using the retry metadata and/or a {@code DeadLetterPublishingRecoverer} when
* configured via
* {@link #configure(AbstractMessageListenerContainer, String, String, BiFunction, BackOff)}.
* @param destinationName the destination name.
* @param group the group.
* @return true to retain retries and DLQ in the binding; false to move them to the container.
*/
default boolean retryAndDlqInBinding(String destinationName, String group) {
return true;
}
----
====
The destination resolver and `BackOff` are created from the binding properties (if configured).
You can then use these to create a custom error handler and dead letter publisher; for example:
====
[source, java]
----
@Bean
ListenerContainerWithDlqAndRetryCustomizer cust(KafkaTemplate<?, ?> template) {
    return new ListenerContainerWithDlqAndRetryCustomizer() {

        @Override
        public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
                String group,
                @Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
                @Nullable BackOff backOff) {

            if (destinationName.equals("topicWithLongTotalRetryConfig")) {
                ConsumerRecordRecoverer dlpr = new DeadLetterPublishingRecoverer(template,
                        dlqDestinationResolver);
                container.setCommonErrorHandler(new DefaultErrorHandler(dlpr, backOff));
            }
        }

        @Override
        public boolean retryAndDlqInBinding(String destinationName, String group) {
            return !destinationName.contains("topicWithLongTotalRetryConfig");
        }

    };
}
----
====
Now, only a single retry delay needs to be greater than the consumer's `max.poll.interval.ms` property.
[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration
@@ -883,3 +938,27 @@ public AdminClientConfigCustomizer adminClientConfigCustomizer() {
};
}
```
[[custom-kafka-binder-health-indicator]]
=== Custom Kafka Binder Health Indicator
Kafka binder activates a default health indicator when Spring Boot actuator is on the classpath.
This health indicator checks the health of the binder and any communication issues with the Kafka broker.
If an application wants to disable this default health check implementation and include a custom one, it can provide an implementation of the `KafkaBinderHealth` interface.
`KafkaBinderHealth` is a marker interface that extends `HealthIndicator`.
The custom implementation must provide an implementation of the `health()` method and must be present in the application configuration as a bean.
When the binder discovers the custom implementation, it uses that instead of the default implementation.
Here is an example of such a custom implementation bean in the application.
```
@Bean
public KafkaBinderHealth kafkaBinderHealthIndicator() {
return new KafkaBinderHealth() {
@Override
public Health health() {
// custom implementation details.
}
};
}
```

View File

@@ -44,6 +44,8 @@ include::partitions.adoc[]
include::kafka-streams.adoc[]
include::tips.adoc[]
= Appendices
[appendix]
include::building.adoc[]

View File

@@ -0,0 +1,865 @@
== Tips, Tricks and Recipes
=== Simple DLQ with Kafka
==== Problem Statement
As a developer, I want to write a consumer application that processes records from a Kafka topic.
However, if some error occurs in processing, I don't want the application to stop completely.
Instead, I want to send the record in error to a DLT (Dead-Letter-Topic) and then continue processing new records.
==== Solution
The solution for this problem is to use the DLQ feature in Spring Cloud Stream.
For the purposes of this discussion, let us assume that the following is our processor function.
```
@Bean
public Consumer<byte[]> processData() {
    return s -> {
        throw new RuntimeException();
    };
}
```
This is a very trivial function that throws an exception for all the records that it processes, but you can take this function and extend it to any other similar situations.
In order to send the records in error to a DLT, we need to provide the following configuration.
```
spring.cloud.stream:
bindings:
processData-in-0:
group: my-group
destination: input-topic
kafka:
bindings:
processData-in-0:
consumer:
enableDlq: true
dlqName: input-topic-dlq
```
In order to activate DLQ, the application must provide a group name.
Anonymous consumers cannot use the DLQ facilities.
We also need to enable DLQ by setting the `enableDlq` property on the Kafka consumer binding to `true`.
Finally, we can optionally provide the DLT name by setting `dlqName` on the Kafka consumer binding, which otherwise defaults to `input-topic-dlq.my-group.error` in this case.
Note that in the example consumer provided above, the type of the payload is `byte[]`.
By default, the DLQ producer in the Kafka binder expects a payload of type `byte[]`.
If that is not the case, then we need to configure the proper serializer.
For example, let us re-write the consumer function as below:
```
@Bean
public Consumer<String> processData() {
return s -> {
throw new RuntimeException();
};
}
```
Now we need to tell Spring Cloud Stream how we want to serialize the data when writing to the DLT.
Here is the modified configuration for this scenario:
```
spring.cloud.stream:
bindings:
processData-in-0:
group: my-group
destination: input-topic
kafka:
bindings:
processData-in-0:
consumer:
enableDlq: true
dlqName: input-topic-dlq
dlqProducerProperties:
configuration:
value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
=== DLQ with Advanced Retry Options
==== Problem Statement
This is similar to the recipe above, but as a developer I would like to configure the way retries are handled.
==== Solution
If you followed the above recipe, then you get the default retry options built into the Kafka binder when the processing encounters an error.
By default, the binder retries for a maximum of 3 attempts, with a one-second initial delay, a 2.0 back-off multiplier, and a maximum delay of 10 seconds.
You can change all of these configurations as shown below:
```
spring.cloud.stream.bindings.processData-in-0.consumer.maxAttempts
spring.cloud.stream.bindings.processData-in-0.consumer.backOffInitialInterval
spring.cloud.stream.bindings.processData-in-0.consumer.backOffMultiplier
spring.cloud.stream.bindings.processData-in-0.consumer.backOffMaxInterval
```
If you want, you can also provide a list of retryable exceptions by providing a map of boolean values.
For example,
```
spring.cloud.stream.bindings.processData-in-0.consumer.retryableExceptions.java.lang.IllegalStateException=true
spring.cloud.stream.bindings.processData-in-0.consumer.retryableExceptions.java.lang.IllegalArgumentException=false
```
By default, any exceptions not listed in the map above will be retried.
If that is not desired, then you can disable that by providing,
```
spring.cloud.stream.bindings.processData-in-0.consumer.defaultRetryable=false
```
You can also provide your own `RetryTemplate` and mark it as `@StreamRetryTemplate` which will be scanned and used by the binder.
This is useful when you want more sophisticated retry strategies and policies.
If you have multiple `@StreamRetryTemplate` beans, then you can specify which one your binding wants by using the property,
```
spring.cloud.stream.bindings.processData-in-0.consumer.retry-template-name=<your-retry-template-bean-name>
```
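For example, here is a minimal sketch of such a bean; the retry policy and back-off values are illustrative assumptions.
```
@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    // Up to 5 attempts with an exponential back off between attempts.
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(1000);
    backOffPolicy.setMultiplier(2.0);
    backOffPolicy.setMaxInterval(10000);
    retryTemplate.setBackOffPolicy(backOffPolicy);
    return retryTemplate;
}
```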
=== Handling Deserialization errors with DLQ
==== Problem Statement
I have a processor that encounters a deserialization exception in the Kafka consumer.
I would expect that the Spring Cloud Stream DLQ mechanism will catch that scenario, but it does not.
How can I handle this?
==== Solution
The normal DLQ mechanism offered by Spring Cloud Stream will not help when the Kafka consumer throws an irrecoverable deserialization exception.
This is because this exception occurs even before the consumer's `poll()` method returns.
Spring for Apache Kafka project offers some great ways to help the binder with this situation.
Let us explore those.
Assuming this is our function:
```
@Bean
public Consumer<String> functionName() {
return s -> {
System.out.println(s);
};
}
```
It is a trivial function that takes a `String` parameter.
We want to bypass the message converters provided by Spring Cloud Stream and want to use native deserializers instead.
In the case of `String` types, it does not make much sense, but for more complex types like AVRO etc. you have to rely on external deserializers and therefore want to delegate the conversion to Kafka.
Now when the consumer receives the data, let us assume that there is a bad record that causes a deserialization error, for example because someone passed an `Integer` instead of a `String`.
In that case, if you don't do something in the application, the exception is propagated through the chain and your application eventually exits.
In order to handle this, you can add a `ListenerContainerCustomizer` `@Bean` that configures a `SeekToCurrentErrorHandler`.
This `SeekToCurrentErrorHandler` is configured with a `DeadLetterPublishingRecoverer`.
We also need to configure an `ErrorHandlingDeserializer` for the consumer.
That sounds like a lot of complex things, but in reality, it boils down to these 3 beans in this case.
```
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
return (container, dest, group) -> {
container.setErrorHandler(errorHandler);
};
}
```
```
@Bean
public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
return new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
}
```
```
@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
return new DeadLetterPublishingRecoverer(bytesTemplate);
}
```
Let us analyze each of them.
The first one is the `ListenerContainerCustomizer` bean that takes a `SeekToCurrentErrorHandler`.
The container is now customized with that particular error handler.
You can learn more about container customization https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_advanced_consumer_configuration[here].
The second bean is the `SeekToCurrentErrorHandler`, configured to publish failed records to a `DLT`.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#seek-to-current[here] for more details on `SeekToCurrentErrorHandler`.
The third bean is the `DeadLetterPublishingRecoverer` that is ultimately responsible for sending to the `DLT`.
By default, the `DLT` topic is named `<original-topic-name>.DLT`.
You can change that though.
See the https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters[docs] for more details.
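If you need a different destination than the default, the recoverer also accepts a destination resolver; here is a sketch under the assumption that all failed records should go to a single custom topic named `my-dlt-topic` (a hypothetical name).
```
@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
    // route each failed record to the same partition number of an assumed custom DLT topic
    return new DeadLetterPublishingRecoverer(bytesTemplate,
            (record, exception) -> new TopicPartition("my-dlt-topic", record.partition()));
}
```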
We also need to configure an https://docs.spring.io/spring-kafka/docs/current/reference/html/#error-handling-deserializer[ErrorHandlingDeserializer] through application config.
The `ErrorHandlingDeserializer` delegates to the actual deserializer.
When deserialization fails, it sets the key/value of the record to null and includes the raw bytes of the message.
It then stores the exception in a header and passes the record to the listener, which in turn invokes the registered error handler.
Following is the configuration required:
```
spring.cloud.stream:
  function:
    definition: functionName
  bindings:
    functionName-in-0:
      group: group-name
      destination: input-topic
      consumer:
        use-native-decoding: true
  kafka:
    bindings:
      functionName-in-0:
        consumer:
          enableDlq: true
          dlqName: dlq-topic
          dlqProducerProperties:
            configuration:
              value.serializer: org.apache.kafka.common.serialization.StringSerializer
          configuration:
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
```
We are providing the `ErrorHandlingDeserializer` through the `configuration` property on the binding.
We are also indicating that the actual deserializer to delegate is the `StringDeserializer`.
Keep in mind that none of the DLQ properties above are relevant to the deserialization errors discussed in this recipe.
They address application-level errors only.
=== Basic offset management in Kafka binder
==== Problem Statement
I want to write a Spring Cloud Stream Kafka consumer application, but I am not sure how it manages Kafka consumer offsets.
Can you explain?
==== Solution
We encourage you to read the https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current/reference/html/spring-cloud-stream-binder-kafka.html#reset-offsets[docs] section on this topic to get a thorough understanding of it.
Here is the gist:
Kafka supports two starting offsets by default - `earliest` and `latest`.
Their semantics are self-explanatory from their names.
Assume you are running the consumer for the first time.
If you omit the group.id in your Spring Cloud Stream application, it becomes an anonymous consumer.
With an anonymous consumer, the Spring Cloud Stream application by default starts from the `latest` available offset in the topic partition.
On the other hand, if you explicitly specify a group.id, the Spring Cloud Stream application by default starts from the `earliest` available offset in the topic partition.
In both cases above (consumers with explicit groups and anonymous groups), the starting offset can be switched around by using the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.startOffset` and setting it to either `earliest` or `latest`.
Now, assume that you have already run the consumer before and are starting it again.
In this case, the starting offset semantics above do not apply, because the consumer finds an already committed offset for the consumer group (in the case of an anonymous consumer, although the application does not provide a group.id, the binder auto-generates one for you).
It simply picks up from the last committed offset onward.
This is true even when a `startOffset` value is provided.
However, you can override the default behavior where the consumer starts from the last committed offset by using the `resetOffsets` property.
In order to do that, set the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.resetOffsets` to `true` (which is `false` by default).
Then make sure you provide the `startOffset` value (either `earliest` or `latest`).
When you do that and then start the consumer application, each time it starts, it behaves as if it is starting for the first time and ignores any committed offsets for the partition.
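For example, assuming a binding named `myConsumer-in-0` (a hypothetical name), the following combination forces the consumer to re-read from the beginning on every start:
```
spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.resetOffsets=true
spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.startOffset=earliest
```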
=== Seeking to arbitrary offsets in Kafka
==== Problem Statement
Using the Kafka binder, I know that it can set the starting offset to either `earliest` or `latest`, but I have a requirement to seek to an arbitrary offset somewhere in the middle.
Is there a way to achieve this using the Spring Cloud Stream Kafka binder?
==== Solution
Previously, we saw how the Kafka binder lets you tackle basic offset management.
By default, the binder does not allow you to rewind to an arbitrary offset, at least not through the mechanism we saw in that recipe.
However, there are some low-level strategies that the binder provides to achieve this use case.
Let's explore them.
First of all, when you want to reset to an arbitrary offset other than `earliest` or `latest`, make sure to leave the `resetOffsets` configuration at its default, which is `false`.
Then you have to provide a custom bean of type `KafkaBindingRebalanceListener`, which will be injected into all consumer bindings.
It is an interface that comes with a few default methods, but here is the method that we are interested in:
```
/**
 * Invoked when partitions are initially assigned or after a rebalance. Applications
 * might only want to perform seek operations on an initial assignment. While the
 * 'initial' argument is true for each thread (when concurrency is greater than 1),
 * implementations should keep track of exactly which partitions have been sought.
 * There is a race in that a rebalance could occur during startup and so a topic/
 * partition that has been sought on one thread may be re-assigned to another
 * thread and you may not wish to re-seek it at that time.
 * @param bindingName the name of the binding.
 * @param consumer the consumer.
 * @param partitions the partitions.
 * @param initial true if this is the initial assignment on the current thread.
 */
default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
        Collection<TopicPartition> partitions, boolean initial) {
    // do nothing
}
```
Let us look at the details.
In essence, this method will be invoked each time during the initial assignment for a topic partition or after a rebalance.
For better illustration, let us assume that our topic is `foo` and it has 4 partitions.
Initially, we start only a single consumer in the group, and this consumer will consume from all partitions.
When the consumer starts for the first time, all four partitions are initially assigned to it.
However, we do not want the partitions to start consuming at the default offset (`earliest`, since we define a group); rather, for each partition, we want consumption to begin after seeking to an arbitrary offset.
Imagine that you have a business case to consume from certain offsets as below.
```
Partition   start offset
0           1000
1           2000
2           2000
3           1000
```
This could be achieved by implementing the above method as below.
```
@Override
public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer, Collection<TopicPartition> partitions, boolean initial) {
    Map<TopicPartition, Long> topicPartitionOffset = new HashMap<>();
    topicPartitionOffset.put(new TopicPartition("foo", 0), 1000L);
    topicPartitionOffset.put(new TopicPartition("foo", 1), 2000L);
    topicPartitionOffset.put(new TopicPartition("foo", 2), 2000L);
    topicPartitionOffset.put(new TopicPartition("foo", 3), 1000L);
    if (initial) {
        partitions.forEach(tp -> {
            if (topicPartitionOffset.containsKey(tp)) {
                final Long offset = topicPartitionOffset.get(tp);
                try {
                    consumer.seek(tp, offset);
                }
                catch (Exception e) {
                    // Handle exceptions carefully.
                }
            }
        });
    }
}
```
This is just a rudimentary implementation.
Real-world use cases are much more complex, and you need to adjust accordingly, but this certainly gives you a basic sketch.
When the consumer `seek` call fails, it may throw runtime exceptions, and you need to decide what to do in those cases.
==== What if we start a second consumer with the same group id?
When we add a second consumer, a rebalance will occur and some partitions will be moved around.
Let's say that the new consumer gets partitions `2` and `3`.
When this new Spring Cloud Stream consumer calls the `onPartitionsAssigned` method, it will see that this is the initial assignment of partitions `2` and `3` on this consumer.
Therefore, it will perform the seek operation because of the conditional check on the `initial` argument.
The first consumer now only has partitions `0` and `1`.
However, for that consumer this was simply a rebalance event and not considered an initial assignment.
Thus, it will not re-seek to the given offsets because of the conditional check on the `initial` argument.
=== How do I manually acknowledge using Kafka binder?
==== Problem Statement
Using Kafka binder, I want to manually acknowledge messages in my consumer.
How do I do that?
==== Solution
By default, the Kafka binder delegates to the default commit settings of the Spring for Apache Kafka project.
The default `ackMode` in Spring Kafka is `batch`.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#committing-offsets[here] for more details on that.
There are situations in which you want to disable this default commit behavior and rely on manual commits.
The following steps allow you to do that.
Set the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.ackMode` to either `MANUAL` or `MANUAL_IMMEDIATE`.
When it is set like that, then there will be a header called `kafka_acknowledgment` (from `KafkaHeaders.ACKNOWLEDGMENT`) present in the message received by the consumer method.
For example, imagine this as your consumer method.
```
@Bean
public Consumer<Message<String>> myConsumer() {
    return msg -> {
        Acknowledgment acknowledgment = msg.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    };
}
```
Then set the property `spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.ackMode` to `MANUAL` or `MANUAL_IMMEDIATE`.
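In configuration form (the binding name follows from the `myConsumer` function above), that looks like:
```
spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.ackMode: MANUAL
```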
=== How do I override the default binding names in Spring Cloud Stream?
==== Problem Statement
Spring Cloud Stream creates default bindings based on the function definition and signature, but how do I override these with more domain-friendly names?
==== Solution
Assume that following is your function signature.
```
@Bean
public Function<String, String> uppercase() {
    ...
}
```
By default, Spring Cloud Stream will create the bindings as below.
1. uppercase-in-0
2. uppercase-out-0
You can override these binding names by using the following properties.
```
spring.cloud.stream.function.bindings.uppercase-in-0=my-transformer-in
spring.cloud.stream.function.bindings.uppercase-out-0=my-transformer-out
```
After this, all binding properties must be set on the new names, `my-transformer-in` and `my-transformer-out`.
Here is another example with Kafka Streams and multiple inputs.
```
@Bean
public BiFunction<KStream<String, Order>, KTable<String, Account>, KStream<String, EnrichedOrder>> processOrder() {
    ...
}
```
By default, Spring Cloud Stream will create three different binding names for this function.
1. processOrder-in-0
2. processOrder-in-1
3. processOrder-out-0
You have to use these binding names each time you want to set some configuration on them.
Suppose you would rather use more domain-friendly and readable binding names, for example, something like:
1. orders
2. accounts
3. enrichedOrders
You can do that by setting these three properties:
1. spring.cloud.stream.function.bindings.processOrder-in-0=orders
2. spring.cloud.stream.function.bindings.processOrder-in-1=accounts
3. spring.cloud.stream.function.bindings.processOrder-out-0=enrichedOrders
Once you do that, the default binding names are overridden, and any properties that you want to set must use these new binding names.
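For example, after renaming, further configuration is keyed by the new names; a minimal sketch (the destination topic names here are assumptions):
```
spring.cloud.stream.bindings.orders.destination=order-topic
spring.cloud.stream.bindings.accounts.destination=account-topic
spring.cloud.stream.bindings.enrichedOrders.destination=enriched-order-topic
```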
=== How do I send a message key as part of my record?
==== Problem Statement
I need to send a key along with the payload of the record, is there a way to do that in Spring Cloud Stream?
==== Solution
It is often necessary to send an associative data structure, such as a map, as the record, that is, with both a key and a value.
Spring Cloud Stream allows you to do that in a straightforward manner.
The following is a basic blueprint for doing this, but you may want to adapt it to your particular use case.
Here is a sample producer method (that is, a `Supplier`).
```
@Bean
public Supplier<Message<String>> supplier() {
    return () -> MessageBuilder.withPayload("foo").setHeader(KafkaHeaders.MESSAGE_KEY, "my-foo").build();
}
```
This is a trivial function that sends a message with a `String` payload, but also with a key.
Note that we set the key as a message header using `KafkaHeaders.MESSAGE_KEY`.
If you want to use a header other than the default `kafka_messageKey` as the key, specify this property in the configuration:
```
spring.cloud.stream.kafka.bindings.supplier-out-0.producer.messageKeyExpression=headers['my-special-key']
```
Please note that we use the binding name `supplier-out-0`, since it is derived from our function name; update it accordingly for your application.
Then we simply use this new header when we produce the message.
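Putting it together, here is a sketch of the supplier using that custom header; it mirrors the earlier supplier, with the header name matching the `messageKeyExpression` above:
```
@Bean
public Supplier<Message<String>> supplier() {
    return () -> MessageBuilder.withPayload("foo")
            .setHeader("my-special-key", "my-foo")
            .build();
}
```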
=== How do I use native serializer and deserializer instead of message conversion done by Spring Cloud Stream?
==== Problem Statement
Instead of using the message converters in Spring Cloud Stream, I want to use native Serializer and Deserializer in Kafka.
By default, Spring Cloud Stream takes care of this conversion using its internal built-in message converters.
How can I bypass this and delegate the responsibility to Kafka?
==== Solution
This is really easy to do.
All you have to do is provide the following property to enable native serialization.
```
spring.cloud.stream.kafka.bindings.<binding-name>.producer.useNativeEncoding: true
```
Then, you also need to set the serializers.
There are a couple of ways to do this.
```
spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
or using the binder configuration.
```
spring.cloud.stream.kafka.binder.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.binder.configuration.value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
When set at the binder level, the configuration applies to all bindings, whereas setting it on a binding applies to that binding only.
On the deserializing side, you just need to provide the deserializers as configuration.
For example,
```
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.configuration.key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.configuration.value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
```
You can also set them at the binder level.
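For example, mirroring the serializer example above (the binder-level `configuration` map is applied to all clients the binder creates):
```
spring.cloud.stream.kafka.binder.configuration.key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.kafka.binder.configuration.value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
```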
There is an optional property that you can set to force native decoding.
```
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.useNativeDecoding: true
```
However, in the case of the Kafka binder, this is unnecessary, because by the time the record reaches the binder, Kafka has already deserialized it using the configured deserializers.
=== Explain how offset resetting works in Kafka Streams binder
==== Problem Statement
By default, Kafka Streams binder always starts from the earliest offset for a new consumer.
Sometimes, it is beneficial or required by the application to start from the latest offset.
Kafka Streams binder allows you to do that.
==== Solution
Before we look at the solution, let us look at the following scenario.
```
@Bean
public BiConsumer<KStream<Object, Object>, KTable<Object, Object>> myBiConsumer() {
    return (s, t) -> s.join(t, ...);
}
```
We have a `BiConsumer` bean that requires two input bindings.
In this case, the first binding is for a `KStream` and the second one is for a `KTable`.
When running this application for the first time, by default, both bindings start from the `earliest` offset.
What if I want to start from the `latest` offset instead, due to some requirement?
You can do this by setting the following properties.
```
spring.cloud.stream.kafka.streams.bindings.myBiConsumer-in-0.consumer.startOffset: latest
spring.cloud.stream.kafka.streams.bindings.myBiConsumer-in-1.consumer.startOffset: latest
```
If you want only one binding to start from the `latest` offset and the other to consume from the default `earliest`, then leave the latter binding out of the configuration.
Keep in mind that, once there are committed offsets, these settings are *not* honored and the committed offsets take precedence.
=== Keeping track of successful sending of records (producing) in Kafka
==== Problem Statement
I have a Kafka producer application, and I want to keep track of all my successful sends.
==== Solution
Let us assume that we have the following supplier in the application.
```
@Bean
public Supplier<Message<String>> supplier() {
    return () -> MessageBuilder.withPayload("foo").setHeader(KafkaHeaders.MESSAGE_KEY, "my-foo").build();
}
```
Then, we need to define a new `MessageChannel` bean to capture all the successful send information.
```
@Bean
public MessageChannel fooRecordChannel() {
    return new DirectChannel();
}
```
Next, define this property in the application configuration to provide the bean name for the `recordMetadataChannel`.
```
spring.cloud.stream.kafka.bindings.supplier-out-0.producer.recordMetadataChannel: fooRecordChannel
```
At this point, information about each successful send is published to the `fooRecordChannel`.
You can write an `IntegrationFlow` as below to see that information.
```
@Bean
public IntegrationFlow integrationFlow() {
    return f -> f.channel("fooRecordChannel")
            .handle((payload, messageHeaders) -> payload);
}
```
In the `handle` method, the payload is what was sent to Kafka, and the message headers contain a special key called `kafka_recordMetadata`.
Its value is a `RecordMetadata` that contains information about the topic partition, the current offset, and so on.
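As a variation on the flow above, here is a sketch that extracts the `RecordMetadata` from that header and prints the topic, partition, and offset (the bean name is an assumption):
```
@Bean
public IntegrationFlow recordMetadataFlow() {
    return f -> f.channel("fooRecordChannel")
            .handle((payload, headers) -> {
                // the binder puts the RecordMetadata of the sent record under this header
                RecordMetadata metadata = (RecordMetadata) headers.get("kafka_recordMetadata");
                System.out.println("Sent to " + metadata.topic() + "-" + metadata.partition()
                        + " at offset " + metadata.offset());
                return null; // end the flow here
            });
}
```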
=== Adding custom header mapper in Kafka
==== Problem Statement
I have a Kafka producer application that sets some headers, but they are missing in the consumer application. Why is that?
==== Solution
Under normal circumstances, this should be fine.
Imagine you have the following producer.
```
@Bean
public Supplier<Message<String>> supply() {
    return () -> MessageBuilder.withPayload("foo").setHeader("foo", "bar").build();
}
```
On the consumer side, you should still see the header "foo", and the following should not give you any issues.
```
@Bean
public Consumer<Message<String>> consume() {
    return s -> {
        final String foo = (String) s.getHeaders().get("foo");
        System.out.println(foo);
    };
}
```
If you provide a https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.3/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_binder_properties[custom header mapper] in the application, then this won't work.
Let's say you have an empty `KafkaHeaderMapper` in the application.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
        }

        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
        }
    };
}
```
If that is your implementation, then you will miss the `foo` header on the consumer side.
Chances are that you have some logic inside those `KafkaHeaderMapper` methods.
You need something like the following to populate the `foo` header.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
            final String foo = (String) headers.get("foo");
            target.add("foo", foo.getBytes());
        }

        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
            final Header foo = source.lastHeader("foo");
            target.put("foo", new String(foo.value()));
        }
    };
}
```
That will properly propagate the `foo` header from the producer to the consumer.
==== Special note on the id header
In Spring Cloud Stream, the `id` header is a special header, but some applications may want to have special custom id headers - something like `custom-id` or `ID` or `Id`.
The first one (`custom-id`) will propagate without any custom header mapper from producer to consumer.
However, if you produce with a case variant of the framework-reserved `id` header - such as `ID`, `Id`, or `iD` - then you will run into issues with the internals of the framework.
See this https://stackoverflow.com/questions/68412600/change-the-behaviour-in-spring-cloud-stream-make-header-matcher-case-sensitive[StackOverflow thread] for more context on this use case.
In that case, you must use a custom `KafkaHeaderMapper` to map the case-sensitive id header.
For example, let's say you have the following producer.
```
@Bean
public Supplier<Message<String>> supply() {
    return () -> MessageBuilder.withPayload("foo").setHeader("Id", "my-id").build();
}
```
The header `Id` above will be gone from the consuming side as it clashes with the framework `id` header.
You can provide a custom `KafkaHeaderMapper` to solve this issue.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper1() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
            final String myId = (String) headers.get("Id");
            target.add("Id", myId.getBytes());
        }

        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
            final Header Id = source.lastHeader("Id");
            target.put("Id", new String(Id.value()));
        }
    };
}
```
By doing this, both `id` and `Id` headers will be available from the producer to the consumer side.
=== Producing to multiple topics in transaction
==== Problem Statement
How do I produce transactional messages to multiple Kafka topics?
For more context, see this https://stackoverflow.com/questions/68928091/dlq-bounded-retry-and-eos-when-producing-to-multiple-topics-using-spring-cloud[StackOverflow question].
==== Solution
Use the Kafka binder's transaction support and then provide an `AfterRollbackProcessor`.
In order to produce to multiple topics, use the `StreamBridge` API.
Below are the code snippets for this:
```
@Autowired
StreamBridge bridge;

@Bean
Consumer<String> input() {
    return str -> {
        System.out.println(str);
        this.bridge.send("left", str.toUpperCase());
        this.bridge.send("right", str.toLowerCase());
        if (str.equals("Fail")) {
            throw new RuntimeException("test");
        }
    };
}

@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(BinderFactory binders) {
    return (container, dest, group) -> {
        ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
                MessageChannel.class)).getTransactionalProducerFactory();
        KafkaTemplate<byte[], byte[]> template = new KafkaTemplate<>(pf);
        DefaultAfterRollbackProcessor rollbackProcessor = rollbackProcessor(template);
        container.setAfterRollbackProcessor(rollbackProcessor);
    };
}

DefaultAfterRollbackProcessor rollbackProcessor(KafkaTemplate<byte[], byte[]> template) {
    return new DefaultAfterRollbackProcessor<>(
            new DeadLetterPublishingRecoverer(template), new FixedBackOff(2000L, 2L), template, true);
}
```
==== Required Configuration
```
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-
spring.cloud.stream.kafka.binder.required-acks=all
spring.cloud.stream.bindings.input-in-0.group=foo
spring.cloud.stream.bindings.input-in-0.destination=input
spring.cloud.stream.bindings.left.destination=left
spring.cloud.stream.bindings.right.destination=right
spring.cloud.stream.kafka.bindings.input-in-0.consumer.maxAttempts=1
```
In order to test, you can use the following:
```
@Bean
public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
    return args -> {
        System.in.read();
        template.send("input", "Fail".getBytes());
        template.send("input", "Good".getBytes());
    };
}
```
Some important notes:
Please ensure that you don't have any DLQ settings in the application configuration, as we manually configure the DLT (by default, failed records are published to a topic named `input.DLT`, based on the initial consumer function).
Also, set `maxAttempts` on the consumer binding to `1` in order to avoid retries by the binder.
In the example above, a record is tried a total of three times (the initial attempt plus the two retries from the `FixedBackOff`).
See the https://stackoverflow.com/questions/68928091/dlq-bounded-retry-and-eos-when-producing-to-multiple-topics-using-spring-cloud[StackOverflow thread] for more details on how to test this code.
If you are using Spring Cloud Stream to test it by adding more consumer functions, make sure to set the `isolation-level` on the consumer binding to `read-committed`.
This https://stackoverflow.com/questions/68941306/spring-cloud-stream-database-transaction-does-not-roll-back[StackOverflow thread] is also related to this discussion.
=== Pitfalls to avoid when running multiple pollable consumers
==== Problem Statement
How can I run multiple instances of a pollable consumer and generate a unique `client.id` for each instance?
==== Solution
Assuming that I have the following definition:
```
spring.cloud.stream.pollable-source: foo
spring.cloud.stream.bindings.foo-in-0.group: my-group
```
When running the application, the Kafka consumer generates a client.id (something like `consumer-my-group-1`).
For each instance of the application that is running, this `client.id` will be the same, causing unexpected issues.
In order to fix this, you can add the following property on each instance of the application:
```
spring.cloud.stream.kafka.bindings.foo-in-0.consumer.configuration.client.id=${client.id}
```
See this https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1139[GitHub issue] for more details.
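For completeness, here is a simplistic sketch of how such a pollable source could be drained; the blocking loop inside an `ApplicationRunner` is an assumption made purely for illustration:
```
@Bean
public ApplicationRunner poller(PollableMessageSource source) {
    return args -> {
        while (true) {
            // poll() returns true if a message was received and handled
            boolean handled = source.poll(message -> System.out.println(message.getPayload()));
            if (!handled) {
                Thread.sleep(500);
            }
        }
    };
}
```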

pom.xml
View File

@@ -2,21 +2,29 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.7-SNAPSHOT</version>
<version>4.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>3.0.5</version>
<version>4.0.0-SNAPSHOT</version>
<relativePath />
</parent>
<scm>
<url>https://github.com/spring-cloud/spring-cloud-stream-binder-kafka</url>
<connection>scm:git:git://github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</connection>
<developerConnection>
scm:git:ssh://git@github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</developerConnection>
<tag>HEAD</tag>
</scm>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.6.12</spring-kafka.version>
<spring-integration-kafka.version>5.4.12</spring-integration-kafka.version>
<kafka.version>2.6.3</kafka.version>
<spring-cloud-schema-registry.version>1.1.5</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.1.7-SNAPSHOT</spring-cloud-stream.version>
<java.version>17</java.version>
<spring-kafka.version>3.0.0-M1</spring-kafka.version>
<spring-integration-kafka.version>6.0.0-SNAPSHOT</spring-integration-kafka.version>
<kafka.version>3.0.0</kafka.version>
<spring-cloud-stream.version>4.0.0-SNAPSHOT</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
@@ -27,7 +35,7 @@
<module>spring-cloud-stream-binder-kafka-core</module>
<module>spring-cloud-stream-binder-kafka-streams</module>
<module>docs</module>
</modules>
</modules>
<dependencyManagement>
<dependencies>
@@ -119,12 +127,6 @@
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<version>${spring-cloud-schema-registry.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>
@@ -139,14 +141,10 @@
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.7</version>
<!-- <version>1.7</version>-->
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
@@ -165,6 +163,16 @@
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
<compilerArgument>-parameters</compilerArgument>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
@@ -175,74 +183,91 @@
<profiles>
<profile>
<id>spring</id>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
<profile>
<id>coverage</id>
<activation>
<property>
<name>env.TRAVIS</name>
<value>true</value>
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.7.9</version>
<executions>
<execution>
<id>agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>report</id>
<phase>test</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</pluginRepository>
</pluginRepositories>
<reporting>
<plugins>
<plugin>

View File

@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.7-SNAPSHOT</version>
<version>4.0.0-SNAPSHOT</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>

View File

@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.7-SNAPSHOT</version>
<version>4.0.0-SNAPSHOT</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>

View File

@@ -28,10 +28,9 @@ import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import jakarta.validation.constraints.AssertTrue;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotNull;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;

View File

@@ -210,6 +210,12 @@ public class KafkaConsumerProperties {
*/
private boolean txCommitRecovered = true;
/**
* CommonErrorHandler bean name per consumer binding.
* @since 3.2
*/
private String commonErrorHandlerBeanName;
/**
* @return if each record needs to be acknowledged.
*
@@ -529,4 +535,11 @@ public class KafkaConsumerProperties {
this.txCommitRecovered = txCommitRecovered;
}
public String getCommonErrorHandlerBeanName() {
return commonErrorHandlerBeanName;
}
public void setCommonErrorHandlerBeanName(String commonErrorHandlerBeanName) {
this.commonErrorHandlerBeanName = commonErrorHandlerBeanName;
}
}

View File

@@ -19,7 +19,7 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import javax.validation.constraints.NotNull;
import jakarta.validation.constraints.NotNull;
import org.springframework.expression.Expression;

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.7-SNAPSHOT</version>
<version>4.0.0-SNAPSHOT</version>
</parent>
<properties>
@@ -73,40 +73,6 @@
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
</dependency>
<!-- Following dependencies are only provided for testing and won't be packaged with the binder apps-->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.avro</groupId>
<artifactId>avro</artifactId>
<version>${avro.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.avro</groupId>
<artifactId>avro-maven-plugin</artifactId>
<version>${avro.version}</version>
<executions>
<execution>
<phase>generate-test-sources</phase>
<goals>
<goal>schema</goal>
</goals>
<configuration>
<outputDirectory>${project.basedir}/target/generated-test-sources</outputDirectory>
<testOutputDirectory>${project.basedir}/target/generated-test-sources</testOutputDirectory>
<testSourceDirectory>${project.basedir}/src/test/resources/avro</testSourceDirectory>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

View File

@@ -164,6 +164,8 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
@@ -176,6 +178,8 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
}

View File

@@ -113,7 +113,9 @@ public class InteractiveQueryService {
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
while (iterator.hasNext()) {
try {
store = iterator.next().store(storeName, storeType);
store = iterator.next()
.store(StoreQueryParameters.fromNameAndType(
storeName, storeType));
}
catch (InvalidStateStoreException e) {
// pass through..
@@ -209,7 +211,7 @@ public class InteractiveQueryService {
throwable = e;
}
throw new IllegalStateException(
"Error when retrieving state store", throwable != null ? throwable : new Throwable("Kafka Streams is not ready."));
"Error when retrieving state store.", throwable != null ? throwable : new Throwable("Kafka Streams is not ready."));
});
}

View File

@@ -1,66 +0,0 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binding.StreamListenerParameterAdapter;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
/**
* {@link StreamListenerParameterAdapter} for KStream.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerParameterAdapter
implements StreamListenerParameterAdapter<KStream<?, ?>, KStream<?, ?>> {
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue;
KStreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.KafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
}
@Override
public boolean supports(Class bindingTargetType, MethodParameter methodParameter) {
return KafkaStreamsBinderUtils.supportsKStream(methodParameter, bindingTargetType);
}
@Override
@SuppressWarnings("unchecked")
public KStream adapt(KStream<?, ?> bindingTarget, MethodParameter parameter) {
ResolvableType resolvableType = ResolvableType.forMethodParameter(parameter);
final Class<?> valueClass = (resolvableType.getGeneric(1).getRawClass() != null)
? (resolvableType.getGeneric(1).getRawClass()) : Object.class;
if (this.KafkaStreamsBindingInformationCatalogue
.isUseNativeDecoding(bindingTarget)) {
return bindingTarget;
}
else {
return this.kafkaStreamsMessageConversionDelegate
.deserializeOnInbound(valueClass, bindingTarget);
}
}
}

View File

@@ -1,58 +0,0 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.io.Closeable;
import java.io.IOException;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
/**
* {@link StreamListenerResultAdapter} for KStream.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerResultAdapter implements
StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
@Override
public boolean supports(Class<?> resultType, Class<?> boundElement) {
return KStream.class.isAssignableFrom(resultType)
&& KStream.class.isAssignableFrom(boundElement);
}
@Override
@SuppressWarnings("unchecked")
public Closeable adapt(KStream streamListenerResult,
KStreamBoundElementFactory.KStreamWrapper boundElement) {
boundElement.wrap(streamListenerResult);
return new NoOpCloseable();
}
private static final class NoOpCloseable implements Closeable {
@Override
public void close() throws IOException {
}
}
}

View File

@@ -32,8 +32,8 @@ import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;
import org.apache.kafka.streams.TaskMetadata;
import org.apache.kafka.streams.ThreadMetadata;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
@@ -162,7 +162,7 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
}
if (isRunningResult) {
final Set<ThreadMetadata> threadMetadata = kafkaStreams.localThreadsMetadata();
final Set<ThreadMetadata> threadMetadata = kafkaStreams.metadataForLocalThreads();
for (ThreadMetadata metadata : threadMetadata) {
perAppdIdDetails.put("threadName", metadata.threadName());
perAppdIdDetails.put("threadState", metadata.threadState());

View File

@@ -17,7 +17,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Constructor;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@@ -50,10 +49,7 @@ import org.springframework.cloud.stream.binder.BinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.function.FunctionDetectorCondition;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.serde.CompositeNonNativeSerde;
import org.springframework.cloud.stream.binder.kafka.streams.serde.MessageConverterDelegateSerde;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BinderProperties;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
@@ -296,37 +292,6 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
}
}
@Bean
public KStreamStreamListenerResultAdapter kstreamStreamListenerResultAdapter() {
return new KStreamStreamListenerResultAdapter();
}
@Bean
public KStreamStreamListenerParameterAdapter kstreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamStreamListenerParameterAdapter(
kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue);
}
@Bean
public KafkaStreamsStreamListenerSetupMethodOrchestrator kafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
ObjectProvider<CleanupConfig> cleanupConfig,
ObjectProvider<StreamsBuilderFactoryBeanConfigurer> customizerProvider, ConfigurableEnvironment environment) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(
bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters,
cleanupConfig.getIfUnique(), customizerProvider.getIfUnique(), environment);
}
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
@@ -338,20 +303,6 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KafkaStreamsBindingInformationCatalogue, binderConfigurationProperties);
}
@Bean
public MessageConverterDelegateSerde messageConverterDelegateSerde(
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverterFactory) {
return new MessageConverterDelegateSerde(compositeMessageConverterFactory);
}
@Bean
public CompositeNonNativeSerde compositeNonNativeSerde(
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverterFactory) {
return new CompositeNonNativeSerde(compositeMessageConverterFactory);
}
@Bean
public KStreamBoundElementFactory kStreamBoundElementFactory(
BindingServiceProperties bindingServiceProperties,
@@ -412,8 +363,8 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KafkaStreamsBindingInformationCatalogue catalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
@Nullable KafkaStreamsMicrometerListener listener) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener);
@Nullable KafkaStreamsMicrometerListener listener, KafkaProperties kafkaProperties) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener, kafkaProperties);
}
@Bean

View File

@@ -42,7 +42,7 @@ import org.springframework.util.CollectionUtils;
* A catalogue that provides binding information for Kafka Streams target types such as
* KStream. It also keeps a catalogue for the underlying {@link StreamsBuilderFactoryBean}
* and {@link StreamsConfig} associated with various
* {@link org.springframework.cloud.stream.annotation.StreamListener} methods in the
* Kafka Streams functions in the
* {@link org.springframework.context.ApplicationContext}.
*
* @author Soby Chacko
@@ -55,6 +55,8 @@ public class KafkaStreamsBindingInformationCatalogue {
private final Map<String, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanPerBinding = new HashMap<>();
private final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> consumerPropertiesPerSbfb = new HashMap<>();
private final Map<Object, ResolvableType> outboundKStreamResolvables = new HashMap<>();
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
@@ -140,11 +142,19 @@ public class KafkaStreamsBindingInformationCatalogue {
this.streamsBuilderFactoryBeanPerBinding.put(binding, streamsBuilderFactoryBean);
}
void addConsumerPropertiesPerSbfb(StreamsBuilderFactoryBean streamsBuilderFactoryBean, ConsumerProperties consumerProperties) {
this.consumerPropertiesPerSbfb.computeIfAbsent(streamsBuilderFactoryBean, k -> new ArrayList<>());
this.consumerPropertiesPerSbfb.get(streamsBuilderFactoryBean).add(consumerProperties);
}
public Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> getConsumerPropertiesPerSbfb() {
return this.consumerPropertiesPerSbfb;
}
Map<String, StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeanPerBinding() {
return this.streamsBuilderFactoryBeanPerBinding;
}
void addOutboundKStreamResolvable(Object key, ResolvableType outboundResolvable) {
this.outboundKStreamResolvables.put(key, outboundResolvable);
}

View File

@@ -51,7 +51,6 @@ import org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStrea
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.function.FunctionConstants;
@@ -529,6 +528,8 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
final Class<?> valueClass =
@@ -560,7 +561,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
}
}
else {
throw new IllegalStateException(StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
//throw new IllegalStateException(StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;

View File

@@ -17,12 +17,12 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
@@ -37,9 +37,9 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
*/
public class KafkaStreamsRegistry {
private Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new HashMap<>();
private final Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new ConcurrentHashMap<>();
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();
private final Set<KafkaStreams> kafkaStreams = ConcurrentHashMap.newKeySet();
Set<KafkaStreams> getKafkaStreams() {
Set<KafkaStreams> currentlyRunningKafkaStreams = new HashSet<>();

View File

@@ -1,517 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.binding.StreamListenerParameterAdapter;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.binding.StreamListenerSetupMethodOrchestrator;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.ReflectionUtils;
import org.springframework.util.StringUtils;
/**
* Kafka Streams specific implementation for {@link StreamListenerSetupMethodOrchestrator}
* that overrides the default mechanisms for invoking StreamListener adapters.
* <p>
* The orchestration primarily focus on the following areas:
* <p>
* 1. Allow multiple KStream output bindings (KStream branching) by allowing more than one
* output values on {@link SendTo} 2. Allow multiple inbound bindings for multiple KStream
* and or KTable/GlobalKTable types. 3. Each StreamListener method that it orchestrates
* gets its own {@link StreamsBuilderFactoryBean} and {@link StreamsConfig}
*
* @author Soby Chacko
* @author Lei Chen
* @author Gary Russell
*/
class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStreamsBinderProcessor
implements StreamListenerSetupMethodOrchestrator {
private static final Log LOG = LogFactory
.getLog(KafkaStreamsStreamListenerSetupMethodOrchestrator.class);
private final StreamListenerParameterAdapter streamListenerParameterAdapter;
private final Collection<StreamListenerResultAdapter> streamListenerResultAdapters;
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final Map<Method, List<String>> registeredStoresPerMethod = new HashMap<>();
private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
StreamsBuilderFactoryBeanConfigurer customizer;
private final ConfigurableEnvironment environment;
KafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties extendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue bindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> listenerResultAdapters,
CleanupConfig cleanupConfig,
StreamsBuilderFactoryBeanConfigurer customizer,
ConfigurableEnvironment environment) {
super(bindingServiceProperties, bindingInformationCatalogue, extendedBindingProperties, keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = extendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = bindingInformationCatalogue;
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = listenerResultAdapters;
this.customizer = customizer;
this.environment = environment;
}
@Override
public boolean supports(Method method) {
return methodParameterSupports(method) && (methodReturnTypeSuppports(method)
|| Void.TYPE.equals(method.getReturnType()));
}
private boolean methodReturnTypeSuppports(Method method) {
Class<?> returnType = method.getReturnType();
if (returnType.equals(KStream.class) || (returnType.isArray()
&& returnType.getComponentType().equals(KStream.class))) {
return true;
}
return false;
}
private boolean methodParameterSupports(Method method) {
boolean supports = false;
for (int i = 0; i < method.getParameterCount(); i++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method, i);
Class<?> parameterType = methodParameter.getParameterType();
if (parameterType.equals(KStream.class) || parameterType.equals(KTable.class)
|| parameterType.equals(GlobalKTable.class)) {
supports = true;
}
}
return supports;
}
@Override
@SuppressWarnings({"rawtypes", "unchecked"})
public void orchestrateStreamListenerSetupMethod(StreamListener streamListener,
Method method, Object bean) {
String[] methodAnnotatedOutboundNames = getOutboundBindingTargetNames(method);
validateStreamListenerMethod(streamListener, method,
methodAnnotatedOutboundNames);
String methodAnnotatedInboundName = streamListener.value();
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(method,
methodAnnotatedInboundName, this.applicationContext,
this.streamListenerParameterAdapter);
try {
ReflectionUtils.makeAccessible(method);
if (Void.TYPE.equals(method.getReturnType())) {
method.invoke(bean, adaptedInboundArguments);
}
else {
Object result = method.invoke(bean, adaptedInboundArguments);
if (methodAnnotatedOutboundNames != null && methodAnnotatedOutboundNames.length > 0) {
if (result.getClass().isArray()) {
Assert.isTrue(
methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
}
if (methodAnnotatedOutboundNames != null && methodAnnotatedOutboundNames.length > 0) {
methodAnnotatedInboundName = populateInboundIfMissing(method, methodAnnotatedInboundName);
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeanPerBinding().get(methodAnnotatedInboundName);
if (result.getClass().isArray()) {
Object[] outboundKStreams = (Object[]) result;
int i = 0;
for (Object outboundKStream : outboundKStreams) {
final String methodAnnotatedOutboundName = methodAnnotatedOutboundNames[i++];
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(
methodAnnotatedOutboundName, streamsBuilderFactoryBean);
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundName);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean, ResolvableType.forMethodReturnType(method));
adaptStreamListenerResult(outboundKStream, targetBean);
}
}
else {
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(
methodAnnotatedOutboundNames[0], streamsBuilderFactoryBean);
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[0]);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean, ResolvableType.forMethodReturnType(method));
adaptStreamListenerResult(result, targetBean);
}
}
}
}
catch (Exception ex) {
throw new BeanInitializationException(
"Cannot setup StreamListener for " + method, ex);
}
}
private String populateInboundIfMissing(Method method, String methodAnnotatedInboundName) {
if (!StringUtils.hasText(methodAnnotatedInboundName)) {
Object[] arguments = new Object[method.getParameterTypes().length];
if (arguments.length > 0) {
MethodParameter methodParameter = MethodParameter.forExecutable(method, 0);
if (methodParameter.hasParameterAnnotation(Input.class)) {
Input methodAnnotation = methodParameter
.getParameterAnnotation(Input.class);
methodAnnotatedInboundName = methodAnnotation.value();
}
}
}
return methodAnnotatedInboundName;
}
@SuppressWarnings("unchecked")
private void adaptStreamListenerResult(Object outboundKStream, Object targetBean) {
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(
outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream,
targetBean);
break;
}
}
}
@Override
@SuppressWarnings({"unchecked"})
public Object[] adaptAndRetrieveInboundArguments(Method method, String inboundName,
ApplicationContext applicationContext,
StreamListenerParameterAdapter... adapters) {
Object[] arguments = new Object[method.getParameterTypes().length];
for (int parameterIndex = 0; parameterIndex < arguments.length; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Class<?> parameterType = methodParameter.getParameterType();
Object targetReferenceValue = null;
if (methodParameter.hasParameterAnnotation(Input.class)) {
targetReferenceValue = AnnotationUtils
.getValue(methodParameter.getParameterAnnotation(Input.class));
Input methodAnnotation = methodParameter
.getParameterAnnotation(Input.class);
inboundName = methodAnnotation.value();
}
else if (arguments.length == 1 && StringUtils.hasText(inboundName)) {
targetReferenceValue = inboundName;
}
if (targetReferenceValue != null) {
Assert.isInstanceOf(String.class, targetReferenceValue,
"Annotation value must be a String");
Object targetBean = applicationContext
.getBean((String) targetReferenceValue);
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
// Retrieve the StreamsConfig created for this method if available.
// Otherwise, create the StreamsBuilderFactory and get the underlying
// config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(method.getDeclaringClass().getSimpleName() + "-" + method.getName(),
applicationContext,
inboundName, null, customizer, this.environment, bindingProperties);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap
.get(method);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
extendedConsumerProperties.setApplicationId(applicationId);
// get state store spec
KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);
Serde<?> keySerde = this.keyValueSerdeResolver
.getInboundKeySerde(extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter));
LOG.info("Key Serde used for " + targetReferenceValue + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding() ?
getValueSerde(inboundName, extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + targetReferenceValue + ": " + valueSerde.getClass().getName());
Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(inboundName, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(inboundName, spec,
bindingProperties, extendedConsumerProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset, parameterIndex == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde(stream, keySerde);
BindingProperties bindingProperties1 = this.kafkaStreamsBindingInformationCatalogue.getBindingProperties().get(kStreamWrapper);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(inboundName, streamsBuilderFactoryBean);
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {
arguments[parameterIndex] = streamListenerParameterAdapter
.adapt(stream, methodParameter);
break;
}
}
if (arguments[parameterIndex] == null
&& parameterType.isAssignableFrom(stream.getClass())) {
arguments[parameterIndex] = stream;
}
Assert.notNull(arguments[parameterIndex],
"Cannot convert argument " + parameterIndex + " of "
+ method + "from " + stream.getClass() + " to "
+ parameterType);
}
else {
handleKTableGlobalKTableInputs(arguments, parameterIndex, inboundName, parameterType, targetBean, streamsBuilderFactoryBean,
streamsBuilder, extendedConsumerProperties, keySerde, valueSerde, autoOffsetReset, parameterIndex == 0);
}
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
}
else {
throw new IllegalStateException(
StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;
}
private StoreBuilder buildStateStore(KafkaStreamsStateStoreProperties spec) {
try {
Serde<?> keySerde = this.keyValueSerdeResolver
.getStateStoreKeySerde(spec.getKeySerdeString());
Serde<?> valueSerde = this.keyValueSerdeResolver
.getStateStoreValueSerde(spec.getValueSerdeString());
StoreBuilder builder;
switch (spec.getType()) {
case KEYVALUE:
builder = Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore(spec.getName()), keySerde,
valueSerde);
break;
case WINDOW:
builder = Stores
.windowStoreBuilder(
Stores.persistentWindowStore(spec.getName(),
spec.getRetention(), 3, spec.getLength(), false),
keySerde, valueSerde);
break;
case SESSION:
builder = Stores.sessionStoreBuilder(Stores.persistentSessionStore(
spec.getName(), spec.getRetention()), keySerde, valueSerde);
break;
default:
throw new UnsupportedOperationException(
"state store type (" + spec.getType() + ") is not supported!");
}
if (spec.isCacheEnabled()) {
builder = builder.withCachingEnabled();
}
if (spec.isLoggingDisabled()) {
builder = builder.withLoggingDisabled();
}
return builder;
}
catch (Exception ex) {
LOG.error("failed to build state store exception : " + ex);
throw ex;
}
}
private KStream<?, ?> getkStream(String inboundName,
KafkaStreamsStateStoreProperties storeSpec,
BindingProperties bindingProperties,
KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde,
Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (storeSpec != null) {
StoreBuilder storeBuilder = buildStateStore(storeSpec);
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
}
return getKStream(inboundName, bindingProperties, kafkaStreamsConsumerProperties, streamsBuilder,
keySerde, valueSerde, autoOffsetReset, firstBuild);
}
private void validateStreamListenerMethod(StreamListener streamListener,
Method method, String[] methodAnnotatedOutboundNames) {
String methodAnnotatedInboundName = streamListener.value();
if (methodAnnotatedOutboundNames != null) {
for (String s : methodAnnotatedOutboundNames) {
if (StringUtils.hasText(s)) {
Assert.isTrue(isDeclarativeOutput(method, s),
"Method must be declarative");
}
}
}
if (StringUtils.hasText(methodAnnotatedInboundName)) {
int methodArgumentsLength = method.getParameterTypes().length;
for (int parameterIndex = 0; parameterIndex < methodArgumentsLength; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Assert.isTrue(
isDeclarativeInput(methodAnnotatedInboundName, methodParameter),
"Method must be declarative");
}
}
}
@SuppressWarnings("unchecked")
private boolean isDeclarativeOutput(Method m, String targetBeanName) {
boolean declarative;
Class<?> returnType = m.getReturnType();
if (returnType.isArray()) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
declarative = this.streamListenerResultAdapters.stream()
.anyMatch((slpa) -> slpa.supports(returnType.getComponentType(),
targetBeanClass));
return declarative;
}
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
declarative = this.streamListenerResultAdapters.stream()
.anyMatch((slpa) -> slpa.supports(returnType, targetBeanClass));
return declarative;
}
@SuppressWarnings("unchecked")
private boolean isDeclarativeInput(String targetBeanName,
MethodParameter methodParameter) {
if (!methodParameter.getParameterType().isAssignableFrom(Object.class)
&& this.applicationContext.containsBean(targetBeanName)) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
if (targetBeanClass != null) {
boolean supports = KafkaStreamsBinderUtils.supportsKStream(methodParameter, targetBeanClass);
if (!supports) {
supports = KTable.class.isAssignableFrom(targetBeanClass)
&& KTable.class.isAssignableFrom(methodParameter.getParameterType());
if (!supports) {
supports = GlobalKTable.class.isAssignableFrom(targetBeanClass)
&& GlobalKTable.class.isAssignableFrom(methodParameter.getParameterType());
}
}
return supports;
}
}
return false;
}
private static String[] getOutboundBindingTargetNames(Method method) {
SendTo sendTo = AnnotationUtils.findAnnotation(method, SendTo.class);
if (sendTo != null) {
Assert.isTrue(!ObjectUtils.isEmpty(sendTo.value()),
StreamListenerErrorMessages.ATLEAST_ONE_OUTPUT);
Assert.isTrue(sendTo.value().length >= 1,
"At least one outbound destination need to be provided.");
return sendTo.value();
}
return null;
}
@SuppressWarnings({"unchecked"})
private KafkaStreamsStateStoreProperties buildStateStoreSpec(Method method) {
if (!this.registeredStoresPerMethod.containsKey(method)) {
KafkaStreamsStateStore spec = AnnotationUtils.findAnnotation(method,
KafkaStreamsStateStore.class);
if (spec != null) {
Assert.isTrue(!ObjectUtils.isEmpty(spec.name()), "name cannot be empty");
Assert.isTrue(spec.name().length() >= 1, "name cannot be empty.");
this.registeredStoresPerMethod.put(method, new ArrayList<>());
this.registeredStoresPerMethod.get(method).add(spec.name());
KafkaStreamsStateStoreProperties props = new KafkaStreamsStateStoreProperties();
props.setName(spec.name());
props.setType(spec.type());
props.setLength(spec.lengthMs());
props.setKeySerdeString(spec.keySerde());
props.setRetention(spec.retentionMs());
props.setValueSerdeString(spec.valueSerde());
props.setCacheEnabled(spec.cache());
props.setLoggingDisabled(!spec.logging());
return props;
}
}
return null;
}
}
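For context, here is a minimal sketch of the kind of StreamListener processor this orchestrator wires up, with one inbound KStream binding and two outbound bindings resolved through {@code @SendTo} and legacy KStream branching. The bindable interface, binding names, and predicates are illustrative, not part of the repository.

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

// Sketch only: hypothetical bindable interface and processor handled by the orchestrator.
interface BranchingKStreamProcessor {

    @Input("input")
    KStream<?, ?> input();

    @Output("output-1")
    KStream<?, ?> output1();

    @Output("output-2")
    KStream<?, ?> output2();
}

@EnableBinding(BranchingKStreamProcessor.class)
class BranchingApplication {

    // The KStream[] return plus the two @SendTo values give the orchestrator
    // two outbound bindings to adapt (KStream branching).
    @StreamListener("input")
    @SendTo({"output-1", "output-2"})
    @SuppressWarnings("unchecked")
    public KStream<Object, String>[] process(KStream<Object, String> input) {
        return input.branch(
                (key, value) -> value.startsWith("a"),
                (key, value) -> !value.startsWith("a"));
    }
}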

View File

@@ -16,9 +16,15 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
@@ -33,12 +39,11 @@ import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;
* This {@link SmartLifecycle} class ensures that the bean created from it is started very
* late through the bootstrap process by setting the phase value closer to
* Integer.MAX_VALUE. This is to guarantee that the {@link StreamsBuilderFactoryBean} on a
* {@link org.springframework.cloud.stream.annotation.StreamListener} method with multiple
* bindings is only started after all the binding phases have completed successfully.
* function with multiple bindings is only started after all the binding phases have completed successfully.
*
* @author Soby Chacko
*/
class StreamsBuilderFactoryManager implements SmartLifecycle {
public class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
@@ -50,19 +55,23 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
private volatile boolean running;
private final KafkaProperties kafkaProperties;
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
KafkaStreamsMicrometerListener listener) {
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
KafkaStreamsMicrometerListener listener,
KafkaProperties kafkaProperties) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
this.listener = listener;
this.kafkaProperties = kafkaProperties;
}
@Override
public boolean isAutoStartup() {
return true;
return this.kafkaProperties == null || this.kafkaProperties.getStreams().isAutoStartup();
}
@Override
@@ -79,13 +88,24 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
int n = 0;
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
if (this.listener != null) {
streamsBuilderFactoryBean.addListener(this.listener);
}
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
// By default, we shut down the client if there is an uncaught exception in the application.
// Users can override this by customizing SBFB. See this issue for more details:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(exception ->
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
// Starting the stream.
final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> bindingServicePropertiesPerSbfb =
this.kafkaStreamsBindingInformationCatalogue.getConsumerPropertiesPerSbfb();
final List<ConsumerProperties> consumerProperties = bindingServicePropertiesPerSbfb.get(streamsBuilderFactoryBean);
final boolean autoStartupDisabledOnAtLeastOneConsumerBinding = consumerProperties.stream().anyMatch(consumerProperties1 -> !consumerProperties1.isAutoStartup());
if (!autoStartupDisabledOnAtLeastOneConsumerBinding) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
}
}
if (this.kafkaStreamsBinderMetrics != null) {
this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
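As the inline comment above notes, the manager installs a SHUTDOWN_CLIENT uncaught exception handler by default, and applications can override it by customizing the StreamsBuilderFactoryBean. A minimal sketch of such a customizer bean follows; the bean name and the REPLACE_THREAD choice are illustrative, not taken from the repository.

import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;

@Configuration
class StreamsCustomizerConfig {

    // Replaces the failed stream thread instead of shutting down the Kafka Streams client.
    @Bean
    public StreamsBuilderFactoryBeanConfigurer uncaughtExceptionHandlerCustomizer() {
        return factoryBean -> factoryBean.setStreamsUncaughtExceptionHandler(
                exception -> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);
    }
}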

View File

@@ -1,90 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.annotations;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
/**
* Bindable interface for {@link KStream} input and output.
*
* This interface can be used as a bindable interface with
* {@link org.springframework.cloud.stream.annotation.EnableBinding} when both the input and
* output types are a single KStream. In other scenarios where multiple types are required,
* similar bindable interfaces can be created and used. For example, there are cases
* in which multiple KStreams are required on the outbound side (KStream branching), or
* multiple input types are required, either as multiple KStreams or as a combination of
* KStreams and KTables. In those cases, new bindable interfaces compatible with the
* requirements must be created. Here are some examples.
*
* <pre class="code">
* interface KStreamBranchProcessor {
* &#064;Input("input")
* KStream&lt;?, ?&gt; input();
*
* &#064;Output("output-1")
* KStream&lt;?, ?&gt; output1();
*
* &#064;Output("output-2")
* KStream&lt;?, ?&gt; output2();
*
* &#064;Output("output-3")
* KStream&lt;?, ?&gt; output3();
*
* ......
*
* }
*</pre>
*
* <pre class="code">
* interface KStreamKtableProcessor {
* &#064;Input("input-1")
* KStream&lt;?, ?&gt; input1();
*
* &#064;Input("input-2")
* KTable&lt;?, ?&gt; input2();
*
* &#064;Output("output")
* KStream&lt;?, ?&gt; output();
*
* ......
*
* }
*</pre>
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
public interface KafkaStreamsProcessor {
/**
* Input binding.
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input")
KStream<?, ?> input();
/**
* Output binding.
* @return {@link Output} binding for {@link KStream} type.
*/
@Output("output")
KStream<?, ?> output();
}
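For reference, a minimal sketch of how this (now removed) interface was typically used in the StreamListener-based programming model; the application class and mapValues logic are illustrative.

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(KafkaStreamsProcessor.class)
class UppercaseProcessorApplication {

    // Consumes from the "input" binding and produces to the "output" binding.
    @StreamListener("input")
    @SendTo("output")
    public KStream<Object, String> process(KStream<Object, String> input) {
        return input.mapValues(String::toUpperCase);
    }
}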

View File

@@ -1,115 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.annotations;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;
/**
* Interface for Kafka Stream state store.
*
* This annotation can be used to inject a state store specification into the KStream building
* process so that the desired store can be built by the StreamsBuilder and added to the topology
* for later use by processors. This is particularly useful when you need to combine the streams
* DSL with the low-level Processor API. In those cases, if a writable state store is needed
* in processors, it can be created using this annotation. Here is an example.
*
* <pre class="code">
* &#064;StreamListener("input")
* &#064;KafkaStreamsStateStore(name="mystate", type= KafkaStreamsStateStoreProperties.StoreType.WINDOW,
* size=300000)
* public void process(KStream&lt;Object, Product&gt; input) {
* ......
* }
* </pre>
*
* With that, you should be able to read/write this state store in your
* processor/transformer code.
*
* <pre class="code">
* new Processor&lt;Object, Product&gt;() {
* WindowStore&lt;Object, String&gt; state;
* &#064;Override
* public void init(ProcessorContext processorContext) {
* state = (WindowStore)processorContext.getStateStore("mystate");
* ......
* }
* }
* </pre>
*
* @author Lei Chen
*/
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.ANNOTATION_TYPE })
@Retention(RetentionPolicy.RUNTIME)
public @interface KafkaStreamsStateStore {
/**
* Provides name of the state store.
* @return name of state store.
*/
String name() default "";
/**
* State store type.
* @return {@link KafkaStreamsStateStoreProperties.StoreType} of state store.
*/
KafkaStreamsStateStoreProperties.StoreType type() default KafkaStreamsStateStoreProperties.StoreType.KEYVALUE;
/**
* Serde used for key.
* @return key serde of state store.
*/
String keySerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Serde used for value.
* @return value serde of state store.
*/
String valueSerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Length in milliseconds of the window for a windowed store.
* @return window length in milliseconds (for windowed stores).
*/
long lengthMs() default 0;
/**
* Retention period for windowed store windows.
* @return the maximum period of time in milliseconds to keep each window in this
* store (for windowed stores).
*/
long retentionMs() default 0;
/**
* Whether caching is enabled or not.
* @return whether caching should be enabled on the created store.
*/
boolean cache() default false;
/**
* Whether logging is enabled or not.
* @return whether logging should be enabled on the created store.
*/
boolean logging() default true;
}

View File

@@ -1,161 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.properties;
/**
* Properties for Kafka Streams state store.
*
* @author Lei Chen
*/
public class KafkaStreamsStateStoreProperties {
/**
* Enumeration for store type.
*/
public enum StoreType {
/**
* Key value store.
*/
KEYVALUE("keyvalue"),
/**
* Window store.
*/
WINDOW("window"),
/**
* Session store.
*/
SESSION("session");
private final String type;
StoreType(final String type) {
this.type = type;
}
@Override
public String toString() {
return this.type;
}
}
/**
* Name for this state store.
*/
private String name;
/**
* Type for this state store.
*/
private StoreType type;
/**
* Size/length of this state store in ms. Only applicable for window store.
*/
private long length;
/**
* Retention period for this state store in ms.
*/
private long retention;
/**
* Key serde class specified per state store.
*/
private String keySerdeString;
/**
* Value serde class specified per state store.
*/
private String valueSerdeString;
/**
* Whether caching is enabled on this state store.
*/
private boolean cacheEnabled;
/**
* Whether logging is enabled on this state store.
*/
private boolean loggingDisabled;
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public StoreType getType() {
return this.type;
}
public void setType(StoreType type) {
this.type = type;
}
public long getLength() {
return this.length;
}
public void setLength(long length) {
this.length = length;
}
public long getRetention() {
return this.retention;
}
public void setRetention(long retention) {
this.retention = retention;
}
public String getKeySerdeString() {
return this.keySerdeString;
}
public void setKeySerdeString(String keySerdeString) {
this.keySerdeString = keySerdeString;
}
public String getValueSerdeString() {
return this.valueSerdeString;
}
public void setValueSerdeString(String valueSerdeString) {
this.valueSerdeString = valueSerdeString;
}
public boolean isCacheEnabled() {
return this.cacheEnabled;
}
public void setCacheEnabled(boolean cacheEnabled) {
this.cacheEnabled = cacheEnabled;
}
public boolean isLoggingDisabled() {
return this.loggingDisabled;
}
public void setLoggingDisabled(boolean loggingDisabled) {
this.loggingDisabled = loggingDisabled;
}
}

View File

@@ -1,37 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import org.springframework.messaging.converter.CompositeMessageConverter;
/**
* This class provides the same functionality as {@link MessageConverterDelegateSerde} and is deprecated.
* It is kept for backward compatibility reasons and will be removed in version 3.1
*
* @author Soby Chacko
* @since 2.1
*
* @deprecated in favor of {@link MessageConverterDelegateSerde}
*/
@Deprecated
public class CompositeNonNativeSerde extends MessageConverterDelegateSerde {
public CompositeNonNativeSerde(CompositeMessageConverter compositeMessageConverter) {
super(compositeMessageConverter);
}
}

View File

@@ -1,226 +0,0 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s from
* {@link CompositeMessageConverter}.
*
* The primary motivation for this class is to provide an Avro-based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based
* application, applications can deserialize the incoming Kafka Streams records using
* the built-in Avro {@link MessageConverter}. However, this same message conversion
* approach will not work downstream in other operations in the topology, as some of them
* need a {@link Serde} instance that can talk to the Spring Cloud Stream provided
* Schema Registry. This implementation solves that problem.
*
* Only Avro and JSON based converters are exposed as binder provided {@link Serde}
* implementations currently.
*
* Users of this class must call the
* {@link MessageConverterDelegateSerde#configure(Map, boolean)} method to configure the
* {@link Serde} object. At the very least the configuration map must include a key called
* "valueClass" to indicate the type of the target object for deserialization. If any
* other content type other than JSON is needed (only Avro is available now other than
* JSON), that needs to be included in the configuration map with the key "contentType".
* For example,
*
* <pre class="code">
* Map&lt;String, Object&gt; config = new HashMap&lt;&gt;();
* config.put("valueClass", Foo.class);
* config.put("contentType", "application/avro");
* </pre>
*
* Then use the above map when calling the configure method.
*
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams
* application that uses Spring Cloud Stream schema registry for schema evolution.
*
* An instance of this class is provided as a bean by the binder configuration and
* typically the applications can autowire that bean. This is the expected usage pattern
* of this class.
*
* @param <T> type of the object to marshall
* @author Soby Chacko
* @since 3.0
*/
public class MessageConverterDelegateSerde<T> implements Serde<T> {
private static final String VALUE_CLASS_HEADER = "valueClass";
private static final String AVRO_FORMAT = "avro";
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application",
"*+" + AVRO_FORMAT);
private final MessageConverterDelegateDeserializer<T> messageConverterDelegateDeserializer;
private final MessageConverterDelegateSerializer<T> messageConverterDelegateSerializer;
public MessageConverterDelegateSerde(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverterDelegateDeserializer = new MessageConverterDelegateDeserializer<>(
compositeMessageConverter);
this.messageConverterDelegateSerializer = new MessageConverterDelegateSerializer<>(
compositeMessageConverter);
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.messageConverterDelegateDeserializer.configure(configs, isKey);
this.messageConverterDelegateSerializer.configure(configs, isKey);
}
@Override
public void close() {
// No-op
}
@Override
public Serializer<T> serializer() {
return this.messageConverterDelegateSerializer;
}
@Override
public Deserializer<T> deserializer() {
return this.messageConverterDelegateDeserializer;
}
private static MimeType resolveMimeType(Map<String, ?> configs) {
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)) {
String contentType = (String) configs.get(MessageHeaders.CONTENT_TYPE);
if (DEFAULT_AVRO_MIME_TYPE.equals(MimeTypeUtils.parseMimeType(contentType))) {
return DEFAULT_AVRO_MIME_TYPE;
}
else if (contentType.contains("avro")) {
return MimeTypeUtils.parseMimeType("application/avro");
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
/**
* Custom {@link Deserializer} that uses the {@link org.springframework.cloud.stream.converter.CompositeMessageConverterFactory}.
*
* @param <U> parameterized target type for deserialization
*/
private static class MessageConverterDelegateDeserializer<U> implements Deserializer<U> {
private final MessageConverter messageConverter;
private MimeType mimeType;
private Class<?> valueClass;
MessageConverterDelegateDeserializer(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER),
"Deserializers must provide a configuration for valueClass.");
final Object valueClass = configs.get(VALUE_CLASS_HEADER);
Assert.isTrue(valueClass instanceof Class,
"Deserializers must provide a valid value for valueClass.");
this.valueClass = (Class<?>) valueClass;
this.mimeType = resolveMimeType(configs);
}
@SuppressWarnings("unchecked")
@Override
public U deserialize(String topic, byte[] data) {
Message<?> message = MessageBuilder.withPayload(data)
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString())
.build();
U messageConverted = (U) this.messageConverter.fromMessage(message,
this.valueClass);
Assert.notNull(messageConverted, "Deserialization failed.");
return messageConverted;
}
@Override
public void close() {
// No-op
}
}
/**
* Custom {@link Serializer} that uses the {@link org.springframework.cloud.stream.converter.CompositeMessageConverterFactory}.
*
* @param <V> parameterized type for serialization
*/
private static class MessageConverterDelegateSerializer<V> implements Serializer<V> {
private final MessageConverter messageConverter;
private MimeType mimeType;
MessageConverterDelegateSerializer(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.mimeType = resolveMimeType(configs);
}
@Override
public byte[] serialize(String topic, V data) {
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, this.mimeType.toString());
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = this.messageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
// No-op
}
}
}
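A minimal sketch of the configure-then-use pattern described in the Javadoc above, assuming the binder-provided serde bean has been injected; the wrapper class, Foo domain type, and grouping operation are illustrative.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

class MessageConverterDelegateSerdeUsage {

    // 'serde' would typically be the binder-provided bean autowired into the application;
    // Foo is an illustrative domain type, not part of the repository.
    KTable<String, Long> countByKey(KStream<String, Foo> input, MessageConverterDelegateSerde<Foo> serde) {
        Map<String, Object> config = new HashMap<>();
        config.put("valueClass", Foo.class);
        config.put("contentType", "application/avro");
        serde.configure(config, false);

        // The configured Serde can now be used wherever Kafka Streams needs one, e.g. when re-grouping.
        return input
                .groupByKey(Grouped.with(Serdes.String(), serde))
                .count();
    }
}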

View File

@@ -224,6 +224,7 @@ public class KafkaStreamsFunctionCompositionTests {
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.applicationId=my-app-id",
"--spring.cloud.stream.function.definition=fooBiFunc|anotherFooFunc|yetAnotherFooFunc|lastFunctionInChain",
"--spring.cloud.stream.function.bindings.fooBiFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-0=input1",
"--spring.cloud.stream.function.bindings.fooBiFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-1=input2",
@@ -266,6 +267,7 @@ public class KafkaStreamsFunctionCompositionTests {
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.applicationId=my-app-id-xyz",
"--spring.cloud.stream.function.definition=curriedFunc|anotherFooFunc|yetAnotherFooFunc|lastFunctionInChain",
"--spring.cloud.stream.function.bindings.curriedFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-0=input1",
"--spring.cloud.stream.function.bindings.curriedFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-1=input2",

View File

@@ -19,6 +19,7 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -29,10 +30,11 @@ import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.QueryableStoreTypes;
@@ -40,7 +42,6 @@ import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.mockito.Mockito;
@@ -48,13 +49,11 @@ import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -62,9 +61,9 @@ import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.internal.verification.VerificationModeFactory.times;
/**
@@ -124,7 +123,8 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
catch (Exception ignored) {
}
Mockito.verify(mockKafkaStreams, times(3)).store("foo", storeType);
Mockito.verify(mockKafkaStreams, times(3))
.store(StoreQueryParameters.fromNameAndType("foo", storeType));
}
@Test
@@ -147,22 +147,23 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
QueryableStoreType<ReadOnlyKeyValueStore<Object, Object>> storeType = QueryableStoreTypes.keyValueStore();
final StringSerializer serializer = new StringSerializer();
try {
interactiveQueryService.getHostInfo("foo", "fooKey", serializer);
interactiveQueryService.getHostInfo("foo", "foobarApp-key", serializer);
}
catch (Exception ignored) {
}
Mockito.verify(mockKafkaStreams, times(3))
.queryMetadataForKey("foo", "fooKey", serializer);
.queryMetadataForKey("foo", "foobarApp-key", serializer);
}
@Test
@Ignore
public void testKstreamBinderWithPojoInputAndStringOuput() throws Exception {
public void testKstreamBinderWithPojoInputAndStringOuput() {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.function.bindings.process-out-0=output",
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
@@ -223,29 +224,27 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
assertThat(hostInfo.host() + ":" + hostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfoFoo = interactiveQueryService
.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer());
assertThat(hostInfoFoo).isNull();
assertThatThrownBy(() -> interactiveQueryService
.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer()))
.isInstanceOf(IllegalStateException.class)
.hasMessageContaining("Error when retrieving state store.");
final List<HostInfo> hostInfos = interactiveQueryService.getAllHostsInfo("prod-id-count-store");
assertThat(hostInfos.size()).isEqualTo(1);
final HostInfo hostInfo1 = hostInfos.get(0);
assertThat(hostInfo1.host() + ":" + hostInfo1.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<Object, Product> input) {
@Bean
public Function<KStream<Object, Product>, KStream<?, String>> process() {
return input.filter((key, product) -> product.getId() == 123)
return input -> input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value.id, value))
.groupByKey(Serialized.with(new Serdes.IntegerSerde(),
.groupByKey(Grouped.with(new Serdes.IntegerSerde(),
new JsonSerde<>(Product.class)))
.count(Materialized.as("prod-id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(null,
@@ -257,6 +256,11 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
return new Foo(interactiveQueryService);
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
static class Foo {
InteractiveQueryService interactiveQueryService;
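The test above exercises InteractiveQueryService against the "prod-id-count-store" materialized by the processor. A minimal sketch of querying that store by name, using the documented getQueryableStore pattern; the wrapper class and product id are illustrative.

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

class ProductCountQuery {

    private final InteractiveQueryService interactiveQueryService;

    ProductCountQuery(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    // Looks up the count for a product id from the store materialized by the processor above.
    Long countForProduct(int productId) {
        ReadOnlyKeyValueStore<Integer, Long> store = this.interactiveQueryService
                .getQueryableStore("prod-id-count-store", QueryableStoreTypes.keyValueStore());
        return store.get(productId);
    }
}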

View File

@@ -31,7 +31,6 @@ import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
@@ -39,7 +38,6 @@ import org.springframework.cloud.stream.binder.kafka.streams.KeyValueSerdeResolv
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
@@ -143,18 +141,19 @@ public class KafkaStreamsBinderBootstrapTest {
assertThat(streamsConfiguration3.containsKey("spring.json.value.type.method")).isFalse();
applicationContext.getBean(KeyValueSerdeResolver.class);
String configuredSerdeTypeResolver = (String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
.getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeResolver.arg$2");
assertThat(this.getClass().getName() + ".determineType").isEqualTo(configuredSerdeTypeResolver);
String configuredKeyDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
.getPropertyValue("taskTopology.processorNodes[0].keyDeserializer.typeMapper.classIdFieldName"));
assertThat(DefaultJackson2JavaTypeMapper.KEY_DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredKeyDeserializerFieldName);
String configuredValueDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
.getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeMapper.classIdFieldName"));
assertThat(DefaultJackson2JavaTypeMapper.DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredValueDeserializerFieldName);
//TODO: In Kafka Streams 3.1, taskTopology field is removed. Re-evaluate this testing strategy.
// String configuredSerdeTypeResolver = (String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
// .getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeResolver.arg$2");
//
// assertThat(this.getClass().getName() + ".determineType").isEqualTo(configuredSerdeTypeResolver);
//
// String configuredKeyDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
// .getPropertyValue("taskTopology.processorNodes[0].keyDeserializer.typeMapper.classIdFieldName"));
// assertThat(DefaultJackson2JavaTypeMapper.KEY_DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredKeyDeserializerFieldName);
//
// String configuredValueDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
// .getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeMapper.classIdFieldName"));
// assertThat(DefaultJackson2JavaTypeMapper.DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredValueDeserializerFieldName);
applicationContext.close();
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
@@ -48,6 +49,9 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class KafkaStreamsBinderWordCountBranchesFunctionTests {
@ClassRule
@@ -179,22 +183,30 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
public static class WordCountProcessorApplication {
@Bean
@SuppressWarnings("unchecked")
@SuppressWarnings({"unchecked"})
public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
return input -> {
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.split()
.branch(isEnglish)
.branch(isFrench)
.branch(isSpanish)
.noDefaultBranch();
return stringKStreamMap.values().toArray(new KStream[0]);
};
}
}
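The hunk above migrates from the deprecated KStream.branch(Predicate...) call to the newer split()/BranchedKStream API, which returns a Map of branches. A minimal sketch of that API in isolation; the branch names and predicates are illustrative.

import java.util.Map;

import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

class BranchingSketch {

    // Splits a stream into named branches; the keys of the returned map are the
    // split prefix plus each branch name, e.g. "lang-english" and "lang-french".
    Map<String, KStream<Object, String>> splitByLanguage(KStream<Object, String> input) {
        return input
                .split(Named.as("lang-"))
                .branch((key, value) -> value.equals("english"), Branched.as("english"))
                .branch((key, value) -> value.equals("french"), Branched.as("french"))
                .noDefaultBranch();
    }
}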

View File

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
@@ -51,6 +52,7 @@ import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.StreamsBuilderFactoryManager;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;
import org.springframework.cloud.stream.binding.InputBindingLifecycle;
import org.springframework.cloud.stream.binding.OutputBindingLifecycle;
@@ -73,7 +75,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1", "counts-2");
"counts", "counts-1", "counts-2", "counts-5", "counts-6");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@@ -89,7 +91,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2", "counts-5", "counts-6");
}
@AfterClass
@@ -176,22 +178,23 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
@Test
public void testKstreamWordCountFunctionWithGeneratedApplicationId() throws Exception {
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer() {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.bindings.process-in-0.destination=words-5",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-5",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-5", "counts-5");
}
}
@@ -203,6 +206,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.application-id=testKstreamWordCountFunctionWithCustomProducerStreamPartitioner",
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
@@ -234,6 +238,90 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
}
@Test
public void testKstreamBinderAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.kafka.streams.auto-startup=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-3",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-3",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryManager streamsBuilderFactoryManager = context.getBean(StreamsBuilderFactoryManager.class);
assertThat(streamsBuilderFactoryManager.isAutoStartup()).isFalse();
assertThat(streamsBuilderFactoryManager.isRunning()).isFalse();
}
}
@Test
public void testKstreamIndividualBindingAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-4",
"--spring.cloud.stream.bindings.process-in-0.consumer.auto-startup=false",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-4",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean(StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean.isRunning()).isFalse();
streamsBuilderFactoryBean.start();
assertThat(streamsBuilderFactoryBean.isRunning()).isTrue();
}
}
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-6",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-6",
"--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-6");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-6");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
@@ -312,7 +400,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(key.key(), new WordCount(key.key(), value,
@@ -341,4 +429,20 @@ public class KafkaStreamsBinderWordCountFunctionTests {
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
}
}
@EnableAutoConfiguration
static class OutboundNullApplication {
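// Maps every record to a null key and value so the test above can confirm that outbound tombstones are written without errors (issue 774).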
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
return input -> input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foobar-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}

View File

@@ -41,7 +41,13 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -70,7 +76,7 @@ public class StreamToGlobalKTableFunctionTests {
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.function.bindings.process-in-0=order",
@@ -89,7 +95,44 @@ public class StreamToGlobalKTableFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"StreamToGlobalKTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-2.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("process-in-0");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicy).isEqualTo("compact");
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-2");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyY).isEqualTo("compact");
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,

View File

@@ -16,10 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
@@ -38,17 +40,28 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.StreamJoined;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -173,9 +186,8 @@ public class StreamToTableJoinFunctionTests {
}
}
private void runTest(SpringApplication app, Consumer<String, Long> consumer) {
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
@@ -187,6 +199,8 @@ public class StreamToTableJoinFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Input 1: Region per user (multiple records allowed per user).
@@ -256,13 +270,37 @@ public class StreamToTableJoinFunctionTests {
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("process-out-0");
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyOutput).isEqualTo("compact");
}
finally {
consumer.close();
}
}
// @Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest() throws Exception {
SpringApplication app = new SpringApplication(BiFunctionCountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@@ -398,6 +436,34 @@ public class StreamToTableJoinFunctionTests {
}
}
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
/**
* Tuple for a region and its associated number of clicks.
*/
@@ -439,9 +505,14 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
.toStream()));
}
@Bean
public CleanupConfig cleanupConfig() {
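// Assumed argument order CleanupConfig(cleanupOnStart, cleanupOnStop): keep local state on start, remove it when the streams app stops.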
return new CleanupConfig(false, true);
}
}
@EnableAutoConfiguration
@@ -456,9 +527,14 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
.toStream());
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
}
@EnableAutoConfiguration
@@ -475,4 +551,29 @@ public class StreamToTableJoinFunctionTests {
}
}
@EnableAutoConfiguration
public static class TrivialKTableApp {
public java.util.function.Consumer<KTable<String, String>> process() {
return inputTable -> inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}
@EnableAutoConfiguration
public static class JoinProcessor {
public BiConsumer<KStream<String, String>, KStream<String, String>> testProcessor() {
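// Joins the two inputs over a tiny 5 ms window and discards the joined value; the test above only checks that two KStream inputs can be bound (issue 701).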
return (input1Stream, input2Stream) -> input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(Duration.ofMillis(5)),
StreamJoined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
}

View File

@@ -1,270 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
/**
* @author Soby Chacko
*/
@RunWith(SpringRunner.class)
@ContextConfiguration
@DirtiesContext
public abstract class DeserializationErrorHandlerByKafkaTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"abc-DeserializationErrorHandlerByKafkaTests-In",
"xyz-DeserializationErrorHandlerByKafkaTests-In",
"DeserializationErrorHandlerByKafkaTests-out",
"error.abc-DeserializationErrorHandlerByKafkaTests-In.group",
"error.xyz-DeserializationErrorHandlerByKafkaTests-In.group",
"error.word1.groupx",
"error.word2.groupx");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@SpyBean
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate conversionDelegate;
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("fooc", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "DeserializationErrorHandlerByKafkaTests-out", "DeserializationErrorHandlerByKafkaTests-out");
}
@AfterClass
public static void tearDown() {
consumer.close();
System.clearProperty("spring.cloud.stream.kafka.streams.binder.brokers");
System.clearProperty("server.port");
System.clearProperty("spring.jmx.enabled");
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=abc-DeserializationErrorHandlerByKafkaTests-In",
"spring.cloud.stream.bindings.output.destination=DeserializationErrorHandlerByKafkaTests-Out",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq",
"spring.cloud.stream.bindings.input.group=group",
"spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByKafkaAndDlqTests
extends DeserializationErrorHandlerByKafkaTests {
@Test
@Ignore
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("abc-DeserializationErrorHandlerByKafkaTests-In");
template.sendDefault(1, null, "foobar");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1, "error.abc-DeserializationErrorHandlerByKafkaTests-In.group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.abc-DeserializationErrorHandlerByKafkaTests-In.group");
assertThat(cr.value()).isEqualTo("foobar");
assertThat(cr.partition()).isEqualTo(0); // custom partition function
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
}
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=xyz-DeserializationErrorHandlerByKafkaTests-In",
"spring.cloud.stream.bindings.output.destination=DeserializationErrorHandlerByKafkaTests-Out",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq",
"spring.cloud.stream.bindings.input.group=group",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByKafkaAndDlqPerBindingTests
extends DeserializationErrorHandlerByKafkaTests {
@Test
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("xyz-DeserializationErrorHandlerByKafkaTests-In");
template.sendDefault(1, null, "foobar");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1, "error.xyz-DeserializationErrorHandlerByKafkaTests-In.group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.xyz-DeserializationErrorHandlerByKafkaTests-In.group");
assertThat(cr.value()).isEqualTo("foobar");
assertThat(cr.partition()).isEqualTo(0); // custom partition function
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
}
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=word1,word2",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq-multi-input",
"spring.cloud.stream.bindings.input.group=groupx",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
// @checkstyle:on
public static class DeserializationByKafkaAndDlqTestsWithMultipleInputs
extends DeserializationErrorHandlerByKafkaTests {
@Test
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("word1");
template.sendDefault("foobar");
template.setDefaultTopic("word2");
template.sendDefault("foobar");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobarx",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.word1.groupx",
"error.word2.groupx");
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word1.groupx");
assertThat(cr1.value()).isEqualTo("foobar");
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word2.groupx");
assertThat(cr2.value()).isEqualTo("foobar");
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@PropertySource("classpath:/org/springframework/cloud/stream/binder/kstream/integTest-1.properties")
public static class WordCountProcessorApplication {
@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<Object, String> input) {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(5000)).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}
@Bean
public DlqPartitionFunction partitionFunction() {
return (group, rec, ex) -> 0;
}
}
}

View File

@@ -1,285 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;
/**
* @author Soby Chacko
*/
@RunWith(SpringRunner.class)
@ContextConfiguration
@DirtiesContext
public abstract class DeserializtionErrorHandlerByBinderTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"foos", "goos",
"counts-id", "error.foos.foobar-group", "error.goos.foobar-group", "error.foos1.fooz-group",
"error.foos2.fooz-group");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@SpyBean
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate conversionDelegate;
private static Consumer<Integer, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("kafka-streams-dlq-tests", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@AfterClass
public static void tearDown() {
consumer.close();
System.clearProperty("spring.cloud.stream.kafka.streams.binder.brokers");
System.clearProperty("server.port");
System.clearProperty("spring.jmx.enabled");
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.bindings.input.destination=foos",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTests",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqPartitions=1",
"spring.cloud.stream.bindings.input.group=foobar-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqTests
extends DeserializtionErrorHandlerByBinderTests {
@Test
@Ignore
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault(1, 7, "hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1,
"error.foos.foobar-group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos.foobar-group");
assertThat(cr.value()).isEqualTo("hello");
assertThat(cr.partition()).isEqualTo(0);
// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.bindings.input.destination=goos",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTests",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqPartitions=1",
"spring.cloud.stream.bindings.input.group=foobar-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqSetOnConsumerBindingTests
extends DeserializtionErrorHandlerByBinderTests {
@Test
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("goos");
template.sendDefault(1, 7, "hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1,
"error.goos.foobar-group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.goos.foobar-group");
assertThat(cr.value()).isEqualTo("hello");
assertThat(cr.partition()).isEqualTo(0);
// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.bindings.input.destination=foos1,foos2",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTestsWithMultipleInputs",
"spring.cloud.stream.bindings.input.group=fooz-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqTestsWithMultipleInputs
extends DeserializtionErrorHandlerByBinderTests {
@Test
@SuppressWarnings("unchecked")
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos1");
template.sendDefault("hello");
template.setDefaultTopic("foos2");
template.sendDefault("hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar1",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.foos1.fooz-group",
"error.foos2.fooz-group");
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos1.fooz-group");
assertThat(cr1.value().equals("hello")).isTrue();
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos2.fooz-group");
assertThat(cr2.value().equals("hello")).isTrue();
// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Integer, Long> process(KStream<Object, Product> input) {
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("id-count-store-x")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id, value));
}
}
static class Product {
Integer id;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
}
}

View File

@@ -20,11 +20,13 @@ import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.apache.kafka.streams.kstream.KStream;
import org.assertj.core.util.Lists;
import org.junit.Assert;
@@ -38,13 +40,11 @@ import org.springframework.boot.actuate.health.CompositeHealthContributor;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderHealthIndicator;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.KafkaStreamsCustomizer;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -52,7 +52,6 @@ import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
@@ -178,7 +177,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
embeddedKafka.consumeFromEmbeddedTopics(consumer, topics);
KafkaTestUtils.getRecords(consumer, 1000);
TimeUnit.SECONDS.sleep(5);
checkHealth(context, expected);
}
finally {
@@ -203,6 +202,8 @@ public class KafkaStreamsBinderHealthIndicatorTests {
SpringApplication app = new SpringApplication(KStreamApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
return app.run("--server.port=0", "--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.function.bindings.process-out-0=output",
"--spring.cloud.stream.bindings.input.destination=in",
"--spring.cloud.stream.bindings.output.destination=out",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
@@ -221,6 +222,11 @@ public class KafkaStreamsBinderHealthIndicatorTests {
SpringApplication app = new SpringApplication(AnotherKStreamApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
return app.run("--server.port=0", "--spring.jmx.enabled=false",
"--spring.cloud.function.definition=process;process2",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.function.bindings.process-out-0=output",
"--spring.cloud.stream.function.bindings.process2-in-0=input2",
"--spring.cloud.stream.function.bindings.process2-out-0=output2",
"--spring.cloud.stream.bindings.input.destination=in",
"--spring.cloud.stream.bindings.output.destination=out",
"--spring.cloud.stream.bindings.input2.destination=in2",
@@ -238,14 +244,12 @@ public class KafkaStreamsBinderHealthIndicatorTests {
+ embeddedKafka.getBrokersAsString());
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
public static class KStreamApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Object, Product> process(KStream<Object, Product> input) {
return input.filter((key, product) -> {
@Bean
public Function<KStream<Object, Product>, KStream<Object, Product>> process() {
return input -> input.filter((key, product) -> {
if (product.getId() != 123) {
throw new IllegalArgumentException();
}
@@ -255,14 +259,12 @@ public class KafkaStreamsBinderHealthIndicatorTests {
}
@EnableBinding({ KafkaStreamsProcessor.class, KafkaStreamsProcessorX.class })
@EnableAutoConfiguration
public static class AnotherKStreamApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Object, Product> process(KStream<Object, Product> input) {
return input.filter((key, product) -> {
@Bean
public Function<KStream<Object, Product>, KStream<Object, Product>> process() {
return input -> input.filter((key, product) -> {
if (product.getId() != 123) {
throw new IllegalArgumentException();
}
@@ -270,10 +272,9 @@ public class KafkaStreamsBinderHealthIndicatorTests {
});
}
@StreamListener("input2")
@SendTo("output2")
public KStream<Object, Product> process2(KStream<Object, Product> input) {
return input.filter((key, product) -> {
@Bean
public Function<KStream<Object, Product>, KStream<Object, Product>> process2() {
return input -> input.filter((key, product) -> {
if (product.getId() != 123) {
throw new IllegalArgumentException();
}
@@ -281,15 +282,18 @@ public class KafkaStreamsBinderHealthIndicatorTests {
});
}
}
public interface KafkaStreamsProcessorX {
@Input("input2")
KStream<?, ?> input();
@Output("output2")
KStream<?, ?> output();
@Bean
public StreamsBuilderFactoryBeanConfigurer customizer() {
return factoryBean -> {
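// Shut down the whole Streams client when a stream thread throws, presumably so the health-indicator checks in this class observe the failure.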
factoryBean.setKafkaStreamsCustomizer(new KafkaStreamsCustomizer() {
@Override
public void customize(KafkaStreams kafkaStreams) {
kafkaStreams.setUncaughtExceptionHandler(exception ->
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
}
});
};
}
}
@@ -306,5 +310,4 @@ public class KafkaStreamsBinderHealthIndicatorTests {
}
}
}

View File

@@ -20,15 +20,16 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
@@ -37,18 +38,15 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
@@ -98,6 +96,8 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.function.bindings.process-out-0=output",
"--spring.cloud.stream.bindings.input.destination=words1,words2",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
@@ -144,29 +144,26 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
assertThat(wordCounts.contains("{\"word\":\"foobar2\",\"count\":1}")).isTrue();
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class WordCountProcessorApplication {
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
return input -> input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.count(Materialized.as("WordCounts-tKWCWSIAP0")).toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key, value)));
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
}
static class WordCount {

View File

@@ -16,16 +16,18 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -35,10 +37,8 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -46,7 +46,6 @@ import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
@@ -88,6 +87,8 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.function.bindings.process-out-0=output",
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
@@ -121,24 +122,19 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
assertThat(cr.value()).isEqualTo(1L);
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Integer, Long> process(KStream<Object, Product> input) {
return input.filter((key, product) -> product.getId() == 123)
@Bean
public Function<KStream<Object, Product>, KStream<Integer, Long>> process() {
return input -> input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
.groupByKey(Grouped.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("id-count-store-x")).toStream()
.map((key, value) -> {
return new KeyValue<>(key.key().id, value);
});
.map((key, value) -> new KeyValue<>(key.key().id, value));
}
}
public static class Product {

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,6 +21,7 @@ import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.Properties;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -42,10 +43,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.test.util.TestUtils;
@@ -57,7 +54,6 @@ import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
@@ -66,11 +62,11 @@ import static org.assertj.core.api.Assertions.assertThat;
* @author Soby Chacko
* @author Gary Russell
*/
public class KafkaStreamsBinderTombstoneTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1");
"counts-1");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -85,7 +81,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts-1");
}
@AfterClass
@@ -93,31 +89,6 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
consumer.close();
}
@Test
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer()
throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
}
}
@Test
public void testSendToTombstone()
throws Exception {
@@ -127,24 +98,22 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words-1",
"--spring.cloud.stream.bindings.output.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.bindings.input.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.bindings.process-in-0.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-WordCountProcessorApplication-process", StreamsBuilderFactoryBean.class);
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
assertThat(kafkaStreams).isNotNull();
// Ensure that concurrency settings are mapped to number of stream task
@@ -200,26 +169,21 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class WordCountProcessorApplication {
@StreamListener
@SendTo("output")
public KStream<?, WordCount> process(
@Input("input") KStream<Object, String> input) {
@Bean
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))));
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}
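For orientation: the hunk above replaces the @StreamListener/@SendTo style processor with a java.util.function.Function bean, which is why the bindings become process-in-0/process-out-0 and the factory bean is looked up as "&stream-builder-process". A minimal sketch of that functional processor, assuming the imports and the WordCount type already present in this test:

@Bean
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {
    return input -> input
            // split each value into lower-cased words
            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
            .map((key, value) -> new KeyValue<>(value, value))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            // 5-second tumbling windows counted into a named state store
            .windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
            .count(Materialized.as("foo-WordCounts"))
            .toStream()
            // null keys on the outbound records; the JsonSerde configured on
            // process-out-0 serializes the WordCount payload
            .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
                    new Date(key.window().start()), new Date(key.window().end()))));
}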
@Bean


@@ -1,186 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.StopWatch;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
/**
* @author Soby Chacko
*/
@RunWith(SpringRunner.class)
@ContextConfiguration
@DirtiesContext
public abstract class KafkaStreamsNativeEncodingDecodingTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"decode-counts", "decode-counts-1");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@SpyBean
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate conversionDelegate;
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "decode-counts", "decode-counts-1");
}
@AfterClass
public static void tearDown() {
consumer.close();
System.clearProperty("spring.cloud.stream.kafka.streams.binder.brokers");
System.clearProperty("server.port");
System.clearProperty("spring.jmx.enabled");
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=decode-words-1",
"spring.cloud.stream.bindings.output.destination=decode-counts-1",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-abc" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class NativeEncodingDecodingEnabledTests
extends KafkaStreamsNativeEncodingDecodingTests {
@Test
public void test() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("decode-words-1");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"decode-counts-1");
assertThat(cr.value().equals("Count for foobar : 1")).isTrue();
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.cloud.stream.bindings.input.destination=decode-words",
"spring.cloud.stream.bindings.output.destination=decode-counts",
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.kafka.streams.bindings.input3.consumer.applicationId"
+ "=hello-NativeEncodingDecodingEnabledTests-xyz" })
public static class NativeEncodingDecodingDisabledTests
extends KafkaStreamsNativeEncodingDecodingTests {
@Test
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("decode-words");
template.sendDefault("foobar");
StopWatch stopWatch = new StopWatch();
stopWatch.start();
System.out.println("Starting: ");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"decode-counts");
stopWatch.stop();
System.out.println("Total time: " + stopWatch.getTotalTimeSeconds());
assertThat(cr.value().equals("Count for foobar : 1")).isTrue();
verify(conversionDelegate).serializeOnOutbound(any(KStream.class));
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<Object, String> input) {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}
}
}
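This deleted test toggled the binder's native encoding/decoding and verified whether KafkaStreamsMessageConversionDelegate was invoked. For reference, a hedged sketch of how the "disabled" variant could be driven against functional bindings (the process-in-0/process-out-0 names are an assumption here; the useNativeDecoding/useNativeEncoding properties are the same ones the deleted test used):

SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
        "--spring.jmx.enabled=false",
        "--spring.cloud.stream.bindings.process-in-0.destination=decode-words",
        "--spring.cloud.stream.bindings.process-out-0.destination=decode-counts",
        // fall back to framework message conversion instead of native Serdes
        "--spring.cloud.stream.bindings.process-in-0.consumer.useNativeDecoding=false",
        "--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=false",
        "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
    // send to "decode-words" and assert on "decode-counts" as the deleted test did
}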


@@ -16,7 +16,10 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.Consumer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
@@ -31,11 +34,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -66,6 +64,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.bindings.input.destination=foobar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
@@ -88,41 +87,14 @@ public class KafkaStreamsStateStoreIntegrationTests {
}
}
@Test
public void testKstreamStateStoreBuilderBeansDefinedInApplication() throws Exception {
SpringApplication app = new SpringApplication(StateStoreBeanApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input3.destination=foobar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input3.consumer.applicationId"
+ "=KafkaStreamsStateStoreIntegrationTests-xyzabc-123",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
Thread.sleep(2000);
receiveAndValidateFoo(context, StateStoreBeanApplication.class);
}
catch (Exception e) {
throw e;
}
finally {
context.close();
}
}
@Test
public void testSameStateStoreIsCreatedOnlyOnceWhenMultipleInputBindingsArePresent() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplicationWithMultipleInputBindings.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input1",
"--spring.cloud.stream.function.bindings.process-in-1=input2",
"--spring.cloud.stream.bindings.input1.destination=foobar",
"--spring.cloud.stream.bindings.input2.destination=hello-foobar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
@@ -170,22 +142,12 @@ public class KafkaStreamsStateStoreIntegrationTests {
assertThat(state.persistent()).isTrue();
assertThat(productCount.processed).isTrue();
}
else if (clazz.isAssignableFrom(StateStoreBeanApplication.class)) {
StateStoreBeanApplication productCount = context
.getBean(StateStoreBeanApplication.class);
WindowStore<Object, String> state = productCount.state;
assertThat(state != null).isTrue();
assertThat(state.name()).isEqualTo("mystate");
assertThat(state.persistent()).isTrue();
assertThat(productCount.processed).isTrue();
}
else {
fail("Expected assertiond did not happen");
fail("Expected assertions did not happen");
}
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@@ -193,46 +155,10 @@ public class KafkaStreamsStateStoreIntegrationTests {
boolean processed;
@StreamListener("input")
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000, retentionMs = 300000)
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(KStream<Object, Product> input) {
@Bean
public Consumer<KStream<Object, Product>> process() {
input.process(() -> new Processor<Object, Product>() {
@Override
public void init(ProcessorContext processorContext) {
state = (WindowStore) processorContext.getStateStore("mystate");
}
@Override
public void process(Object s, Product product) {
processed = true;
}
@Override
public void close() {
if (state != null) {
state.close();
}
}
}, "mystate");
}
}
@EnableBinding(KafkaStreamsProcessorZ.class)
@EnableAutoConfiguration
public static class StateStoreBeanApplication {
WindowStore<Object, String> state;
boolean processed;
@StreamListener("input3")
@SuppressWarnings({"unchecked" })
public void process(KStream<Object, Product> input) {
input.process(() -> new Processor<Object, Product>() {
return input -> input.process(() -> new Processor<Object, Product>() {
@Override
public void init(ProcessorContext processorContext) {
@@ -257,13 +183,11 @@ public class KafkaStreamsStateStoreIntegrationTests {
public StoreBuilder mystore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("mystate",
3L, 3, 3L, false), Serdes.String(),
Duration.ofMillis(3), Duration.ofMillis(3), false), Serdes.String(),
Serdes.String());
}
}
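The mystore() change above tracks the Kafka Streams API: the long-based Stores.persistentWindowStore(name, retentionMs, numSegments, windowSizeMs, retainDuplicates) overload was replaced by the Duration-based one. A minimal sketch of the updated builder, reusing the store name and serdes from the hunk:

@Bean
public StoreBuilder<WindowStore<String, String>> mystore() {
    return Stores.windowStoreBuilder(
            // retention period and window size are now Durations; the old
            // numSegments argument has no equivalent in this overload
            Stores.persistentWindowStore("mystate",
                    Duration.ofMillis(3), Duration.ofMillis(3), false),
            Serdes.String(), Serdes.String());
}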
@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class ProductCountApplicationWithMultipleInputBindings {
@@ -271,33 +195,41 @@ public class KafkaStreamsStateStoreIntegrationTests {
boolean processed;
@StreamListener
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000, retentionMs = 300000)
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(@Input("input1")KStream<Object, Product> input, @Input("input2")KStream<Object, Product> input2) {
@Bean
public BiConsumer<KStream<Object, Product>, KStream<Object, Product>> process() {
input.process(() -> new Processor<Object, Product>() {
return (input, input2) -> {
@Override
public void init(ProcessorContext processorContext) {
state = (WindowStore) processorContext.getStateStore("mystate");
}
input.process(() -> new Processor<Object, Product>() {
@Override
public void process(Object s, Product product) {
processed = true;
}
@Override
public void close() {
if (state != null) {
state.close();
@Override
public void init(ProcessorContext processorContext) {
state = (WindowStore) processorContext.getStateStore("mystate");
}
}
}, "mystate");
//simple use of input2, we are not using input2 for anything other than triggering some test behavior.
input2.foreach((key, value) -> { });
@Override
public void process(Object s, Product product) {
processed = true;
}
@Override
public void close() {
if (state != null) {
state.close();
}
}
}, "mystate");
//simple use of input2, we are not using input2 for anything other than triggering some test behavior.
input2.foreach((key, value) -> { });
};
}
@Bean
public StoreBuilder mystore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("mystate",
Duration.ofMillis(3), Duration.ofMillis(3), false), Serdes.String(),
Serdes.String());
}
}
@@ -314,25 +246,4 @@ public class KafkaStreamsStateStoreIntegrationTests {
}
}
interface KafkaStreamsProcessorX {
@Input("input")
KStream<?, ?> input();
}
interface KafkaStreamsProcessorY {
@Input("input1")
KStream<?, ?> input1();
@Input("input2")
KStream<?, ?> input2();
}
interface KafkaStreamsProcessorZ {
@Input("input3")
KStream<?, ?> input3();
}
}
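The KafkaStreamsProcessorX/Y/Z binding interfaces removed above are no longer needed: with the functional model the bindings are derived from the function signature, and spring.cloud.stream.function.bindings can rename them (as the process-in-0=input1 and process-in-1=input2 properties earlier in this file do). A hedged sketch of the two-input signature that replaces the @Input-annotated listener, with the state-store logic omitted for brevity:

@Bean
public BiConsumer<KStream<Object, Product>, KStream<Object, Product>> process() {
    return (input1, input2) -> {
        // input1 would drive the stateful processing against the "mystate" store
        input1.foreach((key, value) -> { });
        // input2 exists only to exercise the multi-input binding path
        input2.foreach((key, value) -> { });
    };
}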


@@ -16,15 +16,17 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -34,10 +36,8 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.test.util.TestUtils;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
@@ -48,7 +48,6 @@ import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
@@ -90,6 +89,8 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.function.bindings.process-out-0=output",
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
@@ -104,11 +105,11 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
receiveAndValidateFoo();
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-ProductCountApplication-process", StreamsBuilderFactoryBean.class);
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean,
"cleanupConfig", CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isFalse();
assertThat(cleanup.cleanupOnStop()).isTrue();
assertThat(cleanup.cleanupOnStop()).isFalse();
}
finally {
context.close();
@@ -127,19 +128,16 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
assertThat(cr.value().contains("Count for product with ID 123: 1")).isTrue();
}
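As the hunks above suggest, the StreamsBuilderFactoryBean is now registered under "stream-builder-" plus the function name instead of "stream-builder-<ClassName>-process", and spring.cloud.stream.function.bindings aliases the generated binding names back to the ones the old properties used. A short sketch of both pieces together (argument values copied from this test; the "&" prefix addresses the FactoryBean itself):

ConfigurableApplicationContext context = app.run("--server.port=0",
        "--spring.jmx.enabled=false",
        // alias the generated function bindings to the names the old test used
        "--spring.cloud.stream.function.bindings.process-in-0=input",
        "--spring.cloud.stream.function.bindings.process-out-0=output",
        "--spring.cloud.stream.bindings.input.destination=foos",
        "--spring.cloud.stream.bindings.output.destination=counts-id",
        "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString());
// the factory bean name is derived from the function name ("process")
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
        .getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);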
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Integer, String> process(KStream<Object, Product> input) {
return input.filter((key, product) -> product.getId() == 123)
@Bean
public Function<KStream<Object, Product>, KStream<Integer, String>> process() {
return input -> input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
.groupByKey(Grouped.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id,
"Count for product with ID 123: " + value));


@@ -1,95 +0,0 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.stereotype.Component;
import static org.assertj.core.api.Assertions.assertThat;
public class MultiProcessorsWithSameNameAndBindingTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testBinderStartsSuccessfullyWhenTwoProcessorsWithSameNamesAndBindingsPresent() {
SpringApplication app = new SpringApplication(
MultiProcessorsWithSameNameAndBindingTests.WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.input-1.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean1 = context
.getBean("&stream-builder-Foo-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean1).isNotNull();
StreamsBuilderFactoryBean streamsBuilderFactoryBean2 = context
.getBean("&stream-builder-Bar-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean2).isNotNull();
}
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
static class WordCountProcessorApplication {
@Component
static class Foo {
@StreamListener
public void process(@Input("input-1") KStream<Object, String> input) {
}
}
//Second class with a stub processor that has the same name as above ("process")
@Component
static class Bar {
@StreamListener
public void process(@Input("input-1") KStream<Object, String> input) {
}
}
}
interface KafkaStreamsProcessorX {
@Input("input-1")
KStream<?, ?> input1();
}
}
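This deleted test guarded against two @StreamListener methods sharing the same method name. With the functional model that concern goes away: each function bean has its own name and is activated through spring.cloud.stream.function.definition, so there is nothing to collide. A hedged sketch (processFoo/processBar are made-up names for illustration):

@EnableAutoConfiguration
static class TwoProcessorsApplication {

    @Bean
    public java.util.function.Consumer<KStream<Object, String>> processFoo() {
        return input -> input.foreach((key, value) -> { });
    }

    @Bean
    public java.util.function.Consumer<KStream<Object, String>> processBar() {
        return input -> input.foreach((key, value) -> { });
    }
}

// activated with --spring.cloud.stream.function.definition=processFoo;processBar,
// each function getting its own StreamsBuilderFactoryBean named after the function
// (compare the "&stream-builder-process" lookups elsewhere in this change).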


@@ -1,146 +0,0 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class OutboundValueNullSkippedConversionTest {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(
OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class OutboundNullApplication {
@StreamListener
@SendTo("output")
public KStream<?, KafkaStreamsBinderWordCountIntegrationTests.WordCount> process(
@Input("input") KStream<Object, String> input) {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}


@@ -1,184 +0,0 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.io.IOException;
import java.util.Map;
import java.util.Random;
import java.util.UUID;
import com.example.Sensor;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.integration.utils.TestAvroSerializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.Message;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.MimeTypeUtils;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class PerRecordAvroContentTypeTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"received-sensors");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, byte[]> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("avro-ct-test",
"false", embeddedKafka);
// Receive the data as byte[]
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, byte[]> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "received-sensors");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testPerRecordAvroConentTypeAndVerifySerialization() throws Exception {
SpringApplication app = new SpringApplication(SensorCountAvroApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"--spring.cloud.stream.bindings.input.destination=sensors",
"--spring.cloud.stream.bindings.output.destination=received-sensors",
"--spring.cloud.stream.bindings.output.contentType=application/avro",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=per-record-avro-contentType-test",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
// Use a custom avro test serializer
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
TestAvroSerializer.class);
DefaultKafkaProducerFactory<Integer, Sensor> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, Sensor> template = new KafkaTemplate<>(pf, true);
Random random = new Random();
Sensor sensor = new Sensor();
sensor.setId(UUID.randomUUID().toString() + "-v1");
sensor.setAcceleration(random.nextFloat() * 10);
sensor.setVelocity(random.nextFloat() * 100);
sensor.setTemperature(random.nextFloat() * 50);
// Send with avro content type set.
Message<?> message = MessageBuilder.withPayload(sensor)
.setHeader("contentType", "application/avro").build();
template.setDefaultTopic("sensors");
template.send(message);
// The serialized byte[] sent above is received by the binding process and
// deserialized using the avro converter.
// Then finally, the data will be output to a return topic as byte[]
// (using the same avro converter).
// Receive the byte[] from return topic
ConsumerRecord<String, byte[]> cr = KafkaTestUtils
.getSingleRecord(consumer, "received-sensors");
final byte[] value = cr.value();
// Convert the byte[] received back to avro object and verify that it is
// the same as the one we sent ^^.
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter();
Message<?> receivedMessage = MessageBuilder.withPayload(value)
.setHeader("contentType",
MimeTypeUtils.parseMimeType("application/avro"))
.build();
Sensor messageConverted = (Sensor) avroSchemaMessageConverter
.fromMessage(receivedMessage, Sensor.class);
assertThat(messageConverted).isEqualTo(sensor);
}
finally {
pf.destroy();
}
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class SensorCountAvroApplication {
@StreamListener
@SendTo("output")
public KStream<?, Sensor> process(@Input("input") KStream<Object, Sensor> input) {
// return the same Sensor object unchanged so that we can do test
// verifications
return input.map(KeyValue::new);
}
@Bean
public MessageConverter sensorMessageConverter() throws IOException {
return new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
}
}
}
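For reference, the core of the deleted verification above in condensed form: the raw byte[] read back from the topic is wrapped in a Message carrying an application/avro content type and converted back to the Sensor object with AvroSchemaMessageConverter, then compared with the Sensor that was sent:

AvroSchemaMessageConverter converter = new AvroSchemaMessageConverter();
Message<?> received = MessageBuilder.withPayload(cr.value())
        .setHeader("contentType", MimeTypeUtils.parseMimeType("application/avro"))
        .build();
// convert the byte[] payload back into the Avro-generated Sensor and compare
Sensor roundTripped = (Sensor) converter.fromMessage(received, Sensor.class);
assertThat(roundTripped).isEqualTo(sensor);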


@@ -1,383 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class StreamToGlobalKTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"enriched-order");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<Long, EnrichedOrder> consumer;
@Test
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(
StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.input-x.destination=customers",
"--spring.cloud.stream.bindings.input-y.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToGlobalKTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("input");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicy).isEqualTo("compact");
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-y");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyY).isEqualTo("compact");
Map<String, Object> senderPropsCustomer = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(
senderPropsCustomer);
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer,
true);
template.setDefaultTopic("customers");
for (long i = 0; i < 5; i++) {
final Customer customer = new Customer();
customer.setName("customer-" + i);
template.sendDefault(i, customer);
}
Map<String, Object> senderPropsProduct = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(
senderPropsProduct);
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct,
true);
productTemplate.setDefaultTopic("products");
for (long i = 0; i < 5; i++) {
final Product product = new Product();
product.setName("product-" + i);
productTemplate.sendDefault(i, product);
}
Map<String, Object> senderPropsOrder = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(
senderPropsOrder);
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
orderTemplate.setDefaultTopic("orders");
for (long i = 0; i < 5; i++) {
final Order order = new Order();
order.setCustomerId(i);
order.setProductId(i);
orderTemplate.sendDefault(i, order);
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
JsonDeserializer.class);
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"org.springframework.cloud.stream.binder.kafka.streams.integration."
+ "StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "enriched-order");
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<Long, EnrichedOrder>> enrichedOrders = new ArrayList<>();
do {
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<Long, EnrichedOrder> record : records) {
enrichedOrders.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < 5 && (System.currentTimeMillis() - start) < 30000);
assertThat(count == 5).isTrue();
assertThat(enrichedOrders.size() == 5).isTrue();
enrichedOrders.sort(Comparator.comparing(o -> o.key));
for (int i = 0; i < 5; i++) {
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders
.get(i);
assertThat(enrichedOrderKeyValue.key == i).isTrue();
EnrichedOrder enrichedOrder = enrichedOrderKeyValue.value;
assertThat(enrichedOrder.getOrder().customerId == i).isTrue();
assertThat(enrichedOrder.getOrder().productId == i).isTrue();
assertThat(enrichedOrder.getCustomer().name.equals("customer-" + i))
.isTrue();
assertThat(enrichedOrder.getProduct().name.equals("product-" + i))
.isTrue();
}
pfCustomer.destroy();
pfProduct.destroy();
pfOrder.destroy();
consumer.close();
}
finally {
context.close();
}
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("input-x")
GlobalKTable<?, ?> inputX();
@Input("input-y")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
public static class OrderEnricherApplication {
@StreamListener
@SendTo("output")
public KStream<Long, EnrichedOrder> process(
@Input("input") KStream<Long, Order> ordersStream,
@Input("input-x") GlobalKTable<Long, Customer> customers,
@Input("input-y") GlobalKTable<Long, Product> products) {
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(
customers, (orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order));
return customerOrdersStream.join(products,
(orderId, customerOrder) -> customerOrder.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
});
}
}
static class Order {
long customerId;
long productId;
public long getCustomerId() {
return customerId;
}
public void setCustomerId(long customerId) {
this.customerId = customerId;
}
public long getProductId() {
return productId;
}
public void setProductId(long productId) {
this.productId = productId;
}
}
static class Customer {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class Product {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class EnrichedOrder {
Product product;
Customer customer;
Order order;
public Product getProduct() {
return product;
}
public void setProduct(Product product) {
this.product = product;
}
public Customer getCustomer() {
return customer;
}
public void setCustomer(Customer customer) {
this.customer = customer;
}
public Order getOrder() {
return order;
}
public void setOrder(Order order) {
this.order = order;
}
}
private static class CustomerOrder {
private final Customer customer;
private final Order order;
CustomerOrder(final Customer customer, final Order order) {
this.customer = customer;
this.order = order;
}
long productId() {
return order.getProductId();
}
}
}
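This deleted test wired one KStream and two GlobalKTables through a three-way @Input signature. In the functional model the same shape is typically written as a curried Function; a hedged sketch, with the join logic copied from the deleted listener above and the curried-function binding assumed:

@Bean
public Function<KStream<Long, Order>,
        Function<GlobalKTable<Long, Customer>,
                Function<GlobalKTable<Long, Product>, KStream<Long, EnrichedOrder>>>> process() {
    return orders -> customers -> products -> {
        // first join: attach the customer to each order by customerId
        KStream<Long, CustomerOrder> customerOrders = orders.join(customers,
                (orderId, order) -> order.getCustomerId(),
                (order, customer) -> new CustomerOrder(customer, order));
        // second join: attach the product and assemble the enriched result
        return customerOrders.join(products,
                (orderId, customerOrder) -> customerOrder.productId(),
                (customerOrder, product) -> {
                    EnrichedOrder enrichedOrder = new EnrichedOrder();
                    enrichedOrder.setProduct(product);
                    enrichedOrder.setCustomer(customerOrder.customer);
                    enrichedOrder.setOrder(customerOrder.order);
                    return enrichedOrder;
                });
    };
}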


@@ -1,497 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class StreamToTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"output-topic-1", "output-topic-2", "user-clicks-2", "user-regions-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testStreamToTable() throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-1",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-1",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-1",
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("output");
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyOutput).isEqualTo("compact");
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-1");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 2: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L), new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L), new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L), new KeyValue<>("eve", 78L),
new KeyValue<>("alice", 40L), new KeyValue<>("fang", 99L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-1");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 109L),
new KeyValue<>("asia", 124L));
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest()
throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-2");
// Produce data first to the input topic to test the startOffset setting on the
// binding (which is set to earliest below).
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-2");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
// Thread.sleep(10000L);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=helloxyz-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Thread.sleep(1000L);
// Input 2: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-2");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks1 = Arrays.asList(
new KeyValue<>("bob", 4L), new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L), new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L), new KeyValue<>("fang", 99L));
for (KeyValue<String, Long> keyValue : userClicks1) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 56L),
new KeyValue<>("asia", 124L),
// The 1000 total clicks from the alice entries that were already in the
// topic before the consumer started. Since the binding's startOffset is
// set to earliest, they are read, but the join cannot associate them
// with a valid region, hence UNKNOWN.
new KeyValue<>("UNKNOWN", 1000L));
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
System.out.println("foobar: " + record.key() + "::" + record.value());
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
// TODO: Matched count is 3 and not 4 (expectedClicksPerRegion.size()) when running with full suite. Investigate why.
// TODO: This behavior is only observed after the Spring Kafka upgrade to 2.5.0 and kafka client to 2.5.
// TODO: Note that the test passes fine as a single test.
assertThat(count).matches(
matchedCount -> matchedCount == expectedClicksPerRegion.size() - 1 || matchedCount == expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).containsAnyElementsOf(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(
TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
public static class CountClicksPerRegionApplication {
@StreamListener
@SendTo("output")
public KStream<String, Long> process(
@Input("input") KStream<String, Long> userClicksStream,
@Input("input-x") KTable<String, String> userRegionsTable) {
return userClicksStream
.leftJoin(userRegionsTable,
(clicks, region) -> new RegionWithClicks(
region == null ? "UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(
regionWithClicks.getRegion(), regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream();
}
//This forces the state stores to be cleaned up before running the test.
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(true, false);
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class TrivialKTableApp {
@StreamListener("input-y")
public void process(KTable<String, String> inputTable) {
inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}
interface KafkaStreamsProcessorX extends KafkaStreamsProcessor {
@Input("input-x")
KTable<?, ?> inputX();
}
interface KafkaStreamsProcessorY {
@Input("input-y")
KTable<?, ?> inputY();
}
/**
* Tuple for a region and its associated number of clicks.
*/
private static final class RegionWithClicks {
private final String region;
private final long clicks;
RegionWithClicks(String region, long clicks) {
if (region == null || region.isEmpty()) {
throw new IllegalArgumentException("region must be set");
}
if (clicks < 0) {
throw new IllegalArgumentException("clicks must not be negative");
}
this.region = region;
this.clicks = clicks;
}
public String getRegion() {
return region;
}
public long getClicks() {
return clicks;
}
}
interface BindingsForTwoKStreamJoinTest {
String INPUT_1 = "input_1";
String INPUT_2 = "input_2";
@Input(INPUT_1)
KStream<String, String> input_1();
@Input(INPUT_2)
KStream<String, String> input_2();
}
@EnableBinding(BindingsForTwoKStreamJoinTest.class)
@EnableAutoConfiguration
public static class JoinProcessor {
@StreamListener
public void testProcessor(
@Input(BindingsForTwoKStreamJoinTest.INPUT_1) KStream<String, String> input1Stream,
@Input(BindingsForTwoKStreamJoinTest.INPUT_2) KStream<String, String> input2Stream) {
input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
Joined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
}

View File

@@ -1,236 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
*/
public class WordCountMultipleBranchesIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "foo", "bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "foo", "bar");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamWordCountWithStringInputAndPojoOuput() throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value().contains("\"word\":\"english\",\"count\":1")).isTrue();
template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value().contains("\"word\":\"french\",\"count\":2")).isTrue();
template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value().contains("\"word\":\"spanish\",\"count\":3")).isTrue();
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@StreamListener("input")
@SendTo({ "output1", "output2", "output3" })
@SuppressWarnings("unchecked")
public KStream<?, WordCount>[] process(KStream<Object, String> input) {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value).windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-multi")).toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
}
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
}

View File

@@ -1,63 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration.utils;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
/**
* Custom avro serializer intended to be used for testing only.
*
* @param <S> Target type to serialize
* @author Soby Chacko
*/
public class TestAvroSerializer<S> implements Serializer<S> {
public TestAvroSerializer() {
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public byte[] serialize(String topic, S data) {
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, "application/avro");
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = avroSchemaMessageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
}
}

View File

@@ -1,74 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.UUID;
import com.example.Sensor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.Test;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.converter.MessageConverter;
import static org.assertj.core.api.Assertions.assertThat;
/**
* Refer {@link MessageConverterDelegateSerde} for motivations.
*
* @author Soby Chacko
*/
public class MessageConverterDelegateSerdeTest {
@Test
@SuppressWarnings("unchecked")
public void testCompositeNonNativeSerdeUsingAvroContentType() {
Random random = new Random();
Sensor sensor = new Sensor();
sensor.setId(UUID.randomUUID().toString() + "-v1");
sensor.setAcceleration(random.nextFloat() * 10);
sensor.setVelocity(random.nextFloat() * 100);
sensor.setTemperature(random.nextFloat() * 50);
List<MessageConverter> messageConverters = new ArrayList<>();
messageConverters.add(new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl()));
CompositeMessageConverterFactory compositeMessageConverterFactory = new CompositeMessageConverterFactory(
messageConverters, new ObjectMapper());
MessageConverterDelegateSerde messageConverterDelegateSerde = new MessageConverterDelegateSerde(
compositeMessageConverterFactory.getMessageConverterForAllRegistered());
Map<String, Object> configs = new HashMap<>();
configs.put("valueClass", Sensor.class);
configs.put("contentType", "application/avro");
messageConverterDelegateSerde.configure(configs, false);
final byte[] serialized = messageConverterDelegateSerde.serializer().serialize(null,
sensor);
final Object deserialized = messageConverterDelegateSerde.deserializer()
.deserialize(null, serialized);
assertThat(deserialized).isEqualTo(sensor);
}
}

View File

@@ -1,11 +0,0 @@
{
"namespace" : "com.example",
"type" : "record",
"name" : "Sensor",
"fields" : [
{"name":"id","type":"string"},
{"name":"temperature", "type":"float", "default":0.0},
{"name":"acceleration", "type":"float","default":0.0},
{"name":"velocity","type":"float","default":0.0}
]
}

View File

@@ -1,6 +0,0 @@
spring.cloud.stream.bindings.input.destination=DeserializationErrorHandlerByKafkaTests-In
spring.cloud.stream.bindings.output.destination=DeserializationErrorHandlerByKafkaTests-Out
spring.cloud.stream.bindings.output.contentType=application/json
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.7-SNAPSHOT</version>
<version>4.0.0-SNAPSHOT</version>
</parent>
<dependencies>
@@ -75,6 +75,11 @@
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
</dependency>
<dependency>
<groupId>org.awaitility</groupId>
<artifactId>awaitility</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>

View File

@@ -0,0 +1,29 @@
/*
* Copyright 2022-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import org.springframework.boot.actuate.health.HealthIndicator;
/**
* Marker interface used for custom KafkaBinderHealth indicator implementations.
*
* @author Soby Chacko
* @since 3.2.2
*/
public interface KafkaBinderHealth extends HealthIndicator {
}
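A brief usage sketch (not part of this changeset): an application can supply its own binder health check by exposing a bean of this type, in which case the binder's auto-configured KafkaBinderHealthIndicator backs off (see the @ConditionalOnMissingBean(KafkaBinderHealth.class) condition further down in this comparison). The class and bean names below are invented for illustration.

import org.springframework.boot.actuate.health.Health;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderHealth;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CustomBinderHealthConfiguration {

	// Exposing a KafkaBinderHealth bean replaces the binder's default health indicator.
	@Bean
	public KafkaBinderHealth customKafkaBinderHealth() {
		return new KafkaBinderHealth() {

			@Override
			public Health health() {
				// Replace with real checks against the broker / consumer groups.
				return Health.up().withDetail("source", "custom-binder-health").build();
			}

		};
	}

}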

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2016-2021 the original author or authors.
* Copyright 2016-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -35,7 +35,6 @@ import org.apache.kafka.common.PartitionInfo;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.actuate.health.StatusAggregator;
import org.springframework.kafka.core.ConsumerFactory;
@@ -55,7 +54,7 @@ import org.springframework.scheduling.concurrent.CustomizableThreadFactory;
* @author Chukwubuikem Ume-Ugwa
* @author Taras Danylchuk
*/
public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBean {
public class KafkaBinderHealthIndicator implements KafkaBinderHealth, DisposableBean {
private static final int DEFAULT_TIMEOUT = 60;
@@ -73,7 +72,7 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBe
private boolean considerDownWhenAnyPartitionHasNoLeader;
public KafkaBinderHealthIndicator(KafkaMessageChannelBinder binder,
ConsumerFactory<?, ?> consumerFactory) {
ConsumerFactory<?, ?> consumerFactory) {
this.binder = binder;
this.consumerFactory = consumerFactory;
}
@@ -201,10 +200,12 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBe
for (AbstractMessageListenerContainer<?, ?> container : listenerContainers) {
Map<String, Object> containerDetails = new HashMap<>();
boolean isRunning = container.isRunning();
if (!isRunning) {
boolean isOk = container.isInExpectedState();
if (!isOk) {
status = Status.DOWN;
}
containerDetails.put("isRunning", isRunning);
containerDetails.put("isStoppedAbnormally", !isRunning && !isOk);
containerDetails.put("isPaused", container.isContainerPaused());
containerDetails.put("listenerId", container.getListenerId());
containerDetails.put("groupId", container.getGroupId());
@@ -217,7 +218,7 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBe
}
@Override
public void destroy() throws Exception {
public void destroy() {
executor.shutdown();
}

View File

@@ -32,6 +32,7 @@ import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.BiFunction;
import java.util.function.Predicate;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
@@ -103,12 +104,15 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;
import org.springframework.kafka.listener.ConsumerProperties;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.ExponentialBackOffWithMaxRetries;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.ProducerListener;
@@ -675,7 +679,7 @@ public class KafkaMessageChannelBinder extends
concurrency = extendedConsumerProperties.getConcurrency();
}
resetOffsetsForAutoRebalance(extendedConsumerProperties, consumerFactory, containerProperties);
containerProperties.setAuthorizationExceptionRetryInterval(this.configurationProperties.getAuthorizationExceptionRetryInterval());
containerProperties.setAuthExceptionRetryInterval(this.configurationProperties.getAuthorizationExceptionRetryInterval());
@SuppressWarnings("rawtypes")
final ConcurrentMessageListenerContainer<?, ?> messageListenerContainer = new ConcurrentMessageListenerContainer(
consumerFactory, containerProperties) {
@@ -735,14 +739,23 @@ public class KafkaMessageChannelBinder extends
kafkaMessageDrivenChannelAdapter.setApplicationContext(applicationContext);
ErrorInfrastructure errorInfrastructure = registerErrorInfrastructure(destination,
consumerGroup, extendedConsumerProperties);
ListenerContainerCustomizer<?> customizer = getContainerCustomizer();
if (!extendedConsumerProperties.isBatchMode()
&& extendedConsumerProperties.getMaxAttempts() > 1
&& transMan == null) {
kafkaMessageDrivenChannelAdapter
.setRetryTemplate(buildRetryTemplate(extendedConsumerProperties));
kafkaMessageDrivenChannelAdapter
.setRecoveryCallback(errorInfrastructure.getRecoverer());
if (!(customizer instanceof ListenerContainerWithDlqAndRetryCustomizer)
|| ((ListenerContainerWithDlqAndRetryCustomizer) customizer)
.retryAndDlqInBinding(destination.getName(), group)) {
kafkaMessageDrivenChannelAdapter
.setRetryTemplate(buildRetryTemplate(extendedConsumerProperties));
kafkaMessageDrivenChannelAdapter
.setRecoveryCallback(errorInfrastructure.getRecoverer());
}
if (!extendedConsumerProperties.getExtension().isEnableDlq()) {
messageListenerContainer.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(0L, 0L)));
}
}
else if (!extendedConsumerProperties.isBatchMode() && transMan != null) {
messageListenerContainer.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(
@@ -775,17 +788,49 @@ public class KafkaMessageChannelBinder extends
else {
kafkaMessageDrivenChannelAdapter.setErrorChannel(errorInfrastructure.getErrorChannel());
}
this.getContainerCustomizer().configure(messageListenerContainer, destination.getName(), group);
final String commonErrorHandlerBeanName = extendedConsumerProperties.getExtension().getCommonErrorHandlerBeanName();
if (StringUtils.hasText(commonErrorHandlerBeanName)) {
final CommonErrorHandler commonErrorHandler = getApplicationContext().getBean(commonErrorHandlerBeanName,
CommonErrorHandler.class);
messageListenerContainer.setCommonErrorHandler(commonErrorHandler);
}
if (customizer instanceof ListenerContainerWithDlqAndRetryCustomizer) {
BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver = createDestResolver(
extendedConsumerProperties.getExtension());
BackOff createBackOff = extendedConsumerProperties.getMaxAttempts() > 1
? createBackOff(extendedConsumerProperties)
: null;
((ListenerContainerWithDlqAndRetryCustomizer) customizer)
.configure(messageListenerContainer, destination.getName(), consumerGroup, destinationResolver,
createBackOff);
}
else {
((ListenerContainerCustomizer<Object>) customizer)
.configure(messageListenerContainer, destination.getName(), consumerGroup);
}
this.ackModeInfo.put(destination, messageListenerContainer.getContainerProperties().getAckMode());
return kafkaMessageDrivenChannelAdapter;
}
private BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> createDestResolver(
KafkaConsumerProperties extension) {
Integer dlqPartitions = extension.getDlqPartitions();
if (extension.isEnableDlq()) {
return (rec, ex) -> dlqPartitions == null || dlqPartitions > 1
? new TopicPartition(extension.getDlqName(), rec.partition())
: new TopicPartition(extension.getDlqName(), 0);
}
else {
return null;
}
}
/**
* Configure a {@link BackOff} for the after rollback processor, based on the consumer
* retry properties. If retry is disabled, return a {@link BackOff} that disables
* retry. Otherwise calculate the {@link ExponentialBackOff#setMaxElapsedTime(long)}
* so that the {@link BackOff} stops after the configured
* {@link ExtendedConsumerProperties#getMaxAttempts()}.
* retry. Otherwise use an {@link ExponentialBackOffWithMaxRetries}.
* @param extendedConsumerProperties the properties.
* @return the backoff.
*/
@@ -797,20 +842,10 @@ public class KafkaMessageChannelBinder extends
return new FixedBackOff(0L, 0L);
}
int initialInterval = extendedConsumerProperties.getBackOffInitialInterval();
double multiplier = extendedConsumerProperties.getBackOffMultiplier();
int maxInterval = extendedConsumerProperties.getBackOffMaxInterval();
ExponentialBackOff backOff = new ExponentialBackOff(initialInterval, multiplier);
ExponentialBackOff backOff = new ExponentialBackOffWithMaxRetries(maxAttempts - 1);
backOff.setInitialInterval(initialInterval);
backOff.setMaxInterval(maxInterval);
long maxElapsed = extendedConsumerProperties.getBackOffInitialInterval();
double accum = maxElapsed;
for (int i = 1; i < maxAttempts - 1; i++) {
accum = accum * multiplier;
if (accum > maxInterval) {
accum = maxInterval;
}
maxElapsed += accum;
}
backOff.setMaxElapsedTime(maxElapsed);
return backOff;
}
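For orientation only (the class name and values below are invented and this is not the binder's exact code path): with the change above, a consumer configured with maxAttempts=3 now gets a back-off capped by retry count rather than by a pre-computed max elapsed time.

import org.springframework.kafka.support.ExponentialBackOffWithMaxRetries;
import org.springframework.util.backoff.BackOffExecution;

public class BackOffSketch {

	public static void main(String[] args) {
		// maxAttempts = 3 -> at most 2 retries after the initial delivery attempt.
		ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(2);
		backOff.setInitialInterval(100);   // backOffInitialInterval
		backOff.setMultiplier(2.0);        // backOffMultiplier
		backOff.setMaxInterval(1000);      // backOffMaxInterval

		BackOffExecution execution = backOff.start();
		long interval;
		while ((interval = execution.nextBackOff()) != BackOffExecution.STOP) {
			System.out.println("retry after " + interval + " ms"); // prints 100, then 200, then stops
		}
	}

}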

View File

@@ -0,0 +1,71 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.function.BiFunction;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.lang.Nullable;
import org.springframework.util.backoff.BackOff;
/**
* An extension of {@link ListenerContainerCustomizer} that provides access to dead letter
* metadata.
*
* @author Gary Russell
* @since 3.2
*
*/
public interface ListenerContainerWithDlqAndRetryCustomizer
extends ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> {
@Override
default void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group) {
}
/**
* Configure the container.
* @param container the container.
* @param destinationName the destination name.
* @param group the group.
* @param dlqDestinationResolver a destination resolver for the dead letter topic (if
* enableDlq).
* @param backOff the backOff using retry properties (if configured).
* @see #retryAndDlqInBinding(String, String)
*/
void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group,
@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff);
/**
* Return false to move retries and DLQ from the binding to a customized error handler
* using the retry metadata and/or a {@code DeadLetterPublishingRecoverer} when
* configured via
* {@link #configure(AbstractMessageListenerContainer, String, String, BiFunction, BackOff)}.
* @param destinationName the destination name.
* @param group the group.
* @return false to disable retries and DLQ in the binding.
*/
default boolean retryAndDlqInBinding(String destinationName, String group) {
return true;
}
}
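A possible usage sketch (illustrative, not part of this changeset): a bean of this type can move retries and the DLQ out of the binding and into a Spring Kafka error handler built from the binder-supplied destination resolver and back-off. The configuration class below is invented and assumes a KafkaOperations (for example a KafkaTemplate) bean is available.

import java.util.function.BiFunction;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;

import org.springframework.cloud.stream.binder.kafka.ListenerContainerWithDlqAndRetryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.lang.Nullable;
import org.springframework.util.backoff.BackOff;

@Configuration
public class DlqCustomizerConfiguration {

	@Bean
	ListenerContainerWithDlqAndRetryCustomizer dlqCustomizer(KafkaOperations<?, ?> template) {
		return new ListenerContainerWithDlqAndRetryCustomizer() {

			@Override
			public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
					String group,
					@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
					@Nullable BackOff backOff) {
				// Use the binder-supplied resolver and back-off to build an error handler
				// that publishes to the DLQ once the configured retries are exhausted.
				if (dlqDestinationResolver != null && backOff != null) {
					container.setCommonErrorHandler(new DefaultErrorHandler(
							new DeadLetterPublishingRecoverer(template, dlqDestinationResolver), backOff));
				}
			}

			@Override
			public boolean retryAndDlqInBinding(String destinationName, String group) {
				return false; // move retries and DLQ out of the binding
			}

		};
	}

}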

View File

@@ -22,14 +22,12 @@ import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.StreamMessageConverter;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderMetrics;
import org.springframework.cloud.stream.binder.kafka.KafkaBindingRebalanceListener;
@@ -81,22 +79,12 @@ import org.springframework.messaging.converter.MessageConverter;
* @author Artem Bilan
* @author Aldo Sinanaj
*/
@Configuration
@Configuration(proxyBeanMethods = false)
@ConditionalOnMissingBean(Binder.class)
@Import({ KafkaAutoConfiguration.class, KafkaBinderHealthIndicatorConfiguration.class })
@EnableConfigurationProperties({ KafkaExtendedBindingProperties.class })
public class KafkaBinderConfiguration {
@Autowired
private KafkaExtendedBindingProperties kafkaExtendedBindingProperties;
@SuppressWarnings("rawtypes")
@Autowired
private ProducerListener producerListener;
@Autowired
private KafkaProperties kafkaProperties;
@Bean
KafkaBinderConfigurationProperties configurationProperties(
KafkaProperties kafkaProperties) {
@@ -106,12 +94,12 @@ public class KafkaBinderConfiguration {
@Bean
KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties configurationProperties,
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer, KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(configurationProperties,
this.kafkaProperties, adminClientConfigCustomizer.getIfUnique());
kafkaProperties, adminClientConfigCustomizer.getIfUnique());
}
@SuppressWarnings("unchecked")
@SuppressWarnings({"rawtypes", "unchecked"})
@Bean
KafkaMessageChannelBinder kafkaMessageChannelBinder(
KafkaBinderConfigurationProperties configurationProperties,
@@ -125,16 +113,17 @@ public class KafkaBinderConfiguration {
ObjectProvider<DlqDestinationResolver> dlqDestinationResolver,
ObjectProvider<ClientFactoryCustomizer> clientFactoryCustomizer,
ObjectProvider<ConsumerConfigCustomizer> consumerConfigCustomizer,
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer,
ProducerListener producerListener, KafkaExtendedBindingProperties kafkaExtendedBindingProperties
) {
KafkaMessageChannelBinder kafkaMessageChannelBinder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider,
listenerContainerCustomizer, sourceCustomizer, rebalanceListener.getIfUnique(),
dlqPartitionFunction.getIfUnique(), dlqDestinationResolver.getIfUnique());
kafkaMessageChannelBinder.setProducerListener(this.producerListener);
kafkaMessageChannelBinder.setProducerListener(producerListener);
kafkaMessageChannelBinder
.setExtendedBindingProperties(this.kafkaExtendedBindingProperties);
.setExtendedBindingProperties(kafkaExtendedBindingProperties);
kafkaMessageChannelBinder.setProducerMessageHandlerCustomizer(messageHandlerCustomizer);
kafkaMessageChannelBinder.setConsumerEndpointCustomizer(consumerCustomizer);
kafkaMessageChannelBinder.setClientFactoryCustomizer(clientFactoryCustomizer.getIfUnique());
@@ -151,7 +140,6 @@ public class KafkaBinderConfiguration {
}
@Bean
@StreamMessageConverter
@ConditionalOnMissingBean(KafkaNullConverter.class)
MessageConverter kafkaNullConverter() {
return new KafkaNullConverter();

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -24,6 +24,8 @@ import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.springframework.boot.actuate.autoconfigure.health.ConditionalOnEnabledHealthIndicator;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderHealth;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderHealthIndicator;
import org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
@@ -38,11 +40,13 @@ import org.springframework.util.ObjectUtils;
*
* @author Oleg Zhurakousky
* @author Chukwubuikem Ume-Ugwa
* @author Soby Chacko
*/
@Configuration
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(name = "org.springframework.boot.actuate.health.HealthIndicator")
@ConditionalOnEnabledHealthIndicator("binders")
@ConditionalOnMissingBean(KafkaBinderHealth.class)
public class KafkaBinderHealthIndicatorConfiguration {
@Bean

View File

@@ -108,8 +108,8 @@ public class KafkaBinderHealthIndicatorTest {
.willReturn(partitions);
org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
mockContainer(listenerContainerA, true);
mockContainer(listenerContainerB, true);
mockContainer(listenerContainerA, true, true);
mockContainer(listenerContainerB, true, true);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.UP);
@@ -127,8 +127,27 @@ public class KafkaBinderHealthIndicatorTest {
.willReturn(partitions);
org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
mockContainer(listenerContainerA, false);
mockContainer(listenerContainerB, true);
mockContainer(listenerContainerA, false, true);
mockContainer(listenerContainerB, true, true);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.UP);
assertThat(health.getDetails()).containsEntry("topicsInUse", singleton(TEST_TOPIC));
assertThat(health.getDetails()).hasEntrySatisfying("listenerContainers", value ->
assertThat((ArrayList<?>) value).hasSize(2));
}
@Test
public void kafkaBinderIsDownWhenOneOfContainersWasStoppedAbnormally() {
final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation(
"group1-healthIndicator", partitions, false));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC))
.willReturn(partitions);
org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
mockContainer(listenerContainerA, false, false);
mockContainer(listenerContainerB, true, true);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.DOWN);
@@ -137,11 +156,14 @@ public class KafkaBinderHealthIndicatorTest {
assertThat((ArrayList<?>) value).hasSize(2));
}
private void mockContainer(AbstractMessageListenerContainer<?, ?> container, boolean isRunning) {
private void mockContainer(AbstractMessageListenerContainer<?, ?> container, boolean isRunning,
boolean normalState) {
org.mockito.BDDMockito.given(container.isRunning()).willReturn(isRunning);
org.mockito.BDDMockito.given(container.isContainerPaused()).willReturn(true);
org.mockito.BDDMockito.given(container.getListenerId()).willReturn("someListenerId");
org.mockito.BDDMockito.given(container.getGroupId()).willReturn("someGroupId");
org.mockito.BDDMockito.given(container.isInExpectedState()).willReturn(normalState);
}
@Test

View File

@@ -66,12 +66,11 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.assertj.core.api.Assertions;
import org.assertj.core.api.Condition;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
import org.awaitility.Awaitility;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.cloud.stream.binder.Binder;
@@ -119,8 +118,11 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
@@ -128,8 +130,10 @@ import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionOffset;
import org.springframework.kafka.support.converter.BatchMessagingMessageConverter;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
@@ -144,6 +148,7 @@ import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeTypeUtils;
import org.springframework.util.backoff.FixedBackOff;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;
@@ -158,29 +163,28 @@ import static org.mockito.Mockito.mock;
* @author Henryk Konsek
* @author Gary Russell
*/
@EmbeddedKafka(count = 1, controlledShutdown = true, topics = "error.pollableDlq.group-pcWithDlq", brokerProperties = {"transaction.state.log.replication.factor=1",
"transaction.state.log.min.isr=1"})
public class KafkaBinderTests extends
// @checkstyle:off
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {
// @checkstyle:on
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
@Rule
public ExpectedException expectedProvisioningException = ExpectedException.none();
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class
.getSimpleName();
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10,
"error.pollableDlq.group-pcWithDlq")
.brokerProperty("transaction.state.log.replication.factor", "1")
.brokerProperty("transaction.state.log.min.isr", "1");
private KafkaTestBinder binder;
private AdminClient adminClient;
private static EmbeddedKafkaBroker embeddedKafka;
@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();
}
@Override
protected ExtendedConsumerProperties<KafkaConsumerProperties> createConsumerProperties() {
final ExtendedConsumerProperties<KafkaConsumerProperties> kafkaConsumerProperties = new ExtendedConsumerProperties<>(
@@ -191,8 +195,12 @@ public class KafkaBinderTests extends
return kafkaConsumerProperties;
}
private ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties() {
return this.createProducerProperties(null);
}
@Override
protected ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties() {
protected ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties(TestInfo testInto) {
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
producerProperties.getExtension().setSync(true);
@@ -246,8 +254,8 @@ public class KafkaBinderTests extends
private KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties(
new TestKafkaProperties());
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -274,15 +282,14 @@ public class KafkaBinderTests extends
return KafkaHeaders.OFFSET;
}
@Before
@BeforeEach
public void init() {
String multiplier = System.getenv("KAFKA_TIMEOUT_MULTIPLIER");
if (multiplier != null) {
timeoutMultiplier = Double.parseDouble(multiplier);
}
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -552,7 +559,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceiveNoOriginalContentType() throws Exception {
public void testSendAndReceiveNoOriginalContentType(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
BindingProperties producerBindingProperties = createProducerBindingProperties(
@@ -602,7 +609,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceive() throws Exception {
public void testSendAndReceive(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
BindingProperties outputBindingProperties = createProducerBindingProperties(
createProducerProperties());
@@ -728,7 +735,6 @@ public class KafkaBinderTests extends
@Test
@SuppressWarnings("unchecked")
@Ignore
public void testDlqWithNativeSerializationEnabledOnDlqProducer() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
@@ -787,12 +793,10 @@ public class KafkaBinderTests extends
.withPayload("foo").build();
moduleOutputChannel.send(message);
Message<?> receivedMessage = receive(dlqChannel, 5);
assertThat(receivedMessage).isNotNull();
assertThat(receivedMessage.getPayload()).isEqualTo("foo".getBytes());
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
Awaitility.await().until(() -> handler.getInvocationCount() == consumerProperties.getMaxAttempts());
assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC))
.isEqualTo("foo.bar".getBytes(StandardCharsets.UTF_8));
@@ -1050,7 +1054,7 @@ public class KafkaBinderTests extends
AbstractMessageListenerContainer container = TestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer", AbstractMessageListenerContainer.class);
assertThat(container.getContainerProperties().getTopicPartitionsToAssign().length)
assertThat(container.getContainerProperties().getTopicPartitions().length)
.isEqualTo(4); // 2 topics 2 partitions each
if (transactional) {
assertThat(TestUtils.getPropertyValue(container.getAfterRollbackProcessor(), "kafkaTemplate")).isNotNull();
@@ -1061,7 +1065,7 @@ public class KafkaBinderTests extends
String dlqTopic = useDlqDestResolver ? "foo.dlq" : "error.dlqTest." + uniqueBindingId + ".0.testGroup";
try (AdminClient admin = AdminClient.create(Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
if (useDlqDestResolver) {
List<NewTopic> nonProvisionedDlqTopics = new ArrayList<>();
NewTopic nTopic = new NewTopic(dlqTopic, 3, (short) 1);
@@ -1298,6 +1302,113 @@ public class KafkaBinderTests extends
producerBinding.unbind();
}
@Test
@SuppressWarnings("unchecked")
public void testRetriesWithoutDlq() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);
DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);
DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));
FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);
String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);
Thread.sleep(3000);
// Since we don't have a DLQ, assert that we are invoking the handler exactly the same number of times
// as set in consumerProperties.maxAttempts and not the default set by Spring Kafka (10 times).
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}
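The retry behaviour exercised above is driven entirely by the binder-level consumer settings (maxAttempts plus the back-off intervals), not by Spring Kafka's container default of 10 delivery attempts. As a rough sketch only, an application would normally express the same settings through the standard Spring Cloud Stream consumer properties; the binding name retryTest-in-0 below is hypothetical:

spring.cloud.stream.bindings.retryTest-in-0.consumer.max-attempts=2
spring.cloud.stream.bindings.retryTest-in-0.consumer.back-off-initial-interval=100
spring.cloud.stream.bindings.retryTest-in-0.consumer.back-off-max-interval=150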
@Test
@SuppressWarnings("unchecked")
public void testCommonErrorHandlerBeanNameOnConsumerBinding() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);
DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);
CountDownLatch latch = new CountDownLatch(1);
CommonErrorHandler commonErrorHandler = new DefaultErrorHandler(new FixedBackOff(0L, 0L)) {
@Override
public void handleRemaining(Exception thrownException, List<ConsumerRecord<?, ?>> records,
Consumer<?, ?> consumer, MessageListenerContainer container) {
super.handleRemaining(thrownException, records, consumer, container);
latch.countDown();
}
};
ConfigurableApplicationContext context = TestUtils.getPropertyValue(binder,
"binder.applicationContext", ConfigurableApplicationContext.class);
context.getBeanFactory().registerSingleton("fooCommonErrorHandler", commonErrorHandler);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);
consumerProperties.getExtension().setCommonErrorHandlerBeanName("fooCommonErrorHandler");
DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));
FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);
String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);
Thread.sleep(3000);
// Assertions for the CommonErrorHandler configured on the consumer binding (commonErrorHandlerBeanName).
assertThat(KafkaTestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer.commonErrorHandler")).isSameAs(commonErrorHandler);
latch.await(10, TimeUnit.SECONDS);
binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}
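In an application (rather than the test harness above, which registers the singleton directly on the bean factory), the CommonErrorHandler referenced by commonErrorHandlerBeanName would simply be declared as a bean. A minimal sketch, assuming a functional binding named process-in-0 and that the extension property follows the usual relaxed-binding form:

@Bean
public CommonErrorHandler myCommonErrorHandler() {
    // Example only: no retries in the container; any CommonErrorHandler implementation works here.
    return new DefaultErrorHandler(new FixedBackOff(0L, 0L));
}

and in the application configuration (assumed property name):

spring.cloud.stream.kafka.bindings.process-in-0.consumer.common-error-handler-bean-name=myCommonErrorHandler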
// See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870 for the motivation behind this test.
@Test
@SuppressWarnings("unchecked")
@@ -1459,9 +1570,15 @@ public class KafkaBinderTests extends
producerBinding.unbind();
}
@Test(expected = IllegalArgumentException.class)
@Test
public void testValidateKafkaTopicName() {
KafkaTopicUtils.validateTopicName("foo:bar");
try {
KafkaTopicUtils.validateTopicName("foo:bar");
fail("Expecting IllegalArgumentException");
}
catch (IllegalArgumentException e) {
// expected: ':' is not a valid character in a Kafka topic name
}
}
@Test
@@ -1599,7 +1716,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceiveMultipleTopics() throws Exception {
public void testSendAndReceiveMultipleTopics(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
DirectChannel moduleOutputChannel1 = createBindableChannel("output1",
@@ -1760,7 +1877,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testTwoRequiredGroups() throws Exception {
public void testTwoRequiredGroups(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
@@ -1810,7 +1927,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testPartitionedModuleSpEL() throws Exception {
public void testPartitionedModuleSpEL(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
@@ -1933,7 +2050,7 @@ public class KafkaBinderTests extends
}
@Test
@Override
// @Override
@SuppressWarnings({ "unchecked", "rawtypes" })
public void testPartitionedModuleJava() throws Exception {
Binder binder = getBinder();
@@ -2021,7 +2138,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testAnonymousGroup() throws Exception {
public void testAnonymousGroup(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
BindingProperties producerBindingProperties = createProducerBindingProperties(
createProducerProperties());
@@ -2888,14 +3005,13 @@ public class KafkaBinderTests extends
consumerProperties.setInstanceCount(3);
consumerProperties.setInstanceIndex(2);
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
expectedProvisioningException.expect(ProvisioningException.class);
expectedProvisioningException.expectMessage(
"The number of expected partitions was: 3, but 1 has been found instead");
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
Assertions.assertThatThrownBy(() -> {
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
}).isInstanceOf(ProvisioningException.class);
}
@Test
@@ -2927,7 +3043,7 @@ public class KafkaBinderTests extends
binding,
"lifecycle.messageListenerContainer.containerProperties",
ContainerProperties.class);
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitionsToAssign();
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitions();
assertThat(listenedPartitions).hasSize(2);
assertThat(listenedPartitions).contains(
new TopicPartitionOffset(testTopicName, 2),
@@ -3290,7 +3406,7 @@ public class KafkaBinderTests extends
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(
"testSendAndReceiveWithMixedMode", "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class);
@@ -3335,7 +3451,7 @@ public class KafkaBinderTests extends
"pollable,anotherOne", "group-polledConsumer", inboundBindTarget,
consumerProps);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollable", "testPollable");
@@ -3386,7 +3502,7 @@ public class KafkaBinderTests extends
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableRequeue", "group", inboundBindTarget, properties);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollableRequeue", "testPollable");
@@ -3422,7 +3538,7 @@ public class KafkaBinderTests extends
properties.setBackOffInitialInterval(0);
properties.getExtension().setEnableDlq(true);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableDlq", "group-pcWithDlq", inboundBindTarget, properties);
KafkaTemplate template = new KafkaTemplate(
@@ -3441,11 +3557,11 @@ public class KafkaBinderTests extends
assertThat(e.getCause().getMessage()).isEqualTo("test DLQ");
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("dlq", "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
ConsumerFactory cf = new DefaultKafkaConsumerFactory<>(consumerProps);
Consumer consumer = cf.createConsumer();
embeddedKafka.getEmbeddedKafka().consumeFromAnEmbeddedTopic(consumer,
embeddedKafka.consumeFromAnEmbeddedTopic(consumer,
"error.pollableDlq.group-pcWithDlq");
ConsumerRecord deadLetter = KafkaTestUtils.getSingleRecord(consumer,
"error.pollableDlq.group-pcWithDlq");
@@ -3460,7 +3576,7 @@ public class KafkaBinderTests extends
public void testTopicPatterns() throws Exception {
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("topicPatterns.1", 1, (short) 1))).all()
.get();
@@ -3479,7 +3595,7 @@ public class KafkaBinderTests extends
"topicPatterns\\..*", "testTopicPatterns", moduleInputChannel,
consumerProperties);
DefaultKafkaProducerFactory pf = new DefaultKafkaProducerFactory(
KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka()));
KafkaTestUtils.producerProps(embeddedKafka));
KafkaTemplate template = new KafkaTemplate(pf);
template.send("topicPatterns.1", "foo");
assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
@@ -3489,11 +3605,12 @@ public class KafkaBinderTests extends
}
}
@Test(expected = TopicExistsException.class)
@Test
public void testSameTopicCannotBeProvisionedAgain() throws Throwable {
CountDownLatch latch = new CountDownLatch(1);
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("fooUniqueTopic", 1, (short) 1))).all()
.get();
@@ -3501,11 +3618,13 @@ public class KafkaBinderTests extends
admin.createTopics(Collections
.singletonList(new NewTopic("fooUniqueTopic", 1, (short) 1)))
.all().get();
fail("Expecting TopicExistsException");
}
catch (Exception ex) {
assertThat(ex.getCause() instanceof TopicExistsException).isTrue();
throw ex.getCause();
latch.countDown();
}
latch.await(1, TimeUnit.SECONDS);
}
}
@@ -3707,7 +3826,7 @@ public class KafkaBinderTests extends
input.setBeanName(name + ".in");
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
Binding<MessageChannel> consumerBinding = binder.bindConsumer(name + ".0", name, input, consumerProperties);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(new DefaultKafkaProducerFactory<>(producerProps));
template.send(MessageBuilder.withPayload("internalHeaderPropagation")
.setHeader(KafkaHeaders.TOPIC, name + ".0")
@@ -3721,7 +3840,7 @@ public class KafkaBinderTests extends
output.send(consumed);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(name, "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);


@@ -0,0 +1,72 @@
/*
* Copyright 2022-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.bootstrap;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.NoSuchBeanDefinitionException;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderHealth;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderHealthIndicator;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
import static org.assertj.core.api.AssertionsForClassTypes.assertThatThrownBy;
/**
* @author Soby Chacko
*/
public class KafkaBinderCustomHealthCheckTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Test
public void testCustomHealthIndicatorIsActivated() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
CustomHealthCheckApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
final KafkaBinderHealth kafkaBinderHealth = applicationContext.getBean(KafkaBinderHealth.class);
assertThat(kafkaBinderHealth).isInstanceOf(CustomHealthIndicator.class);
assertThatThrownBy(() -> applicationContext.getBean(KafkaBinderHealthIndicator.class)).isInstanceOf(NoSuchBeanDefinitionException.class);
applicationContext.close();
}
@SpringBootApplication
static class CustomHealthCheckApplication {
@Bean
public CustomHealthIndicator kafkaBinderHealthIndicator() {
return new CustomHealthIndicator();
}
}
static class CustomHealthIndicator implements KafkaBinderHealth {
@Override
public Health health() {
return null;
}
}
}
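The custom indicator in this test deliberately returns null; a real application-supplied KafkaBinderHealth would report an actual status. A minimal sketch only (the detail reported is illustrative, not part of the binder contract), reusing the kafkaBinderHealthIndicator bean name from the test above:

@Bean
public KafkaBinderHealth kafkaBinderHealthIndicator() {
    return new KafkaBinderHealth() {
        @Override
        public Health health() {
            // Replace with a real probe, e.g. a lightweight check that the brokers are reachable.
            return Health.up().withDetail("custom", "kafka binder reachable").build();
        }
    };
}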


@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,18 +16,28 @@
package org.springframework.cloud.stream.binder.kafka.bootstrap;
import java.util.Map;
import java.util.function.Function;
import io.micrometer.core.instrument.MeterRegistry;
import org.junit.ClassRule;
import org.junit.Test;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatCode;
@@ -35,20 +45,53 @@ import static org.assertj.core.api.Assertions.assertThatCode;
/**
* @author Soby Chacko
*/
@EmbeddedKafka(count = 1, controlledShutdown = true, partitions = 10, topics = "outputTopic")
public class KafkaBinderMeterRegistryTest {
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
private static EmbeddedKafkaBroker embeddedKafka;
private static Consumer<String, String> consumer;
@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "outputTopic");
}
@AfterAll
public static void tearDown() {
consumer.close();
}
@Test
public void testMetricsWithSingleBinder() {
public void testMetricsWithSingleBinder() throws Exception {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.uppercase-in-0.destination=inputTopic",
"--spring.cloud.stream.bindings.uppercase-in-0.group=inputGroup",
"--spring.cloud.stream.bindings.uppercase-out-0.destination=outputTopic",
"--spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
+ embeddedKafka.getBrokersAsString());
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");
// Force retrieval of the data on the outbound topic so that the producer factory has
// a chance to add the Micrometer listener properly. The binder's internal KafkaTemplate
// adds the Micrometer listener (via the producer factory) only on the first send.
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");
final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -68,10 +111,22 @@ public class KafkaBinderMeterRegistryTest {
"--spring.cloud.stream.binders.kafka2.type=kafka",
"--spring.cloud.stream.binders.kafka1.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka2.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
+ embeddedKafka.getBrokersAsString());
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");
// Force retrieval of the data on the outbound topic so that the producer factory has
// a chance to add the Micrometer listener properly. The binder's internal KafkaTemplate
// adds the Micrometer listener (via the producer factory) only on the first send.
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");
final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -87,10 +142,10 @@ public class KafkaBinderMeterRegistryTest {
.tag("topic", "inputTopic").gauge().value()).isNotNull();
// assert consumer metrics
assertThatCode(() -> meterRegistry.get("kafka.consumer.connection.count").meter()).doesNotThrowAnyException();
assertThatCode(() -> meterRegistry.get("kafka.consumer.fetch.manager.fetch.total").meter()).doesNotThrowAnyException();
// assert producer metrics
assertThatCode(() -> meterRegistry.get("kafka.producer.connection.count").meter()).doesNotThrowAnyException();
assertThatCode(() -> meterRegistry.get("kafka.producer.io.ratio").meter()).doesNotThrowAnyException();
}
@SpringBootApplication


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -18,12 +18,14 @@ package org.springframework.cloud.stream.binder.kafka.integration;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
@@ -33,19 +35,14 @@ import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.FilteredClassLoader;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.context.runner.ApplicationContextRunner;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.PollableMessageSource;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.config.ConsumerEndpointCustomizer;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.config.MessageSourceCustomizer;
import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.integration.kafka.inbound.KafkaMessageSource;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
@@ -63,13 +60,18 @@ import static org.assertj.core.api.Assertions.assertThat;
* @author Oleg Zhurakousky
* @author Jon Schneider
* @author Gary Russell
* @author Soby Chacko
*
* @since 2.0
*/
@RunWith(SpringRunner.class)
// @checkstyle:off
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = "spring.cloud.stream.bindings.input.group="
+ KafkaBinderActuatorTests.TEST_CONSUMER_GROUP)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
properties = {
"spring.cloud.stream.bindings.input.group=" + KafkaBinderActuatorTests.TEST_CONSUMER_GROUP,
"spring.cloud.stream.function.bindings.process-in-0=input",
"spring.cloud.stream.pollable-source=input"}
)
// @checkstyle:on
@DirtiesContext
public class KafkaBinderActuatorTests {
@@ -100,17 +102,22 @@ public class KafkaBinderActuatorTests {
@Test
public void testKafkaBinderMetricsExposed() {
this.kafkaTemplate.send(Sink.INPUT, null, "foo".getBytes());
this.kafkaTemplate.send("input", null, "foo".getBytes());
this.kafkaTemplate.flush();
assertThat(this.meterRegistry.get("spring.cloud.stream.binder.kafka.offset")
.tag("group", TEST_CONSUMER_GROUP).tag("topic", Sink.INPUT).gauge()
.tag("group", TEST_CONSUMER_GROUP).tag("topic", "input").gauge()
.value()).isGreaterThan(0);
}
@Test
@Ignore
public void testKafkaBinderMetricsWhenNoMicrometer() {
new ApplicationContextRunner().withUserConfiguration(KafkaMetricsTestConfig.class)
.withPropertyValues(
"spring.cloud.stream.bindings.input.group", KafkaBinderActuatorTests.TEST_CONSUMER_GROUP,
"spring.cloud.stream.function.bindings.process-in-0", "input",
"spring.cloud.stream.pollable-source", "input")
.withClassLoader(new FilteredClassLoader("io.micrometer.core"))
.run(context -> {
assertThat(context.getBeanNamesForType(MeterRegistry.class))
@@ -148,8 +155,8 @@ public class KafkaBinderActuatorTests {
});
}
@EnableBinding({ Processor.class, PMS.class })
@EnableAutoConfiguration
@Configuration
public static class KafkaMetricsTestConfig {
@Bean
@@ -172,19 +179,18 @@ public class KafkaBinderActuatorTests {
return (handler, destinationName) -> handler.setBeanName("setByCustomizer:" + destinationName);
}
@StreamListener(Sink.INPUT)
public void process(@SuppressWarnings("unused") String payload) throws InterruptedException {
@Bean
public Consumer<String> process() {
// Artificial slow listener to emulate consumer lag
Thread.sleep(1000);
return s -> {
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {
//no-op
}
};
}
}
public interface PMS {
@Input
PollableMessageSource source();
}
}


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,6 +21,7 @@ import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
@@ -33,10 +34,6 @@ import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
@@ -47,10 +44,9 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerPro
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
@@ -62,6 +58,11 @@ import static org.assertj.core.api.Assertions.assertThat;
*/
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.cloud.stream.function.definition=process;processCustom",
"spring.cloud.stream.function.bindings.process-in-0=standard-in",
"spring.cloud.stream.function.bindings.process-out-0=standard-out",
"spring.cloud.stream.function.bindings.processCustom-in-0=custom-in",
"spring.cloud.stream.function.bindings.processCustom-out-0=custom-out",
"spring.cloud.stream.kafka.bindings.standard-out.producer.configuration.key.serializer=FooSerializer.class",
"spring.cloud.stream.kafka.default.producer.configuration.key.serializer=BarSerializer.class",
"spring.cloud.stream.kafka.default.producer.configuration.value.serializer=BarSerializer.class",
@@ -167,22 +168,19 @@ public class KafkaBinderExtendedPropertiesTest {
Boolean.TRUE);
}
@EnableBinding(CustomBindingForExtendedPropertyTesting.class)
@EnableAutoConfiguration
@Configuration
public static class KafkaMetricsTestConfig {
@StreamListener("standard-in")
@SendTo("standard-out")
public String process(String payload) {
return payload;
}
@StreamListener("custom-in")
@SendTo("custom-out")
public String processCustom(String payload) {
return payload;
@Bean
public Function<String, String> process() {
return payload -> payload;
}
@Bean
public Function<String, String> processCustom() {
return payload -> payload;
}
@Bean
public RebalanceListener rebalanceListener() {
return new RebalanceListener();
@@ -190,22 +188,6 @@ public class KafkaBinderExtendedPropertiesTest {
}
interface CustomBindingForExtendedPropertyTesting {
@Input("standard-in")
SubscribableChannel standardIn();
@Output("standard-out")
MessageChannel standardOut();
@Input("custom-in")
SubscribableChannel customIn();
@Output("custom-out")
MessageChannel customOut();
}
public static class RebalanceListener implements KafkaBindingRebalanceListener {
private final Map<String, Boolean> bindings = new HashMap<>();
@@ -215,23 +197,18 @@ public class KafkaBinderExtendedPropertiesTest {
@Override
public void onPartitionsRevokedBeforeCommit(String bindingName,
Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
}
@Override
public void onPartitionsRevokedAfterCommit(String bindingName,
Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
}
@Override
public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
Collection<TopicPartition> partitions, boolean initial) {
this.bindings.put(bindingName, initial);
this.latch.countDown();
}
}
}


@@ -1,5 +1,5 @@
/*
* Copyright 2016-2017 the original author or authors.
* Copyright 2016-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -18,6 +18,7 @@ package org.springframework.cloud.stream.binder.kafka.integration;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -27,12 +28,12 @@ import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.KafkaNull;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
@@ -47,21 +48,19 @@ import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Aldo Sinanaj
* @author Gary Russell
* @author Soby Chacko
*/
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.kafka.consumer.auto-offset-reset=earliest" })
"spring.kafka.consumer.auto-offset-reset=earliest",
"spring.cloud.stream.function.bindings.inputListen-in-0=kafkaNullInput"})
@DirtiesContext
@Ignore
public class KafkaNullConverterTest {
private static final String KAFKA_BROKERS_PROPERTY = "spring.kafka.bootstrap-servers";
@Autowired
private MessageChannel kafkaNullOutput;
@Autowired
private MessageChannel kafkaNullInput;
private ApplicationContext context;
@Autowired
private KafkaNullConverterTestConfig config;
@@ -81,8 +80,11 @@ public class KafkaNullConverterTest {
}
@Test
@Ignore
public void testKafkaNullConverterOutput() throws InterruptedException {
this.kafkaNullOutput.send(new GenericMessage<>(KafkaNull.INSTANCE));
final StreamBridge streamBridge = context.getBean(StreamBridge.class);
streamBridge.send("kafkaNullOutput", new GenericMessage<>(KafkaNull.INSTANCE));
assertThat(this.config.countDownLatchOutput.await(10, TimeUnit.SECONDS)).isTrue();
assertThat(this.config.outputPayload).isNull();
@@ -90,14 +92,17 @@ public class KafkaNullConverterTest {
@Test
public void testKafkaNullConverterInput() throws InterruptedException {
this.kafkaNullInput.send(new GenericMessage<>(KafkaNull.INSTANCE));
final MessageChannel kafkaNullInput = context.getBean("kafkaNullInput", MessageChannel.class);
kafkaNullInput.send(new GenericMessage<>(KafkaNull.INSTANCE));
assertThat(this.config.countDownLatchInput.await(10, TimeUnit.SECONDS)).isTrue();
assertThat(this.config.inputPayload).isNull();
}
@TestConfiguration
@EnableBinding(KafkaNullTestChannels.class)
@EnableAutoConfiguration
@Configuration
public static class KafkaNullConverterTestConfig {
final CountDownLatch countDownLatchOutput = new CountDownLatch(1);
@@ -114,22 +119,13 @@ public class KafkaNullConverterTest {
countDownLatchOutput.countDown();
}
@StreamListener("kafkaNullInput")
public void inputListen(@Payload(required = false) byte[] payload) {
this.inputPayload = payload;
countDownLatchInput.countDown();
@Bean
public Consumer<byte[]> inputListen() {
return in -> {
this.inputPayload = in;
countDownLatchInput.countDown();
};
}
}
public interface KafkaNullTestChannels {
@Input
MessageChannel kafkaNullInput();
@Output
MessageChannel kafkaNullOutput();
}
}


@@ -0,0 +1,117 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.integration;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.kafka.ListenerContainerWithDlqAndRetryCustomizer;
import org.springframework.cloud.stream.binding.BindingsLifecycleController;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.ExponentialBackOffWithMaxRetries;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.lang.Nullable;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.util.backoff.BackOff;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
/**
* @author Gary Russell
*/
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.cloud.function.definition=retryInBinder;retryInContainer",
"spring.cloud.stream.bindings.retryInBinder-in-0.group=foo",
"spring.cloud.stream.bindings.retryInContainer-in-0.group=bar",
"spring.cloud.stream.kafka.bindings.retryInBinder-in-0.consumer.enable-dlq=true",
"spring.cloud.stream.kafka.bindings.retryInContainer-in-0.consumer.enable-dlq=true"})
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
@DirtiesContext
public class KafkaRetryDlqBinderOrContainerTests {
@Test
public void retryAndDlqInRightPlace(@Autowired BindingsLifecycleController controller) {
Binding<?> retryInBinder = controller.queryState("retryInBinder-in-0");
assertThat(KafkaTestUtils.getPropertyValue(retryInBinder, "lifecycle.retryTemplate")).isNotNull();
assertThat(KafkaTestUtils.getPropertyValue(retryInBinder,
"lifecycle.messageListenerContainer.commonErrorHandler")).isNull();
Binding<?> retryInContainer = controller.queryState("retryInContainer-in-0");
assertThat(KafkaTestUtils.getPropertyValue(retryInContainer, "lifecycle.retryTemplate")).isNull();
assertThat(KafkaTestUtils.getPropertyValue(retryInContainer,
"lifecycle.messageListenerContainer.commonErrorHandler")).isInstanceOf(CommonErrorHandler.class);
assertThat(KafkaTestUtils.getPropertyValue(retryInContainer,
"lifecycle.messageListenerContainer.commonErrorHandler.failureTracker.backOff"))
.isInstanceOf(ExponentialBackOffWithMaxRetries.class);
}
@SpringBootApplication
public static class ConfigCustomizerTestConfig {
@Bean
public Consumer<String> retryInBinder() {
return str -> { };
}
@Bean
public Consumer<String> retryInContainer() {
return str -> { };
}
@Bean
ListenerContainerWithDlqAndRetryCustomizer cust() {
return new ListenerContainerWithDlqAndRetryCustomizer() {
@Override
public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
String group,
BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff) {
if (destinationName.contains("Container")) {
ConsumerRecordRecoverer dlpr = new DeadLetterPublishingRecoverer(mock(KafkaOperations.class),
dlqDestinationResolver);
container.setCommonErrorHandler(new DefaultErrorHandler(dlpr, backOff));
}
}
@Override
public boolean retryAndDlqInBinding(String destinationName, String group) {
return !destinationName.contains("Container");
}
};
}
}
}


@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -35,11 +35,12 @@ import org.springframework.beans.factory.BeanCreationException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
@@ -58,6 +59,7 @@ import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Gary Russell
* @author Soby Chacko
* @since 2.1.4
*
*/
@@ -80,7 +82,7 @@ public class ProducerOnlyTransactionTests {
private Sender sender;
@Autowired
private MessageChannel output;
private ApplicationContext context;
@BeforeClass
public static void setup() {
@@ -95,7 +97,8 @@ public class ProducerOnlyTransactionTests {
@Test
public void testProducerTx() {
this.sender.DoInTransaction(this.output);
final StreamBridge streamBridge = context.getBean(StreamBridge.class);
this.sender.DoInTransaction(streamBridge);
assertThat(this.sender.isInTx()).isTrue();
Map<String, Object> props = KafkaTestUtils.consumerProps("consumeTx", "false",
embeddedKafka.getEmbeddedKafka());
@@ -109,9 +112,9 @@ public class ProducerOnlyTransactionTests {
assertThat(record.value()).isEqualTo("foo".getBytes());
}
@EnableBinding(Source.class)
@EnableAutoConfiguration
@EnableTransactionManagement
@Configuration
public static class Config {
@Bean
@@ -140,9 +143,9 @@ public class ProducerOnlyTransactionTests {
private boolean isInTx;
@Transactional
public void DoInTransaction(MessageChannel output) {
public void DoInTransaction(StreamBridge streamBridge) {
this.isInTx = TransactionSynchronizationManager.isActualTransactionActive();
output.send(new GenericMessage<>("foo"));
streamBridge.send("output", new GenericMessage<>("foo".getBytes()));
}
public boolean isInTx() {


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,6 +16,8 @@
package org.springframework.cloud.stream.binder.kafka.integration.topic.configs;
import java.util.function.Function;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
@@ -23,24 +25,21 @@ import org.junit.runner.RunWith;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
/**
* @author Heiko Does
* @author Soby Chacko
*/
@RunWith(SpringRunner.class)
@SpringBootTest(
classes = BaseKafkaBinderTopicPropertiesUpdateTest.TopicAutoConfigsTestConfig.class,
webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.cloud.stream.function.bindings.process-in-0=standard-in",
"spring.cloud.stream.function.bindings.process-out-0=standard-out",
"spring.cloud.stream.kafka.bindings.standard-out.producer.topic.properties.retention.ms=9001",
"spring.cloud.stream.kafka.default.producer.topic.properties.retention.ms=-1",
"spring.cloud.stream.kafka.bindings.standard-in.consumer.topic.properties.retention.ms=9001",
@@ -65,24 +64,12 @@ public abstract class BaseKafkaBinderTopicPropertiesUpdateTest {
System.clearProperty(KAFKA_BROKERS_PROPERTY);
}
@EnableBinding(CustomBindingForTopicPropertiesUpdateTesting.class)
@EnableAutoConfiguration
public static class TopicAutoConfigsTestConfig {
@StreamListener("standard-in")
@SendTo("standard-out")
public String process(String payload) {
return payload;
@Bean
public Function<String, String> process() {
return payload -> payload;
}
}
interface CustomBindingForTopicPropertiesUpdateTesting {
@Input("standard-in")
SubscribableChannel standardIn();
@Output("standard-out")
MessageChannel standardOut();
}
}


@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,6 +21,7 @@ import java.util.List;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import kafka.server.KafkaConfig;
import org.junit.AfterClass;
@@ -33,13 +34,10 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
@@ -49,8 +47,6 @@ import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.kafka.transaction.KafkaAwareTransactionManager;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.backoff.FixedBackOff;
@@ -61,6 +57,7 @@ import static org.mockito.Mockito.mock;
/**
* @author Gary Russell
* @author Soby Chacko
* @since 3.0
*
*/
@@ -69,6 +66,11 @@ import static org.mockito.Mockito.mock;
"spring.kafka.consumer.properties.isolation.level=read_committed",
"spring.kafka.consumer.enable-auto-commit=false",
"spring.kafka.consumer.auto-offset-reset=earliest",
"spring.cloud.function.definition=listenIn;listenIn2",
"spring.cloud.stream.function.bindings.listenIn-in-0=input",
"spring.cloud.stream.function.bindings.listenIn-out-0=output",
"spring.cloud.stream.function.bindings.listenIn2-in-0=input2",
"spring.cloud.stream.function.bindings.listenIn2-out-0=output2",
"spring.cloud.stream.bindings.input.destination=consumer.producer.txIn",
"spring.cloud.stream.bindings.input.group=consumer.producer.tx",
"spring.cloud.stream.bindings.input.consumer.max-attempts=1",
@@ -91,6 +93,9 @@ public class ConsumerProducerTransactionTests {
@Autowired
private Config config;
@Autowired
private ApplicationContext context;
@BeforeClass
public static void setup() {
System.setProperty(KAFKA_BROKERS_PROPERTY,
@@ -115,26 +120,22 @@ public class ConsumerProducerTransactionTests {
public void externalTM() {
assertThat(this.config.input2Container.getContainerProperties().getTransactionManager())
.isSameAs(this.config.tm);
Object handler = KafkaTestUtils.getPropertyValue(this.config.output2, "dispatcher.handlers", Set.class)
final MessageChannel output2 = context.getBean("output2", MessageChannel.class);
Object handler = KafkaTestUtils.getPropertyValue(output2, "dispatcher.handlers", Set.class)
.iterator().next();
assertThat(KafkaTestUtils.getPropertyValue(handler, "delegate.kafkaTemplate.producerFactory"))
.isSameAs(this.config.pf);
}
@EnableBinding(TwoProcessors.class)
@EnableAutoConfiguration
@Configuration
public static class Config {
final List<String> outs = new ArrayList<>();
final CountDownLatch latch = new CountDownLatch(2);
@Autowired
private MessageChannel output;
@Autowired
MessageChannel output2;
AbstractMessageListenerContainer<?, ?> input2Container;
ProducerFactory pf;
@@ -147,16 +148,19 @@ public class ConsumerProducerTransactionTests {
this.latch.countDown();
}
@StreamListener(Processor.INPUT)
public void listenIn(String in) {
this.output.send(new GenericMessage<>(in.toUpperCase()));
if (in.equals("two")) {
throw new RuntimeException("fail");
}
@Bean
public Function<String, String> listenIn() {
return in -> {
if (in.equals("two")) {
throw new RuntimeException("fail");
}
return in.toUpperCase();
};
}
@StreamListener("input2")
public void listenIn2(String in) {
@Bean
public Function<String, String> listenIn2() {
return in -> in;
}
@Bean
@@ -187,17 +191,6 @@ public class ConsumerProducerTransactionTests {
this.tm = mock;
return mock;
}
}
public interface TwoProcessors extends Processor {
@Input
SubscribableChannel input2();
@Output
MessageChannel output2();
}
}