Compare commits


446 Commits

Author SHA1 Message Date
buildmaster
9922922036 Update SNAPSHOT to 4.0.0-M1 2022-01-27 15:37:22 +00:00
Oleg Zhurakousky
c3b6610c9f Update SK and SIK version 2022-01-27 16:24:05 +01:00
Oleg Zhurakousky
d2c12873d0 Revert "Fixed invalid java code snippet"
This reverts commit 4cbcb4049b.
2022-01-27 16:16:26 +01:00
Oleg Zhurakousky
01dbb49313 Revert "Update Kafka client version to 3.1.0"
This reverts commit a25e2ea0b3.
2022-01-27 16:16:10 +01:00
Oleg Zhurakousky
65960e101f Revert "Update documented version of kafka-clients"
This reverts commit 69e377e13b.
2022-01-27 16:16:02 +01:00
Jay Lindquist
69e377e13b Update documented version of kafka-clients
It looks like this has been incorrect for a few versions. The main branch is currently pulling in 3.1.0

https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/blob/main/pom.xml#L26
2022-01-26 11:53:37 -05:00
Soby Chacko
a25e2ea0b3 Update Kafka client version to 3.1.0
Test changes
2022-01-25 17:18:04 -05:00
Rex Ijiekhuamen
4cbcb4049b Fixed invalid java code snippet 2022-01-24 10:22:32 -05:00
Soby Chacko
310683987a Temporarily disabling a test 2022-01-19 12:46:23 -05:00
Soby Chacko
cd02be57e5 Test package changes 2022-01-18 15:42:47 -05:00
Soby Chacko
ee888a15ba Fixing test 2022-01-18 12:59:47 -05:00
Soby Chacko
417665773c Merge branch '4.x' into main 2022-01-18 11:50:45 -05:00
Soby Chacko
d345ac88b1 Enable custom binder health check implementation
Currently, KafkaBinderHealthIndicator is not customizable and included by default
when Spring Boot actuator is on the classpath. Fix this by allowing the application
to provide a custom implementation. A new marker interface called KafkaBinderHealth
can be used by the application to provide a custom HealthIndicator implementation, in
which case the binder's default implementation will be excluded.

Tests and docs changes.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1180
2022-01-13 12:06:37 -05:00
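
For illustration only (a sketch, not part of the commit): assuming KafkaBinderHealth extends Spring Boot's HealthIndicator, as the marker-interface description implies, a custom implementation could be registered like this:

    import org.springframework.boot.actuate.health.Health;
    import org.springframework.cloud.stream.binder.kafka.KafkaBinderHealth; // package assumed
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class CustomBinderHealthConfig {

        // Registering this bean excludes the binder's default KafkaBinderHealthIndicator.
        @Bean
        public KafkaBinderHealth kafkaBinderHealth() {
            // Replace with a real connectivity check against the broker.
            return () -> Health.up().withDetail("check", "custom").build();
        }
    }
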
Soby Chacko
577ffbb67f Enable custom binder health check implementation
Currently, KafkaBinderHealthIndicator is not customizable and included by default
when Spring Boot actuator is on the classpath. Fix this by allowing the application
to provide a custom implementation. A new marker interface called KafkaBinderHealth
can be used by the application to provide a custom HealthIndicator implementation, in
which case the binder's default implementation will be excluded.

Tests and docs changes.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1180
2022-01-13 09:56:32 -05:00
Soby Chacko
31b91f47e4 Fixing InteractiveQueryService test.
Fixing checkstyle issues.
2022-01-12 13:03:12 -05:00
Soby Chacko
df9d04fd12 Retries for HostInfo in InteractiveQueryService
InteractiveQueryService methods for finding the host info for Kafka Streams
currently throw exceptions if the underlying KafkaStreams are not ready yet.
Introduce a retry mechanism so that the users can control the behaviour of
these methods by providing the following properties.

spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts (default 1)
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod (default 1000 ms).

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1185
2022-01-11 18:57:00 -05:00
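
A usage sketch (the getHostInfo(store, key, serializer) lookup is assumed from the binder's InteractiveQueryService public API; retry behavior is governed by the two properties above):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.state.HostInfo;
    import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

    public class HostLookup {

        private final InteractiveQueryService interactiveQueryService;

        public HostLookup(InteractiveQueryService interactiveQueryService) {
            this.interactiveQueryService = interactiveQueryService;
        }

        public HostInfo hostFor(String key) {
            // With stateStoreRetry.maxAttempts > 1, this lookup is retried instead of
            // failing immediately while the underlying KafkaStreams is still starting.
            return interactiveQueryService.getHostInfo("my-store", key, Serdes.String().serializer());
        }
    }
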
Soby Chacko
3770db7844 Retries for HostInfo in InteractiveQueryService
InteractiveQueryService methods for finding the host info for Kafka Streams
currently throw exceptions if the underlying KafkaStreams are not ready yet.
Introduce a retry mechanism so that the users can control the behaviour of
these methods by providing the following properties.

spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts (default 1)
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod (default 1000 ms).

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1185
2022-01-11 18:49:23 -05:00
Eduard Domínguez
e512b7a2c6 Fix: KeySerde setup not using expected key type headers
checkstyle fixes
2022-01-11 14:54:33 -05:00
Eduard Domínguez
406e20f19c Fix: KeySerde setup not using expected key type headers
checkstyle fixes
2022-01-11 14:37:20 -05:00
Soby Chacko
da9bc354e4 StreamListener docs cleanup.
Fixing Kafka Streams composition tests due to an application.id issue.
2022-01-07 20:04:55 -05:00
Soby Chacko
3cc3680f63 Event type routing improvements (Kafka Streams)
When routing by event types, the deserializer omits the
topic and header information. Fixing this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1186
2022-01-05 19:33:35 -05:00
Soby Chacko
648188fc6b Event type routing improvements (Kafka Streams)
When routing by event types, the deserializer omits the
topic and header information. Fixing this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1186
2022-01-05 19:30:36 -05:00
Soby Chacko
d1a9eab14b Default maven-antrun version 2022-01-04 14:53:40 -05:00
Soby Chacko
1cdfb962c9 checkstyle fix 2022-01-03 19:00:19 -05:00
Soby Chacko
fb03d2ae8e Version upgrades
4.0.0-SNAPSHOT
  Spring Kafka - 3.0.0-SNAPSHOT
  Spring Integration Kafka - 6.0.0-SNAPSHOT
  Spring Cloud Stream - 4.0.0-SNAPSHOT

Code changes for Jakarta
2022-01-03 19:00:12 -05:00
Eduard Domínguez
921b47d1e4 GH-1176: KeyValueSerdeResolver improvements
Use extended properties when initializing Consumer and Producer Serdes.

Updated copyright years and authors.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1176
2022-01-03 19:00:05 -05:00
Soby Chacko
be474f643a KafkaStreams binder health check improvements
Allow health checks on KafkaStreams processors that are currently stopped through
the actuator bindings endpoint. Add this only as an opt-in feature through a new
binder-level property, includeStoppedProcessorsForHealthCheck, which is false by default
to preserve the current health indicator behavior.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
Resolves #1175
2022-01-03 18:59:57 -05:00
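
For illustration, enabling the opt-in flag might look like this (a sketch; the binder-level prefix is assumed from the binder's other properties):

    spring.cloud.stream.kafka.streams.binder.includeStoppedProcessorsForHealthCheck=true
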
Soby Chacko
0be87c3666 New tips-tricks-recipes section in docs
Migrate the recipe section in the Spring Cloud Stream Samples
repository as Tips, Tricks and Recipes in the Kafka binder main docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1173
Resolves #1174
2022-01-03 18:59:38 -05:00
Soby Chacko
e7bf404fce Fix PartitioningInterceptor CCE
The newly added DefaultPartitioningInterceptor must be explicitly
checked in order to avoid a CCE.

Related to resolving https://github.com/spring-cloud/spring-cloud-stream/issues/2245

Specifically for this: https://github.com/spring-cloud/spring-cloud-stream/issues/2245#issuecomment-977663452
2022-01-03 18:59:30 -05:00
Soby Chacko
e9a8b4af7e GH-1170: Schema registry certificates
Move classpath: resources provided as schema registry certificates
into a local file system location.

Adding test and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1170
2022-01-03 18:59:23 -05:00
Pommerening, Nico
f9dfbe09f7 GH-1161: InteractiveQueryService improvements
This PR safeguards state store instances in case there are multiple KafkaStreams
instances present that have distinct application IDs but share state store names.

The change is backwards compatible: in case no KafkaStreams association for the thread
can be found, all local state stores are queried as before.

In case an associated KafkaStreams instance is found but the required StateStore is
not found in that instance, a warning is issued and backwards compatibility is
preserved by looking up all state stores.

A store within the thread's KafkaStreams instance is preferred over a "foreign" store with the same name.

A warning is issued if the requested store is not found within the thread's KafkaStreams instance.

The main benefit here is to get rid of randomly selecting stores across all KafkaStreams instances
in case a store with the same name is contained within multiple streams instances.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1161
2022-01-03 18:59:15 -05:00
Eduard Domínguez
63b306d34c GH-1176: KeyValueSerdeResolver improvements
Use extended properties when initializing Consumer and Producer Serdes.

Updated copyright years and authors.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1176
2021-12-10 14:02:29 -05:00
buildmaster
5cd8e06ec6 Bumping versions to 3.2.2-SNAPSHOT after release 2021-12-01 16:58:07 +00:00
buildmaster
79be11c9e9 Going back to snapshots 2021-12-01 16:58:07 +00:00
buildmaster
fc4358ba10 Update SNAPSHOT to 3.2.1 2021-12-01 16:55:37 +00:00
buildmaster
f3d2287b70 Bumping versions to 3.2.1-SNAPSHOT after release 2021-12-01 13:16:23 +00:00
buildmaster
220ae98bcc Going back to snapshots 2021-12-01 13:16:23 +00:00
buildmaster
bd3eebd897 Update SNAPSHOT to 3.2.0 2021-12-01 13:14:14 +00:00
Soby Chacko
ed8683dcc2 KafkaStreams binder health check improvements
Allow health checks on KafkaStreams processors that are currently stopped through
the actuator bindings endpoint. Add this only as an opt-in feature through a new
binder-level property, includeStoppedProcessorsForHealthCheck, which is false by default
to preserve the current health indicator behavior.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
Resolves #1175
2021-12-01 10:51:39 +01:00
Soby Chacko
60b6604988 New tips-tricks-recipes section in docs
Migrate the recipe section in the Spring Cloud Stream Samples
repository as Tips, Tricks and Recipes in the Kafka binder main docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1173
Resolves #1174
2021-12-01 10:36:31 +01:00
Oleg Zhurakousky
a3e76282b4 Merge pull request #1172 from sobychacko/fix-partitioning-interceptor
Fix PartitioningInterceptor CCE
2021-11-29 18:12:59 +01:00
Soby Chacko
c9687189b7 Fix PartitioningInterceptor CCE
The newly added DefaultPartitioningInterceptor must be explicitly
checked in order to avoid a CCE.

Related to resolving https://github.com/spring-cloud/spring-cloud-stream/issues/2245

Specifically for this: https://github.com/spring-cloud/spring-cloud-stream/issues/2245#issuecomment-977663452
2021-11-24 13:16:00 -05:00
Soby Chacko
5fcdf28776 GH-1170: Schema registry certificates
Move classpath: resources provided as schema registry certificates
into a local file system location.

Adding test and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1170
2021-11-23 18:23:55 -05:00
Soby Chacko
d334359cd4 Update spring-kafka to 2.8.0 Release 2021-11-23 10:22:18 -05:00
Pommerening, Nico
aff0dc00ef GH-1161: InteractiveQueryService improvements
This PR safeguards state store instances in case there are multiple KafkaStreams
instances present that have distinct application IDs but share state store names.

The change is backwards compatible: in case no KafkaStreams association for the thread
can be found, all local state stores are queried as before.

In case an associated KafkaStreams instance is found but the required StateStore is
not found in that instance, a warning is issued and backwards compatibility is
preserved by looking up all state stores.

A store within the thread's KafkaStreams instance is preferred over a "foreign" store with the same name.

A warning is issued if the requested store is not found within the thread's KafkaStreams instance.

The main benefit here is to get rid of randomly selecting stores across all KafkaStreams instances
in case a store with the same name is contained within multiple streams instances.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1161
2021-11-18 14:08:22 -05:00
Oleg Zhurakousky
ba122bd39d Changes related to GH-2245 from core 2021-11-16 14:52:58 +01:00
Oleg Zhurakousky
7840decc86 Changes related to GH-2245 from core 2021-11-16 14:43:43 +01:00
Soby Chacko
486469da51 Kafka binder test migration
- EnableBinding to functional
2021-11-11 16:34:22 -05:00
Soby Chacko
ed98f1129d Kafka Streams binder deprecated component removals 2021-11-09 14:07:23 -05:00
Soby Chacko
c32be995f6 4.0.x changes for Kafka Streams tests
Migrating StreamListener-based Kafka Streams binder tests to
use the functional model
2021-11-08 19:21:49 -05:00
buildmaster
d37cbc79d7 Going back to snapshots 2021-11-03 09:32:29 +00:00
buildmaster
7a03eeed02 Update SNAPSHOT to 3.2.0-RC1 2021-11-03 09:31:02 +00:00
Oleg Zhurakousky
3a88839a5f Disable 'testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest' temporarily 2021-11-03 10:17:03 +01:00
Soby Chacko
c9a07729dd Version updates and compile fixes
Spring Kafka: 2.8.0-RC1
Spring Integration: 5.5.5
Kafka Clients: 3.0.0

Remove schema registry references.
Updates for removed classes/deprecations in Kafka Streams client.
2021-11-01 22:11:06 -04:00
Soby Chacko
07f10f6eb5 Cleaning up
* Disconnect spring-cloud-schema-registry-client from Kafka Streams binder
  that is used for testing
* Deprecate MessageConverterDelegateSerde
* Remove MessageConverterDelegateSerdeTest
2021-10-28 19:46:46 -04:00
Gary Russell
6fdc663349 GH-1031: Retry/DLQ in Binder or Container
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1031

Provide a mechanism to provide retry/dlq configuration when adding a custom error handler,
effectively moving this functionality from the binder to the container.

Address PR comments; fix test to use embedded broker.
2021-10-25 15:10:21 -04:00
buildmaster
f3a954fad7 Going back to snapshots 2021-10-19 09:29:32 +00:00
buildmaster
7be0f1be23 Update SNAPSHOT to 3.2.0-M3 2021-10-19 09:28:11 +00:00
Oleg Zhurakousky
0b687ad0ab Update SK to 2.8.0-RC1 2021-10-19 11:16:55 +02:00
Soby Chacko
c0bece64bd README cleanup
The overview doc gets unnecessarily copied to the GitHub
repository root README. This only needs to reside in the
reference manual docs. Cleaning up the process that generates
the root README for the repository.
2021-10-12 17:20:33 -04:00
Gary Russell
2efd29fb27 GH-1138: HealthIndicator Improvements
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1138

Don't report DOWN if a container is stopped normally.
This is a valid state when containers are not auto-startup or are stopped while
the app remains running.

Containers are stopped abnormally when
- a listener throws an `Error`
- a `CommonContainerStoppingErrorHandler` (or similar) is configured to stop the
  container after an error.
2021-10-04 17:13:01 -04:00
Soby Chacko
82a3306cb9 GH-1157: Issues with Kafka Streams and Kotlin
Kafka Streams binder erroneously tries to parse regular,
non-Kafka-Streams Kotlin function registrations. Ignore
function beans ending in _registration in the Kafka Streams binder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1157
2021-10-01 11:09:58 -04:00
buildmaster
af0485e241 Going back to snapshots 2021-10-01 14:13:49 +00:00
buildmaster
ea8912b011 Update SNAPSHOT to 3.2.0-M2 2021-10-01 14:12:23 +00:00
Oleg Zhurakousky
d76d916970 Upgrade spring-kafka 2021-10-01 13:48:08 +02:00
Soby Chacko
ac0e462ed2 Doc clarification for Kafka Streams binder prefix
Polishing the docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1134
2021-09-28 18:07:52 -04:00
Soby Chacko
bd1b49222c GH-1156: Kafka Streams binder composition issues
When both regular Kafka and Kafka Streams functions are present,
the code that was added recently for function composition in
Kafka Streams binder was accidentally creating a bindable proxy
factory bean for non-Kafka-Streams functions. Resolving this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1156
2021-09-28 15:43:06 -04:00
Soby Chacko
9fd16416d6 GH-1149: Kafka Streams global config issues
When there are multiple functions, streamConfigGlobalProperties are overridden
for subsequent functions, after a binding-specific config takes effect in a function.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1149
2021-09-24 10:01:26 -04:00
bono007
a4c38b3453 GH-1152: Property binding in Kafka Streams binder
Add default mappings provider for Kafka Streams (move kafka streams default mapping to new provider)

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1152
2021-09-23 13:19:35 -04:00
Soby Chacko
a8c948a6b2 GH-1148: Native changes required
KafkaBinderConfiguration class needs more pruning for native compilation

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1148
2021-09-21 18:22:33 -04:00
Soby Chacko
d57091d791 GH-1145: Remove destroying producer factory
Remove the unnecessary call to destroy the producer when checking for partitions.
This way, the producer is cached and reused the first time data is produced.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1145
2021-09-16 14:46:37 -04:00
Gary Russell
2b7f6ecb96 Fix Doc Typos for KafkaBindingRebalanceListener 2021-09-15 17:35:17 -04:00
Soby Chacko
53d32c2332 GH-1140: CommonErrorHandler per consumer binding (#1143)
* GH-1140: CommonErrorHandler per consumer binding

Setting a CommonErrorHandler on a consumer binding through its bean name.
If present, the binder will resolve this bean and assign it to the listener
container.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1140

* Addressing PR review
2021-09-14 10:17:32 -04:00
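
For illustration, a CommonErrorHandler bean that such a binding could reference by name (a sketch using Spring Kafka's DefaultErrorHandler; the exact consumer-binding property that carries the bean name is not spelled out in the message):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.listener.CommonErrorHandler;
    import org.springframework.kafka.listener.DefaultErrorHandler;
    import org.springframework.util.backoff.FixedBackOff;

    @Configuration
    public class ErrorHandlerConfig {

        // Referenced from the consumer binding by its bean name ("myErrorHandler").
        @Bean
        public CommonErrorHandler myErrorHandler() {
            // Retry a failed record 3 times, 1 second apart, before giving up.
            return new DefaultErrorHandler(new FixedBackOff(1000L, 3L));
        }
    }
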
Soby Chacko
b7ebc185e7 GH-1141: Streams cleanup docs
Clarify streams cleanup default in the docs.
Effective from Spring Kafka 2.7, no cleanup will be performed on shutdown.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1141
2021-09-13 16:35:36 -04:00
Soby Chacko
56e25383f8 Fix wrong property in docs
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1137
2021-09-03 15:38:50 -04:00
Soby Chacko
b4c7c36229 GH-1135: Disable container retries when no DLQ set (#1136)
* GH-1135: Disable container retries when no DLQ set

Disable default container retries when binding retries are enabled
(maxAttempts > 1) and no DLQ set.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1135

* Address PR review comments

* Addressing PR review
2021-09-01 13:29:36 -04:00
Soby Chacko
6eed115cc9 Kafka Streams binder tests cleanup 2021-08-27 20:03:56 -04:00
Soby Chacko
8e6d07cc7b Update Kafka Streams branching docs/tests
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1133
2021-08-27 18:16:42 -04:00
Soby Chacko
adde49aab3 Refactoring Kafka Streams join tests
Coalesce the stream join tests in Kafka Streams binder around the functional model.
Remove duplicating the same join tests using the StreamListener model.
2021-08-25 15:45:38 -04:00
Soby Chacko
e500138486 Consumer/Producer prefix in Kafka Streams binder (#1131)
* Consumer/Producer prefix in Kafka Streams binder

Kafka Streams allows applications to separate the consumer and producer
properties using consumer/producer prefixes. Add these prefixes automatically
if they are missing from the application.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1065

* Addressing PR review comments
2021-08-25 13:57:42 -04:00
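
For example (a sketch; the keys assume the binder's configuration passthrough to StreamsConfig, and with this change the consumer./producer. prefixes would be added automatically if omitted):

    spring.cloud.stream.kafka.streams.binder.configuration.consumer.max.poll.records=500
    spring.cloud.stream.kafka.streams.binder.configuration.producer.linger.ms=100
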
Soby Chacko
970bac41bb Update pom.xml
Make the Maven configuration consistent with core Spring Cloud Stream
2021-08-24 16:54:19 -04:00
Soby Chacko
afe39bf78a Kafka Streams binding lifecycle changes
Start Kafka Streams bindings only when they are not running.
Similarly, stop them only if they are running.

Without these guards in the bindings for KStream, KTable and GlobalKTable,
NPEs may occur because the backing concurrent collections in KafkaStreamsRegistry
do not find the proper KafkaStreams object, especially when the StreamsBuilderFactoryBean
is already stopped through the binder-provided manager.
2021-08-24 16:53:23 -04:00
Derek Eskens
c1ad3006e9 Fixing consumer config typo in overview.adoc
ConsumerConfigCustomizer is the correct name of the interface.
2021-08-23 19:33:15 -04:00
Soby Chacko
ba2c3a05c9 GH-1129: Kafka Binder Metrics Improvements
Avoid blocking committed() call in KafkaBinderMetrics in a loop for
each topic partition.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1129
2021-08-23 16:14:45 -04:00
Soby Chacko
739b499966 Honoring auto startup in Kafka Streams binder (#1127)
* Honoring auto startup in Kafka Streams binder

When using the Kafka Streams binder, the processors are started
unconditionally, i.e. autoStartup is always true by default.
If spring.kafka.streams.auto-startup is set, then honor that
as the auto-startup flag in the binder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1126

* Addressing PR review comments

Auto startup flag is honored per individual consumer bindings as well.

* Addressing PR review comments

* Making KafkaStreamsRegistry#kafkaStreams set to use a concurrent Set.
2021-08-23 14:18:05 -04:00
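
A configuration sketch (binding name hypothetical; the binding-level flag assumes the core consumer property spelling):

    spring.kafka.streams.auto-startup=false
    spring.cloud.stream.bindings.process-in-0.consumer.auto-startup=false
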
Soby Chacko
1b26f5d629 GH-1112: Fix OOM in DlqSender.sendToDlq
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1112
2021-08-20 14:57:15 -04:00
Soby Chacko
99c323e314 Document Kafka Streams and Sleuth integration
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1102
2021-08-16 20:45:08 -04:00
Soby Chacko
f4d3715317 Recycle KafkaStreams Objects
In the event Kafka Streams bindings are restarted (stop/start)
using the actuator bindings endpoints, the underlying KafkaStreams
objects are not recycled. After restarting, it still sees the previous
KafkaStreams object. Addressing this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1119
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1120
2021-08-12 15:47:40 -04:00
Soby Chacko
d7907bbdcc Docs updates for Kafka Streams skipAndContinue 2021-08-12 14:43:22 -04:00
buildmaster
9b5e735f74 Going back to snapshots 2021-07-30 15:12:01 +00:00
buildmaster
201668542b Update SNAPSHOT to 3.2.0-M1 2021-07-30 15:10:38 +00:00
Soby Chacko
a5f01f9d6f GH-1110: Kafka Streams state machine changes
Restore Kafka Streams error state behavior in the binder, equivalent
to Kafka Streams prior to 2.8. Starting with 2.8, users can customize
the way uncaught errors are interpreted via an UncaughtExceptionHandler
in the application. The binder now sets a default handler that shuts down
the Kafka Streams client if there is an uncaught error.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
2021-07-29 13:35:08 -04:00
Soby Chacko
912c47e3ac Fix failing tests in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1109
2021-07-28 18:41:22 -04:00
Soby Chacko
001882de4e Upgrade versions
Spring Kafka: 2.8.0-M1
Spring Integration Kafka: 5.5.2
Kafka: 2.8.0

Ignore a few Kafka Streams binder tests temporarily.
2021-07-27 20:01:08 -04:00
Soby Chacko
54ac274ea3 GH-1085: Allow KTable binding on the outbound
At the moment, Kafka Streams binder only allows KStream bindings on the outbound.
There is a delegation mechanism in which we can still use KStream for output binding
while allowing the applications to provide a KTable type as the function return type.

Update docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1085

Resolves #1105
2021-07-16 09:46:04 +02:00
Soby Chacko
80b707e5e9 Fixing Kafka binder tests.
* Adding a delay in testDlqWithNativeSerializationEnabledOnDlqProducer to avoid a race condition.
  Awaitility is used to wait for the proper condition in this test.

* In the Micrometer registry tests, properly wait for the first message to arrive on the outbound
  so that the producer factory gets a chance to add the Micrometer producer listener completely
  before the test assertions start.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1103
2021-07-14 18:54:16 -04:00
Soby Chacko
13474bdafb Fixing Kafka Binder tests 2021-07-13 19:39:04 -04:00
Soby Chacko
d0b4bdf438 Revert "Temporarily disabling KafkaBinderTests"
This reverts commit 0bedc606ce.
2021-07-13 15:44:28 -04:00
Soby Chacko
0bedc606ce Temporarily disabling KafkaBinderTests 2021-07-13 15:03:40 -04:00
Oleg Zhurakousky
b5cb32767b Fix POMs for 3.2 2021-07-13 16:50:01 +02:00
Oleg Zhurakousky
1a45ff797a JUnit 5 migration related to boot 2.6 2021-07-13 13:00:55 +02:00
Soby Chacko
f7537e795e Adding docs for multi binder JAAS configuration
Adding documentation for connecting to multiple Kafka clusters
with separate JAAS configuration from within a single application.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/944
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/874
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/924
2021-07-07 15:37:37 -04:00
Soby Chacko
7cae3aa54f GH-1096: Named components in Kafka Streams
Support KIP-307 in Kafka Streams binder where the input (source)
and output (sink) bindings are customized with user-provided names.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1096

Resolves #1098
2021-07-05 14:48:09 +02:00
Soby Chacko
5454d54faf Support Kafka Streams binder function composition
Composed function definitions can start with a `java.util.function.Function`
or `java.util.function.BiFunction` and compose with other functions or consumers.
In the case of a consumer, this needs to be the last unit in the function definition.

`java.util.function.BiConsumer` is not eligible for function composition.

The first component in the definition can be a curried function as well.

Adding tests and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1088

Resolves #1091
2021-07-05 14:29:25 +02:00
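
To illustrate the shape of such a composition (a sketch with hypothetical bean names; the composed definition would be declared as spring.cloud.stream.function.definition=uppercase|log):

    import java.util.function.Consumer;
    import java.util.function.Function;

    import org.apache.kafka.streams.kstream.KStream;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ComposedFunctions {

        // First unit of the composition: a Function over the inbound KStream.
        @Bean
        public Function<KStream<String, String>, KStream<String, String>> uppercase() {
            return input -> input.mapValues(v -> v.toUpperCase());
        }

        // A Consumer must be the last unit in the definition.
        @Bean
        public Consumer<KStream<String, String>> log() {
            return stream -> stream.foreach((key, value) -> System.out.println(key + " -> " + value));
        }
    }
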
Soby Chacko
4d68282937 Refactor JAAS initializer tests 2021-06-23 21:33:11 -04:00
Soby Chacko
9cf4cdc7c1 KafkaStreamsBinderBootstrapTest updates
Migrate tests from StreamListener to the functional model.
2021-06-23 18:30:56 -04:00
Soby Chacko
69a3c36db9 GH-1094: Fix JAAS initializer tests on CI
Before running the JAAS initializer security tests, remove references to
JAAS-related config files created previously in other tests. This is done
through clearing a system property (java.security.auth.login.config) in
the tests. Without clearing this property, these tests run into a race condition.

Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1094
2021-06-23 17:56:38 -04:00
Soby Chacko
b8be381828 Temporarily disabling Kafka Streams jaas test 2021-06-22 21:31:36 -04:00
Soby Chacko
162993dd46 Test fixes for Kafka Streams JAAS config tests
Provide different application IDs.
2021-06-22 21:02:10 -04:00
Soby Chacko
ee3096658a GH-1092: Fix JAAS config issues (Kafka Streams)
Currently, Kafka Streams binder does not honor JAAS configuration properties.
Address this by adding the same `KafkaJaasLoginModuleInitializer` bean used in
regular Kafka binder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1092
2021-06-22 17:16:33 -04:00
Soby Chacko
6e46054d7b Address deprecations for StreamsBuilderFactoryBeanCustomizer
StreamsBuilderFactoryBeanCustomizer is now renamed to StreamsBuilderFactoryBeanConfigurer
2021-06-08 16:56:47 -04:00
Gary Russell
7bc90c10a2 GH-1084: Add txCommitRecovered Property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1084

Update copyrights
2021-06-02 18:01:32 -04:00
buildmaster
5adeea2acb Bumping versions to 3.1.4-SNAPSHOT after release 2021-05-26 15:50:17 +00:00
buildmaster
65664a2d4c Going back to snapshots 2021-05-26 15:50:17 +00:00
buildmaster
f5e12a29c1 Update SNAPSHOT to 3.1.3 2021-05-26 15:48:31 +00:00
Soby Chacko
416580e607 Version updates 2021-05-26 10:38:33 -04:00
aleksevi
a81093734e GH-1081 Suspicious multiplication of ScheduledExecutorService in the method "bindTo" of class "KafkaBinderMetrics"
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1081

The consequence of the missing scheduler.shutdown() call:
1) First, a pool with 1 thread is created.
2) Then the reference to it is lost and a pool with 2 threads is created.
3) But the first pool has not yet been collected by the GC, so there are now 3 threads together. And so on.

Each thread does nothing, but it takes system memory and takes part in the scheduling process.
After 30 topics, for example, we potentially have (30+1)*15=465 threads.
That is already a serious additional load from context switching and native memory.

Removed waiting after scheduler stop request.

checkstyle fixes
2021-05-26 10:37:42 -04:00
Soby Chacko
5f4e3eebc5 Removing deprecated setAckOnError method call (#1082)
Going forward, the binder will use the default ack-on-error settings
in Spring Kafka, which is true by default. If applications
need to change this behavior, then a custom error handler must
be provided through a ListenerContainerCustomizer.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1079
2021-05-24 16:46:26 -04:00
Taras Danylchuk
cb42e80dac gh-1059: Added health indicator for Kafka message listener containers created via the Kafka binder
renamed variables, return unknown on empty listener list

added tests, fixed PR comments

checkstyle fix
2021-05-17 15:20:23 -04:00
Soby Chacko
4fb5037fd7 Use dlqProducerProperties for DLQ topics
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1077
2021-05-14 11:54:01 -04:00
Soby Chacko
d8b8c8d9fd Resetting offsets docs clarification 2021-05-06 18:16:47 -04:00
Soby Chacko
9e4a1075d4 Native compilation changes for Kafka Streams binder
This commit effectively removes changes introduced by a4ad9e2c0b.
Removing the spring.factories mechanism of registering binders in favor of using spring.binders.
2021-04-29 19:35:15 -04:00
Soby Chacko
79497ad264 Native compilation changes
This commit effectively reverts 4e6881830a

Remove the spring.factories changes introduced in the commit mentioned above.
Revert the binder to use spring.binders mechanism to register binder configurations.
2021-04-29 19:08:01 -04:00
Oleg Zhurakousky
7e48afa005 Merge pull request #1067 from sobychacko/kafka-streams-native-1
Native changes required by Kafka Streams binder
2021-04-28 19:13:52 +02:00
Oleg Zhurakousky
87ac491230 Merge pull request #1066 from sobychacko/kafka-binder-native-1
More changes required for native compilation
2021-04-28 19:13:43 +02:00
Soby Chacko
a4ad9e2c0b Native changes required by Kafka Streams binder 2021-04-28 12:26:53 -04:00
Soby Chacko
4e6881830a More changes required for native compilation
These are Kafka binder changes required for native applications.
2021-04-28 12:22:51 -04:00
Soby Chacko
829ce1cf7e Native compilation changes
This commit is a counter part to the commit below in core:

ae017e222f
2021-04-22 11:35:08 -04:00
buildmaster
c5289589c1 Bumping versions 2021-03-27 08:52:52 +00:00
Soby Chacko
dd607627ed Fix destination as pattern issues in Kafka Streams (#1054)
When destination-is-pattern property is enabled and native deserialization
is used, then Kafka Streams binder does not use the correct Serde types.
Fixing this issue by providing the proper Serdes to the StreamsBuilder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1051
2021-03-25 16:20:33 -04:00
Gary Russell
0ea4315af8 GH-1043: Support Custom BatchMessageConverter
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1043
2021-03-24 13:23:53 -04:00
Soby Chacko
f25dbff2b7 Kafka Streams component bean scanning enhancements
When functional beans are discovered from libraries on the classpath,
it causes issues when Kafka Streams functions are scanned and bootstrapped.
The binder expects users to provide a function definition property even though the
application does not directly include, or is not aware of, these functional beans.
Fixing this issue by excluding non-Kafka-Streams functions from scanning.

Null check around adding micrometer listener to the StreamsBuilder.

Addressing issues raised by the comments here:
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1030#issuecomment-804039087
2021-03-22 19:06:07 -04:00
Gary Russell
f8b290844b GH-1046: Fix Out of Order Offset Commit with DLQ
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1046

When using ack mode `MANUAL_IMMEDIATE`, we must acknowledge the delivery on
the container thread, otherwise the commit will be queued for later processing,
possibly causing out of order commits, incorrectly reducing the committed offset.

Wait for the send to complete before acknowledging; use a timeout slightly larger
than the configured producer delivery timeout to avoid premature timeouts.

Tested with user-provided test case.

Fix missing timeout buffer.

More context for this issue: https://gitter.im/spring-cloud/spring-cloud-stream?at=6050f43595e23446e43cacd1
2021-03-19 15:36:29 -04:00
Gary Russell
a7e025794c Fix typo in doc 2021-03-18 13:29:21 -04:00
Gary Russell
634a73c9ff Doc Polishing for resetOffsets (#1045)
* Doc Polishing for resetOffsets

* Fix typos.
2021-03-18 13:13:18 -04:00
buildmaster
bc2f692964 Bumping versions to 3.1.3-SNAPSHOT after release 2021-03-16 13:53:15 +00:00
buildmaster
7eefe6c567 Going back to snapshots 2021-03-16 13:53:15 +00:00
buildmaster
c5d108ef89 Update SNAPSHOT to 3.1.2 2021-03-16 13:51:50 +00:00
Soby Chacko
47e7ca07a8 KTable event type routing
Introduce event type based routing for KTable types. This is
already available for KStream types.
See this commit: 386a361a66

Adding a new DeserializationExceptionHandler to address the event type routing
use case for GlobalKTable.
See this comment: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003#issuecomment-799847819

Upgrade Spring Kafka to 2.6.6
Upgrade Spring Integration to 5.4.4
2021-03-15 20:23:16 -04:00
Soby Chacko
7cc001ac4c Support KStream lifecycle through binding endpoint (#1042)
* Support KStream lifecycle through binding endpoint

Introduce the ability for Kafka Streams application's lifecycle
management through actuator binding endpoints. Kafka Streams
only supports STOP and START operations. PAUSE/RESUME operations
that are available in regular message-channel-based binders
are not available in the Kafka Streams binder.

Adding tests and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1038
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/850

https://stackoverflow.com/questions/60282225/why-doesnt-kstreambinder-have-a-lifecycle-for-defaultbinding

* Addressing PR review comments

* Addressing PR review

* cleanup unused code
2021-03-15 16:14:19 -04:00
Soby Chacko
a7299df63f Cleanup Kafka Streams metrics support
StreamsListener (for Micrometer) is now directly available
in Spring Kafka starting from 2.5.3 (as KafkaStreamsMicrometerListener).
Removing the temporary interface added in the binder.

Addressing PR review comments.

Modifying tests to verify.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1040
2021-03-12 15:14:19 -05:00
Soby Chacko
33aa926940 Fix checkstyle 2021-03-09 17:22:23 -05:00
Soby Chacko
a1fb7f0a2d Kafka Streams function detection improvements (#1033)
* Kafka Streams function detection improvements

Allow Kafka Streams functions defined as Component beans
to be candidates for establishing bindings. Currently, Kafka Streams
functions need to be written as functional beans using @Bean.
Adding this improvement so that if applications prefer to write
the business logic using @Component, then it is possible to do so.

Adding test cases to verify the behavior.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1030

* Kafka Streams functions and bean name overriding

When Kafka Streams function bean names are overridden,
there is an issue with scanning them properly for binding.
Addressing this issue.

* Adding docs for Component based model

* Addressing PR review comments
2021-03-09 15:50:10 -05:00
Soby Chacko
e2eca34e4b Cleaning up Kafka producer factories
Calling DisposableBean.destroy() on manually created
Kafka producer factories. This affects both Kafka and
Kafka Streams binders (in the case of Kafka Streams binder,
it only matters when using the DLQ support).

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1034
2021-03-02 17:26:37 -05:00
Soby Chacko
bc02da2900 Docs cleanup
Apply changes from a previous commit to the proper docs file.

See the prior relevant commit at the URL below:

af5778d157

Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1032
2021-03-02 16:42:21 -05:00
Soby Chacko
1f52ec5020 Fix minor issue processing ackMode 2021-02-22 12:22:28 -05:00
Soby Chacko
394b8a6685 Restore autoCommitOffset behavior
Until autoCommitOffset is fully removed (it is deprecated already in 3.1),
honor the old behavior if the application uses the property.

Restore autoCommitOnError from deprecated status to active for polled consumers.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1026
2021-02-09 17:48:56 -05:00
Soby Chacko
32939e30fe AdminClient customizer (#1023)
* AdminClient customizer

Provide the ability for applications to customize AdminClient by
introducing a new interface AdminClientConfigCustomizer.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1014

* Addressing PR review comments
2021-02-03 10:25:03 -05:00
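
A sketch of such a customizer (assuming AdminClientConfigCustomizer exposes a single configure(Map<String, Object>) method, in line with the binder's other config customizers):

    import java.util.Map;

    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.springframework.cloud.stream.binder.kafka.AdminClientConfigCustomizer; // package assumed
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class AdminClientCustomization {

        @Bean
        public AdminClientConfigCustomizer adminClientConfigCustomizer() {
            // Keys are plain AdminClient configs; here the request timeout is tightened.
            return (Map<String, Object> props) ->
                    props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000);
        }
    }
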
Surendra Galwa
cadb5422cc Fix property name in startOffset 2021-01-27 17:27:25 -05:00
buildmaster
99880ba216 Bumping versions to 3.1.2-SNAPSHOT after release 2021-01-27 17:52:34 +00:00
buildmaster
9e9e0f7ea3 Going back to snapshots 2021-01-27 17:52:34 +00:00
buildmaster
e1648083e6 Update SNAPSHOT to 3.1.1 2021-01-27 17:51:08 +00:00
Soby Chacko
87491118c3 Update spring-cloud-build parent 2021-01-22 15:59:11 -05:00
Soby Chacko
cf7acb23e8 Update Kafka dependencies 2021-01-22 15:12:29 -05:00
Soby Chacko
1fbb6f250e Make health indicator beans public
Currently, the health indicator beans for both binders are not
customizable due to package visibility issues. Making them public.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1015
2021-01-19 11:01:41 -05:00
Inigo
ae4abe4f33 fix property name in consuming batches 2021-01-06 14:54:08 -05:00
Marco Leichsenring
5d312228db Fix topic name confg in docs 2021-01-06 14:52:44 -05:00
Mehmet
69c5b67126 Removed stray 'j' letter from exception message
Not sure, but it seems like a typo.
2021-01-06 14:51:55 -05:00
Soby Chacko
83cfdfc532 Fix checkstyle issues 2021-01-05 16:33:15 -05:00
Soby Chacko
c6154eecfc Update copyright year for a previous commit 2021-01-05 16:02:56 -05:00
Soby Chacko
f8a4488a0e Update Kafka Streams binder health indicator
Update the health indicator code for Kafka Streams binder
to the latest changes made in the 3.0.x branch.
2021-01-05 15:58:24 -05:00
Soby Chacko
ffde5d35db Kafka Streams binder health indicator improvements
When using a multi-binder setup in the Kafka Streams binder, there is an issue
in which the binder health indicator is not getting bootstrapped because
ConditionalOnBean is unable to find a match for the KafkaStreamsRegistry bean.
Fixing this issue by using an ObjectProvider instead of ConditionalOnBean.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1006
2021-01-05 15:55:40 -05:00
Oleg Zhurakousky
5921d464ed Fix snapshot versioning schema 2020-12-21 15:46:10 +01:00
buildmaster
023d3df7f7 Bumping versions to 3.1.1.SNAPSHOT after release 2020-12-21 12:28:52 +00:00
buildmaster
846534fe84 Going back to snapshots 2020-12-21 12:28:52 +00:00
buildmaster
2eedc8a218 Update SNAPSHOT to 3.1.0 2020-12-21 12:27:25 +00:00
buildmaster
49d263eafc Going back to snapshots 2020-12-11 15:12:31 +00:00
buildmaster
94f398a3e4 Update SNAPSHOT to 3.1.0-RC1 2020-12-11 15:11:13 +00:00
Soby Chacko
7cba801bef Checkstyle cleanup 2020-12-10 21:35:16 -05:00
Soby Chacko
db14154398 Cleanup docs in Kafka Streams binder
- Add StreamListener deprecation notice in docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1005
2020-12-10 21:30:57 -05:00
Soby Chacko
386a361a66 Event type based routing in Kafka Streams binder
Introducing the capability of routing records based on
event types. If a header in the incoming record contains
the event type set on the binding, then the function
associated with that binding gets invoked.

Adding test/docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003
2020-12-10 21:18:00 -05:00
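
A configuration sketch (hedged: the commit message does not spell out the property names; a binding-level eventTypes consumer property is assumed, with a hypothetical binding name):

    # Invoke the function bound to process-in-0 only for records whose
    # event-type header matches one of these values.
    spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypes=orderCreated,orderUpdated
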
hemeda3
27cb6f4b5f Fixed typo springc to spring
Fixed typo `springc` to `spring` in two lines: 
`spring.cloud.stream.function.bindings.process-in-0=users`
`spring.cloud.stream.function.bindings.process-in-0=regions`
2020-12-09 14:23:05 -05:00
Soby Chacko
58ac92ec71 Fix broken documentation links
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/986
2020-12-04 18:09:32 -05:00
Soby Chacko
d1c62bbc26 Kafka Streams binder producer/consumerProperties
Fix the issue of Kafka Streams binder level producerProperties
and consumerProperties (...binder.producerProperties and
...binder.consumerProperties), are not taken into consideration.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/997
2020-12-04 17:54:54 -05:00
Soby Chacko
afe13cc045 Adding docs for Meter filtering
KafkaBinderMetrics docs for meter filtering.
Refactor METRIC_NAME to OFFSET_LAG_METRIC_NAME and make it public.
2020-12-04 16:52:20 -05:00
Soby Chacko
42c9af019e Producer/Consumer config customizer changes
* Provide binding and destination names to the configure method
  in ProducerConfigCustomizer and ConsumerConfigCustomizer
  so that those can be used in the customization.
* Modify tests and docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/996
2020-12-04 16:16:10 -05:00
Soby Chacko
675c2e4940 KafkaBinderMetrics NoopGauge/filtering changes (#1001)
* KafkaBinderMetrics NoopGauge/filtering changes

Fixing the problem where the KafkaBinderMetrics scheduled task
for finding the offset lag gets triggered even when the
gauge is registered as a NoopGauge in the MeterRegistry.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/995

* Addressing PR review comments

* Fix typo
2020-12-04 16:04:09 -05:00
Soby Chacko
a1b31e67c4 Support allowNonTransactional config
Add a new producer extended property for allowNonTransactional.
When set to true, records published to this output binding will
not be run in a transaction, unless one is already in progress.

By default, all output bindings associated with a transactional
binder publish in a new transaction. This new property can be
used to override this behavior.

Addressing PR review comments.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/990
2020-12-04 11:49:13 -05:00
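
For example (a sketch; the binding name is hypothetical):

    spring.cloud.stream.kafka.bindings.output-out-0.producer.allowNonTransactional=true
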
Soby Chacko
5a7cc9f257 Fix Serde inference issues in Kafka Streams binder
When there are multiple functions present with different outbound target types,
there is an issue of one function overriding the target type of a previous function
in the catalogue where the binder stores the target type information.
This causes problems for the binder-initiated Serde inference. Addressing the issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/994
2020-12-03 19:43:07 -05:00
Soby Chacko
fc184ba422 Setters for fields that need customization
See this commit in core for details:

442c72b37c
2020-11-20 14:38:36 -05:00
Taras Danylchuk
526627854a Altering existing topics only allowed via opt-in
Altering existing topic configurations based on a new binder property.
Existing topic configurations are only modified if `autoAlterTopics` is
enabled. By default, this is disabled.

Adding tests to verify.

Docs.

Logging level changes.

Checkstyle fixes
2020-11-19 15:36:09 -05:00
Soby Chacko
6be541c070 Copy cert files from classpath to file-system (#989)
* Copy cert files from classpath to file-system

If `ssl.truststore.location` and `ssl.keystore.location` are
provided as classpath resources, convert them to absolute paths
on the filesystem. This is because of a restriction in the Kafka client
in which it does not allow certificates to be read from the classpath.

See these issues for more details:
https://issues.apache.org/jira/browse/KAFKA-7685
https://cwiki.apache.org/confluence/display/KAFKA/KIP-398%3A+Support+reading+trust+store+from+classpath

This commit allows the Spring Cloud Stream application to provide the
cert files as classpath: resources; the binder internally moves
them to a location on the local filesystem and then uses that absolute
path as the value for the cert locations. Adding properties for optional paths
to move the files to. If no values are provided for these properties,
then the system's /tmp directory is used.

Adding tests and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/985

* Addressing PR review comments

* Addressing further PR review comments

* Consolidate keystore/truststore filesystem locations into a single property

* Addressing PR review
2020-11-17 13:49:04 -05:00
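
A configuration sketch (the first key is a standard Kafka SSL config routed through the binder; the consolidated directory property name is an assumption based on the PR notes):

    spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location=classpath:certs/truststore.jks
    # Optional: where the binder copies the store on the local filesystem (defaults to /tmp).
    spring.cloud.stream.kafka.binder.certificateStoreDirectory=/var/kafka-certs
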
buildmaster
1afb22d65f Going back to snapshots 2020-11-17 16:04:44 +00:00
buildmaster
097fb89d9e Update SNAPSHOT to 3.1.0-M4 2020-11-17 16:03:30 +00:00
Oleg Zhurakousky
0ad0a31b4e Disabled JAAS test 2020-11-17 16:53:27 +01:00
Oleg Zhurakousky
19e97dd9c4 Update versions of spring kafka and spring integration kafka 2020-11-17 16:26:07 +01:00
gustavomonarin
676764b923 Fix documentation for kafka streams concurrency on binder level
The property spring.cloud.stream.binder.configuration.num.stream.threads does not work and is silently ignored.

The property spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads works fine and is already covered by tests at MultipleFunctionsInSameAppTests#125

resolves #987
2020-11-11 13:55:35 -05:00
Soby Chacko
d8a678c77e Fix KafkaBinderTests - testRecordMetadata 2020-11-11 13:40:19 -05:00
Soby Chacko
23ce9e3d6e Fix NPE in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/981
2020-11-05 12:34:59 -05:00
Soby Chacko
73921db3ec Fixing test failure
Fixing Kafka Streams binder retry tests due to a
wrong conditional check.
2020-11-05 11:05:22 -05:00
Soby Chacko
f1e3a0bdd6 Allow retries in Kafka Streams binder (#980)
* Allow retries in Kafka Streams binder

Provide applications the capability to retry critical sections of the
business logic. This is accomplished through a new API with which the
critical path can be wrapped inside a Callable.

Adding tests and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/945

* Reworking the PR

Remove the binder API that was added before for retrying.
Reuse binding provided RetryTemplate.

Tests and docs

* Cleanup

* Addressing PR review comments

* Fix typo
2020-10-30 16:39:49 -04:00
Soby Chacko
97e3b61d14 Adding improvements to InteractiveQueryService
New API additions.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/942
2020-10-28 18:58:18 -04:00
Soby Chacko
50f4470fcf Update deprecated API usage in InteractiveQueryService
Use queryMetadataForKey instead of metadataForKey

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/941
2020-10-28 15:24:09 -04:00
Soby Chacko
45f1927c6f Expand messageKeyExpression docs (#978)
* Expand messageKeyExpression docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/972

* Update docs/src/main/asciidoc/overview.adoc

Co-authored-by: Gary Russell <grussell@vmware.com>

Co-authored-by: Gary Russell <grussell@vmware.com>
2020-10-27 11:10:33 -04:00
Soby Chacko
0a0d3a1057 Remove duplicate KafkaStreams topology endpoint
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/895
2020-10-22 12:31:40 -04:00
Soby Chacko
5cdd8a09f9 Custom DLQ Destination Resolver (#976)
* Custom DLQ Destination Resolver

Allow applications to provide a custom DLQ destination resolver
implementation through a new interface, DlqDestinationResolver,
as part of the binder's public contract. This interface is a BiFunction
extension with which applications can exercise more fine-grained
control over where to route records in error.

Adding test to verify.

Adding docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/966

* Add DlqDestinationResolver to MessageChannel based binder.

Tests and docs
2020-10-21 12:02:02 -04:00
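
A sketch of such a resolver (assuming, per the message, that DlqDestinationResolver is a BiFunction over the failed ConsumerRecord and the exception, returning the DLQ topic name):

    import org.springframework.cloud.stream.binder.kafka.DlqDestinationResolver; // package assumed
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class DlqRouting {

        // Route each record in error to a per-source-topic DLQ, e.g. "orders" -> "orders.dlq".
        @Bean
        public DlqDestinationResolver dlqDestinationResolver() {
            return (consumerRecord, exception) -> consumerRecord.topic() + ".dlq";
        }
    }
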
Soby Chacko
514530db22 Upgrade dependencies
Spring Kafka -> 2.6.3-SNAPSHOT
Spring Integration Kafka -> 5.4.0-SNAPSHOT
Kafka version -> 2.6.0
Use Kafka_2.13 for tests

Unignore the JAAS security tests.
Unignore a few Kafka Streams binder tests.
2020-10-16 16:44:32 -04:00
Gary Russell
50ec8f0919 Change Log etc. Replication Factor
If the user has not explicitly set the `StreamsConfig.REPLICATION_FACTOR_CONFIG`,
set it from the binder property.

This is used for infrastructure topics (change logs and repartition topics).
2020-10-15 16:22:41 -04:00
Gary Russell
829d1c651a GH-968: Propagate Application Context
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/968
2020-10-12 13:59:25 -04:00
Soby Chacko
200a5efb64 Simplifying bootstrap-server config evaluation
If both Boot and binder level configs for bootstrap servers are present, the Boot
one currently always wins regardless of any binder settings, unless the Boot one
evaluates to the default (localhost:9092). This is especially a problem in a
multi-binder scenario. Addressing this issue by simplifying the evaluation and
always giving the binder config the highest precedence.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/967
2020-10-12 13:18:47 -04:00
Soby Chacko
7f3a7f856f Kafka binder metrics improvements (#965)
* Kafka binder metrics improvements

KafkaBinderMetrics has a blocking call in which it waits for the default timeout of
60 seconds if the Kafka broker is down. This happens for each topic within a consumer
group. Refactor this code so that this check is performed in a periodic task,
and if the runtime check fails to return within a smaller time window (5 seconds),
return immediately by providing the latest value from the periodic task results.

Periodic task for computing the lags is run every 60 seconds.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/809

* Addressing PR review
2020-10-05 13:19:01 -04:00
buildmaster
1358bdfeec Going back to snapshots 2020-10-02 14:10:39 +00:00
buildmaster
f8dac888e4 Update SNAPSHOT to 3.1.0-M3 2020-10-02 14:09:20 +00:00
Soby Chacko
50380dae69 Ignore a few Kafka Streams tests temporarily 2020-10-02 09:12:22 -04:00
Oleg Zhurakousky
23b9d5b4c6 Temporary disable failing tests 2020-10-02 12:59:18 +02:00
buildmaster
7bebe9b78f Bumping versions 2020-09-25 10:42:44 +00:00
Oleg Zhurakousky
c44c17008c Fix docs links 2020-09-24 16:22:55 +02:00
Soby Chacko
33fa713a9f Customizing producer/consumer factories (#963)
* Customizing producer/consumer factories

Adding hooks by providing Producer and Consumer config
customizers to perform advanced configuration on the producer
and consumer factories.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/960

* Addressing PR review comments

* Further PR updates
2020-09-23 13:21:31 -04:00
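
A sketch of a consumer-side hook (assuming each customizer exposes a single configure(Map<String, Object>) method over the factory's configuration map; the producer-side customizer would mirror it):

    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.springframework.cloud.stream.binder.kafka.ConsumerConfigCustomizer; // package assumed
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ConsumerCustomization {

        @Bean
        public ConsumerConfigCustomizer consumerConfigCustomizer() {
            // Applied to the consumer factory's configuration map before binding starts.
            return (Map<String, Object> props) ->
                    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 250);
        }
    }
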
Soby Chacko
769ed56049 Deprecate autoCommitOffset in favor of ackMode (#957)
* Deprecate autoCommitOffset in favor of ackMode

Deprecate autoCommitOffset in favor of using a newly introduced consumer property ackMode.
If the consumer is not in batch mode and ackEachRecord is enabled, then the container
will use RECORD ackMode. Otherwise, the ackMode provided through this property is used.
If none of these are true, then it will defer to the default setting of BATCH ackMode
set by the container.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/877

* Address PR review comments

* Addressing PR review comments
2020-09-21 11:16:51 -04:00
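
For example (a sketch; the binding name is hypothetical, and the values mirror Spring Kafka's ContainerProperties.AckMode):

    spring.cloud.stream.kafka.bindings.process-in-0.consumer.ackMode=MANUAL
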
Heiko Does
859952b32a GH-951 Update TopicConfig when different from topic properties
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/951
2020-09-11 16:02:52 -04:00
cleverchuk
1e9aa60c4e Fix Health indicator/partition leader issues
* Fix health indicator to properly indicate partition failure
 * Add new flag to control binder health indicator behavior
 * Regardless of the consumer that is reading from a partition, if the binder
   detects that a partition for the topic is without a leader, mark the binder
   health as DOWN (if the flag is set to true).
 * Remove synchronize block since only one thread executes the block
 * Add Docs for the new binder flag
 * Fix checkstyle issues
2020-09-11 12:06:28 -04:00
Soby Chacko
4161f875ed Change default replication factor to -1 (#956)
* Change default replication factor to -1

The binder now uses a default value of -1 for the replication factor, signaling the
broker to use its defaults. Users who are on Kafka brokers older than 2.4
need to set this to the previous default value of 1 used in the binder.

In either case, if there is an admin policy that requires replication factor > 1,
then that value must be used instead.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/808

* Addressing PR review comments
2020-08-28 12:18:53 -04:00
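
Users on older brokers can pin the previous default explicitly via the existing binder-level property (a sketch):

    spring.cloud.stream.kafka.binder.replicationFactor=1
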
Soby Chacko
f2e543c816 Remove unnecessary configs from binder
The following two properties are removed from the producer configs in the binder.

(ProducerConfig.RETRIES_CONFIG, 0)
(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432)

In the case of RETRIES, overriding the Kafka client default (MAX_INT) was a bug, and
in the latter case the setting is unnecessary as it matches the client default.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/952
2020-08-26 18:00:45 -04:00
Soby Chacko
ac4d9298c9 Fix test failures in KafkaBinderMeterRegistryTest 2020-08-26 17:11:49 -04:00
ncheema
34c4efb35c Fix micrometer configuration for multiBinder
- In a multi-binder configuration, the MeterRegistry is loaded in the outer context,
   hence removing the conditional MeterRegistry bean check

 - Fix checkstyle issues
2020-08-18 17:59:52 -04:00
Aldo Sinanaj
5b3b92cdb9 GH-932: Added zstd compression type 2020-08-13 15:57:44 -04:00
Christian Tzolov
a6ac6e3221 Fix mime type all support for KafkaNullConverter 2020-08-06 14:21:23 -04:00
Oleg Zhurakousky
6280489be8 Add RSocket snapshot repo 2020-08-06 19:26:28 +02:00
Navjot Cheema
80df80806b Producer compressionType documentation updated 2020-07-20 13:59:34 -04:00
buildmaster
ec52fbe2eb Going back to snapshots 2020-07-20 16:28:42 +00:00
buildmaster
5e128aafdc Update SNAPSHOT to 3.1.0-M2 2020-07-20 16:27:22 +00:00
Oleg Zhurakousky
f8d0b8bde2 Change SIK version to 3.3.0.RELEASE 2020-07-20 18:13:04 +02:00
Soby Chacko
4159217023 Kafka Streams metrics changes
* Kafka Streams metrics in the binder for Boot 2.2 users are streamlined to
  reflect the Micrometer native support added with version 1.4.0 which is
  available through Boot 2.3. While Boot 2.3 users will get this native support
  from Micrometer, Boot 2.2 users will still rely on the custom implementation
  in the binder. This commit aligns that custom implementation more with
  the native implementation.

* Disable the custom Kafka Streams metrics bean which is mentioned above
  (KafkaStreamsBinderMetrics) when the application is on Boot 2.3, as this
  implementation is only applicable for Boot 2.2.x.

* Update docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/880
2020-07-17 20:28:47 -04:00
Jay Bryant
087b09d193 Wording changes (#934)
* Wording changes

Replacing some terms

* Wording changes

One more change (missed a gerund form).
2020-07-14 11:34:54 -04:00
Gary Russell
83c42a55e9 GH-935: Fix KafkaNullConverter supported MimeType
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/935

Spring Cloud Function now checks if a converter supports the mime type before
calling it. Previously, the converter supported no mime types, so it was never
called, breaking Kafka Tombstone record processing (outbound).

The converter must support all mime types so it can perform a no-op conversion,
retaining the `KafkaNull`.

The abstract converter will return null whenever the payload is not a `KafkaNull`,
which is a signal to spring-cloud-function to try the next converter.
2020-07-13 17:24:22 -04:00
Gary Russell
1edc2c714a Upgrade parent pom to spring-kafka 2.5.3
- match the version used by Boot 2.4.0-SNAPSHOT
- also SIK 3.3.1.BUILD-SNAPSHOT
2020-07-13 14:21:13 -04:00
Soby Chacko
dc7c8d657a Fixing javadoc errors
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/915
2020-06-16 18:37:30 -04:00
Soby Chacko
a549810899 Renaming Kafka Streams topology actuator endpoint
The Kafka Streams topology actuator endpoint had a conflict with the JMX exporter
and was causing some IDE issues. Renaming this endpoint to kafkastreamstopology.
Renaming the underlying methods in this actuator endpoint implementation.

Updating docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/895
2020-06-15 18:56:18 -04:00
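For anyone trying the renamed endpoint, a minimal sketch of exposing it over the web, using the standard Spring Boot actuator exposure property (the endpoint id comes from this commit):

[source,properties]
----
# Expose the renamed Kafka Streams topology actuator endpoint
management.endpoints.web.exposure.include=kafkastreamstopology
----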
Soby Chacko
aff8b8487c Add junit-vintage-engine dependency
With Boot 2.4, JUnit 4 tests need this dependency.

Fix tests.
2020-06-15 17:01:48 -04:00
Gary Russell
95bc54b991 GH-914: Add Micrometer Producer/Consumer Listeners
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/914

Boot no longer uses the deprecated JMX MBean scraping provided by Micrometer.

Add configuration to add Micrometer Meters when Micrometer and spring-kafka 2.5.x are on
the class path.

Micrometer for Streams

- workaround until SK 2.5.3
2020-06-15 11:36:49 -04:00
Marcin Grzejszczak
38bea7e7da Added missing JAR packaging 2020-06-08 18:42:49 +02:00
Marcin Grzejszczak
201feca206 Migrated to new documentation generation 2020-06-08 18:41:25 +02:00
Soby Chacko
21a8f89f22 Update version to 3.1.0-SNAPSHOT 2020-06-02 13:58:54 -04:00
Soby Chacko
7fc9145a21 Fix a failing test due to application ID clash 2020-06-02 12:56:08 -04:00
Soby Chacko
473ca53723 Configuring Kafka producer timeout
Allow setting a timeout for closing the producer.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/891
Resolves #909
2020-06-02 12:54:42 -04:00
Soby Chacko
788dc28d7e Simplify the handling of concurrency
In the Kafka Streams binder, we use a complex strategy to handle
application-provided concurrency. Simplify this process while
keeping the binder aligned with the boot -> binder -> binding
hierarchy for concurrency (num.stream.threads).

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/905
Resolves #907
2020-06-02 12:42:14 -04:00
Soby Chacko
dbc00ffa92 Docs changes for consumer batch.mode
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/892\#issuecomment-631862705
2020-06-02 12:38:44 -04:00
Soby Chacko
240ae8282e KafkaTopicProvisioner improvements
Use a retry template for the topic-description call made through the
admin client when provisioning producer destinations, so that if the
operation fails, it is retried with the default retry settings
in the provisioner.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/888
2020-06-02 12:38:29 -04:00
Soby Chacko
7adb10bcd2 Adding state store beans for KTable binding
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/897
2020-06-02 12:38:14 -04:00
Soby Chacko
8be9fe6abc Ignore test 2020-06-02 12:37:51 -04:00
Soby Chacko
04e0bf628c Kafka Streams binder test triage.
Fixing two tests in Kafka Streams binder that fail with the full suite.
See the comments on the code committed for more details.
2020-06-02 12:37:37 -04:00
Soby Chacko
79323588ba Ignore a test temporarily 2020-06-02 12:37:17 -04:00
Soby Chacko
395683fdaf Fixing random test failure on CI 2020-06-02 12:36:55 -04:00
Soby Chacko
2a25f7c1f3 Cleanup test 2020-06-02 12:36:13 -04:00
Soby Chacko
5fa88e11ec Fix checkstyle 2020-06-02 12:35:45 -04:00
Soby Chacko
ee3279072d Fixing a random test failure on CI 2020-06-02 12:35:20 -04:00
Soby Chacko
fa0d6fe60e GH-899: num.stream.threads not taking effect
The binder is overriding the num.stream.threads property specified
through the binder configuration. Fixing this issue.

Adding tests to verify.

Doc changes

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/899
2020-06-02 12:34:50 -04:00
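With the fix, the value set through the binder configuration takes effect as expected; a sketch of setting it at the binder level:

[source,properties]
----
# Passed through to the underlying Kafka Streams configuration by the binder
spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads=3
----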
Soby Chacko
6578a996cb Update versions 2020-06-02 12:33:48 -04:00
Roberto Matas
af5778d157 Property name min.fetch.bytes renamed to the correct one 2020-05-21 17:01:35 -04:00
Soby Chacko
a190289bff Kafka Streams binder changes
Update API calls in health indicator.
Update tests.
Ignore two tests temporarily
2020-04-30 21:42:49 -04:00
Gary Russell
7091a641d6 Fix test: mock Producer to return a Future 2020-04-30 17:57:38 -04:00
Soby Chacko
c408f91162 Update versions
Spring Kafka -> 2.5.0.RC1
Kafka client -> 2.5.0
Spring Integration Kafka -> 3.3.0.RC1
2020-04-30 17:46:19 -04:00
Gary Russell
1cfb63f4bf Reject improper settings for bootstrap.servers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/875

- the `bootstrap.servers` property cannot be overridden at the binding level.
2020-04-30 17:08:03 -04:00
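In other words, broker addresses belong at the binder level rather than on individual bindings, for example:

[source,properties]
----
# bootstrap.servers is configured once for the whole binder
spring.cloud.stream.kafka.binder.brokers=broker1:9092,broker2:9092
----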
Soby Chacko
1bf1393c6e Fix the failing test
The test was failing due to a wrong broker address.
2020-04-16 12:33:11 -04:00
Soby Chacko
85663ebea7 Different topic names in test for DLQ.
Unignore the test that was ignored previously.
2020-04-16 11:47:14 -04:00
Soby Chacko
c17752b02d Troubleshooting a CI test failure 2020-04-16 11:32:41 -04:00
Soby Chacko
52c0b35add GH-657: DLQ producer properties from the binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/657

Adding the ability for the binder to detect DLQ producer properties
set on the binder as common producer properties.

Adding test to verify.
2020-04-16 10:15:23 -04:00
buildmaster
36f00e5867 Going back to snapshots 2020-04-07 17:19:16 +00:00
buildmaster
e7487b9ada Update SNAPSHOT to 3.1.0.M1 2020-04-07 17:18:40 +00:00
Oleg Zhurakousky
064f5b6611 Updated spring-kafka version to 2.4.4 2020-04-07 19:05:46 +02:00
Oleg Zhurakousky
ff859c5859 update maven wrapper 2020-04-07 18:45:35 +02:00
Gary Russell
445eabc59a Transactional Producer Doc Polishing
- explain txId requirements for producer-initiated transactions
2020-04-07 11:16:14 -04:00
Soby Chacko
cc5d1b1aa6 GH:851 Kafka Streams topology visualization
Add Boot actuator endpoints for topology visualization.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/851
2020-03-25 13:02:57 +01:00
Soby Chacko
db896532e6 Offset commit when DLQ is enabled and manual ack (#871)
* Offset commit when DLQ is enabled and manual ack

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870

When an error occurs, if the application uses manual acknowledgment
(i.e. autoCommitOffset is false) and DLQ is enabled, then after
publishing to the DLQ, the offset is currently not committed.
Addressing this issue by manually committing after publishing to the DLQ.

* Address PR review comments

* Addressing PR review comments - #2
2020-03-24 16:28:39 -04:00
Gary Russell
07a740a5b5 GH-861: Add transaction manager bean override
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/861

Add docs
2020-03-19 17:03:36 -04:00
Serhii Siryi
65f8cc5660 InteractiveQueryService API changes
* Add InteractiveQueryService.getAllHostsInfo to fetch metadata
   of all instances that handle a specific store.
 * Test to verify this new API (in the existing InteractiveQueryService tests).
2020-03-05 13:24:32 -05:00
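A minimal usage sketch of the new API, assuming it takes the state store name and returns the HostInfo of every instance hosting that store (the store name below is illustrative):

[source,java]
----
import java.util.List;

import org.apache.kafka.streams.state.HostInfo;

import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

public class StoreHostsLogger {

    private final InteractiveQueryService interactiveQueryService;

    public StoreHostsLogger(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    public void logHosts() {
        // Fetch metadata for all instances that host the given state store
        List<HostInfo> hosts = this.interactiveQueryService.getAllHostsInfo("my-store");
        hosts.forEach(host -> System.out.println(host.host() + ":" + host.port()));
    }
}
----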
Soby Chacko
90a47a675f Update Kafka Streams docs
Fix missing property prefix in StreamListener based applicationId settings.
2020-03-05 11:40:10 -05:00
Soby Chacko
9d212024f8 Updates for 3.1.0
spring-cloud-build to 3.0.0 snapshot
spring kafka to 2.4.x snapshot
SIK 3.2.1

Remove a test whose behavior is inconsistent with new changes in Spring Kafka 2.4,
where all error handlers have isAckAfterHandle() true by default. The test for
auto commit offset on error without DLQ was expecting this acknowledgment not
to occur. If applications need the ack turned off on error, they should provide a
container customizer that sets the ack to false. Since this is not a binder
concern, we are removing the test testDefaultAutoCommitOnErrorWithoutDlq.

Cleaning up in Kafka Streams binder tests.
2020-03-04 18:37:08 -05:00
Gary Russell
d594bab4cf GH-853: Don't propagate out "internal" headers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/853
2020-02-27 13:37:58 -05:00
Oleg Zhurakousky
e46bd1f844 Update POMs for Ivyland 2020-02-14 11:28:28 +01:00
buildmaster
5a8aad502f Bumping versions to 3.0.3.BUILD-SNAPSHOT after release 2020-02-13 08:31:42 +00:00
buildmaster
b84a0fdfc6 Going back to snapshots 2020-02-13 08:31:41 +00:00
buildmaster
e3fcddeab6 Update SNAPSHOT to 3.0.2.RELEASE 2020-02-13 08:30:55 +00:00
Soby Chacko
1b6f9f5806 KafkaStreamsBinderMetrics throws ClassCastException
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/814

Fixing an erroneous cast.
2020-02-12 11:53:19 -05:00
Oleg Zhurakousky
f397f7c313 Merge pull request #845 from sobychacko/gh-844
Kafka streams concurrency with multiple bindings
2020-02-12 16:38:53 +01:00
Oleg Zhurakousky
7e0b923c05 Merge pull request #843 from sobychacko/gh-842
Kafka Streams binder health indicator issues
2020-02-12 16:37:11 +01:00
Soby Chacko
0dc5196cb2 Fix typo in a class name 2020-02-11 17:55:50 -05:00
Soby Chacko
1cc50c1a40 Kafka streams concurrency with multiple bindings
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844

Fixing an issue where the default concurrency settings are overridden when
there are multiple bindings present with non-default concurrency settings.
2020-02-11 13:21:58 -05:00
Adriano Scheffer
0b3d508fe2 Always call value serde configure method
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836

Remove redundant call to serde configure

Closes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836
2020-02-10 19:57:37 -05:00
Soby Chacko
2acce00c74 Kafka Streams binder health indicator issues
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/842

When a health indicator is run against a topic with multiple partitions,
the Kafka Streams binder overwrites the information. Addressing this issue.
2020-02-10 19:42:27 -05:00
Łukasz Kamiński
ce0376ad86 GH-830: Enable usage of authorizationExceptionRetryInterval
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/830

Enable binder configuration of authorizationExceptionRetryInterval through properties
2020-01-17 11:18:02 -05:00
Soby Chacko
25ac3b75e3 Remove log4j from Kafka Streams binder 2020-01-07 17:05:00 -05:00
Gary Russell
6091a1de51 Provisioner: Fix compiler warnings
- logger is static
- missing javadocs
2020-01-06 13:52:15 -05:00
Soby Chacko
5cfce42d2e Kafka Streams multi binder configuration
In the multi binder scenario, make KafkaBinderConfigurationProperties conditional
so that it only creates such a bean in the corresponding multi binder context.
For normal cases, the KafkaBinderConfigurationProperties bean is created by the main context.
2019-12-23 18:59:16 -05:00
buildmaster
349fc85b4b Bumping versions to 3.0.2.BUILD-SNAPSHOT after release 2019-12-18 18:12:34 +00:00
buildmaster
a4762ad28b Going back to snapshots 2019-12-18 18:12:34 +00:00
buildmaster
bdd4b3e28b Update SNAPSHOT to 3.0.1.RELEASE 2019-12-18 18:11:50 +00:00
Soby Chacko
3ff99da014 Update parent s-c-build to 2.2.1.RELEASE 2019-12-18 12:07:39 -05:00
Soby Chacko
c7c05fe3c2 Docs for topic patterns in Kafka Streams binder 2019-12-17 18:43:45 -05:00
문찬용
4b50f7c2f5 Fix typo 2019-12-17 18:41:13 -05:00
stoetti
88c5aa0969 Supporting topic pattern in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/820

Fix checkstyle issues
2019-12-17 18:38:36 -05:00
Oleg Zhurakousky
d5a72a29d0 GH-815 polishing
Resolves #816
2019-12-17 10:55:47 +01:00
Soby Chacko
8c3cb8230b Addressing multi binder issues with Kafka Streams
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/815

There was an issue with Kafka Streams multi binders in which the properties were not scanned
properly. The last configuration always won, wiping out any previous environment properties.
Addressing this issue by properly keeping KafkaBinderConfigurationProperties per multi binder
environment and explicitly invoking Boot properties binding on them.

Adding test to verify.
2019-12-12 18:08:53 -05:00
Soby Chacko
70c1ce8970 Update image for spring.io kafka streams blog 2019-12-02 11:22:39 -05:00
Soby Chacko
1d8a4e67d2 Update image for spring.io kafka streams blog 2019-12-02 11:16:29 -05:00
buildmaster
bbfc776395 Bumping versions to 3.0.1.BUILD-SNAPSHOT after release 2019-11-22 14:51:01 +00:00
buildmaster
4e9ed30948 Going back to snapshots 2019-11-22 14:51:01 +00:00
buildmaster
34fd7a6a7a Update SNAPSHOT to 3.0.0.RELEASE 2019-11-22 14:49:08 +00:00
Oleg Zhurakousky
b23b42d874 Ignoring intermittently failing test 2019-11-22 15:36:23 +01:00
Soby Chacko
cf59cfcf12 Kafka Streams docs cleanup 2019-11-20 18:25:12 -05:00
Soby Chacko
02a4fcb144 AdminClient caching in KafkaStreams health check 2019-11-20 17:36:14 -05:00
Oleksii Mukas
6ac9c0ed23 Closing adminClient prematurely
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/803
2019-11-20 17:34:09 -05:00
Soby Chacko
78ff4f1a70 KafkaBindingProperties has no documentation (#802)
* KafkaBindingProperties has no documentation

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/739

No popup help in IDE due to missing javadocs in KafkaBindingProperties,
KafkaConsumerProperties and KafkaProducerProperties.

Some IDEs currently only show javadocs on getter methods from POJOs
used inside a map. Therefore, some javadocs are duplicated between
fields and getter methods.

* Addressing PR review comments

* Addressing PR review comments
2019-11-14 12:50:15 -05:00
Soby Chacko
637ec2e55d Kafka Streams binder docs improvements
Adding docs for production exception handlers.
Updating configuration options section.
2019-11-13 15:19:41 -05:00
Soby Chacko
6effd58406 Adding docs for global state stores 2019-11-13 11:19:36 -05:00
Soby Chacko
7b8f0dcab7 Kafka Streams - DLQ control per consumer binding (#801)
* Kafka Streams - DLQ control per consumer binding

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/800

* Fine-grained DLQ control and deserialization exception handlers per input binding

* Deprecate KafkaStreamsBinderConfigurationProperties.SerdeError in preference to the
  new enum `KafkaStreamsBinderConfigurationProperties.DeserializationExceptionHandler`
  based properties

* Add tests, modifying docs

* Addressing PR review comments
2019-11-13 09:41:02 -05:00
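A sketch of the per-binding control described above; the binding name is illustrative and the handler value follows the new enum-based property:

[source,properties]
----
# Route deserialization failures on this input binding to the DLQ
spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.deserializationExceptionHandler=sendToDlq
----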
Gary Russell
88912b8d6b GH-715: Add retry, dlq for transactional binders
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/715

When using transactions, binder retry, dlq cannot be used because the retry
runs within the transaction, which is undesirable if there is another resource
involved; also publishing to the DLQ could be rolled back.

Use the retry properties to configure an `AfterRollbackProcessor` to perform the
retry and DLQ publishing after the transaction has rolled back.

Add docs; move the container customizer invocation to the end.
2019-11-12 14:53:12 -05:00
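A configuration sketch that combines the pieces this commit wires together (the binding name and prefix are illustrative):

[source,properties]
----
# Enable transactions in the binder
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
# Retry settings now drive an AfterRollbackProcessor rather than in-transaction retry
spring.cloud.stream.bindings.input.consumer.maxAttempts=3
# DLQ publishing happens after the transaction has rolled back
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
----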
Soby Chacko
34b0945d43 Changing the order of calling customizer
In the Kafka Streams binder, the StreamsBuilderFactoryBean customizer was being called
prematurely, before the object was created. Fixing this issue.

Add a test to verify
2019-11-11 17:21:46 -05:00
Gary Russell
278ba795d0 Configure consumer customizer
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1841

* Add test
2019-11-11 11:34:41 -05:00
buildmaster
bf30ecdea1 Going back to snapshots 2019-11-08 16:29:53 +00:00
buildmaster
464ce685bb Update SNAPSHOT to 3.0.0.RC2 2019-11-08 16:29:19 +00:00
Oleg Zhurakousky
54fa9a638d Added flatten plug-in 2019-11-08 16:50:10 +01:00
Soby Chacko
fefd9a3bd6 Cleanup in Kafka Streams docs 2019-11-07 18:07:27 -05:00
Soby Chacko
8e26d5e170 Fix typos in the previous PR commit 2019-11-06 21:23:19 -05:00
Soby Chacko
ac6bdc976e Reintroduce BinderHeaderMapper (#797)
* Reintroduce BinderHeaderMapper

Provide a custom header mapper that is identical to the DefaultKafkaHeaderMapper in Spring Kafka.
This is to address some interoperability issues between Spring Cloud Stream 3.0.x and 2.x apps,
where mime types in the header are not de-serialized properly. This custom BinderHeaderMapper
will be eventually deprecated and removed once the fixes are in Spring Kafka.

Resolves #796

* Addressing review

* polishing
2019-11-06 21:14:20 -05:00
Ramesh Sharma
65386f6967 Fixed log message to print as value vs key serde 2019-11-06 09:43:08 -05:00
Oleg Zhurakousky
81c453086a Merge pull request #794 from sobychacko/gh-752
Revise docs
2019-11-06 09:11:16 +01:00
Soby Chacko
0ddd9f8f64 Revise docs
Update kafka-clients version
Revise Kafka Streams docs

Resolves #752
2019-11-05 19:49:07 -05:00
Gary Russell
d0fe596a9e GH-790: Upgrade SIK to 3.2.1
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/790
2019-11-05 11:41:40 -05:00
Soby Chacko
062bbc1cc3 Add null check for KafkaStreamsBinderMetrics 2019-11-01 18:42:30 -04:00
Soby Chacko
bc1936eb28 KafkaStreams binder metrics - duplicate entries
When the same metric name is repeated, some registry implementations, such as the
Micrometer Prometheus registry, fail to register the duplicate entry. Fixing this issue by
preventing duplicate metric names from being registered.

Also, address an issue with multiple processors and metrics in the same application
by prepending the application ID of the Kafka Streams processor in the metric name itself.

Resolves #788
2019-11-01 15:55:30 -04:00
Soby Chacko
e2f1092173 Kafka Streams binder health indicator improvements
Caching AdminClient in the Kafka Streams binder health indicator.

Resolves #791
2019-10-31 16:15:56 -04:00
Soby Chacko
06e5739fbd Addressing bugs reported by Sonarqube
Resolves #747
2019-10-25 16:42:02 -04:00
Soby Chacko
ad8e67fdc5 Ignore a test where there is a race condition 2019-10-24 12:00:22 -04:00
Soby Chacko
a3fb4cc3b3 Fix state store registration issue
Fixing the issue where the state store is registered for each input binding
when multiple input bindings are present.

Resolves #785
2019-10-24 11:33:31 -04:00
Soby Chacko
7f09baf72d Enable customization on StreamsBuilderFactoryBean
Spring Kafka provides a StreamsBuilderFactoryBeanCustomizer. Use this in the binder so that
applications can plug in such a bean to further customize the StreamsBuilderFactoryBean and KafkaStreams.

Resolves #784
2019-10-24 09:35:37 -04:00
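A minimal sketch of plugging in such a customizer; the state listener body is illustrative:

[source,java]
----
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;

@Configuration
public class StreamsCustomizerConfig {

    @Bean
    public StreamsBuilderFactoryBeanCustomizer customizer() {
        // Invoked before the factory bean starts KafkaStreams, so listeners can be attached here
        return factoryBean -> factoryBean.setStateListener(
                (newState, oldState) -> System.out.println("State transition: " + oldState + " -> " + newState));
    }
}
----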
Soby Chacko
28a02cda4f Multiple functions and definition property
In order to make Kafka Streams binder based function apps more consistent
with the wider functional support in ScSt, the binder should require the property
spring.cloud.stream.function.definition to signal which functions to activate.

Resolves #783
2019-10-24 01:01:34 -04:00
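For example, with two function beans in the application (names illustrative), activation becomes explicit; multiple functions are separated by a semicolon:

[source,properties]
----
spring.cloud.stream.function.definition=process;enrich
----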
Soby Chacko
f96a9f884c Custom partitioner for Kafka Streams producer
* Allow the ability to plug in custom StreamPartitioner on the Kafka Streams producer.
* Fix a bug where the overriding of native encoding/decoding settings by the binder was not
  working properly. This fix is done by providing a custom ConfigurationPropertiesBindHandlerAdvisor.
* Add test to verify

Resolves #782
2019-10-23 20:12:08 -04:00
Soby Chacko
a9020368e5 Remove the usage of BindingProvider 2019-10-23 16:32:07 -04:00
Gary Russell
ca9296dbd2 GH-628: Add dlqPartitions property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/628

Allow users to specify the number of partitions in the dead letter topic.

If the property is set to 1 and the `minPartitionCount` binder property is 1,
override the default behavior and always publish to partition 0.
2019-10-23 15:33:20 -04:00
Soby Chacko
e8d202404b Custom timestamp extractor per binding
Currently there is no way to pass a custom timestamp extractor
per consumer binding in the Kafka Streams binder. Adding this ability.

Resolves #640
2019-10-23 15:31:17 -04:00
Artem Bilan
5794fb983c Add ProducerMessageHandlerCustomizer support
Related to https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/265
and https://github.com/spring-cloud/spring-cloud-stream/pull/1828
2019-10-23 14:30:20 -04:00
Oleg Zhurakousky
1ce1d7918f Merge pull request #777 from sobychacko/gh-774
Null value in outbound KStream
2019-10-23 16:54:21 +02:00
Oleg Zhurakousky
1264434871 Merge pull request #773 from sobychacko/gh-552
Kafka/Streams binder health indicator improvements
2019-10-23 16:53:37 +02:00
Massimiliano Poggi
e4fed0a52d Added qualifiers to CompositeMessageConverterBean injections
When more than one CompositeMessageConverter bean was defined in the same ApplicationContext,
not having the qualifiers on the injection points was causing the application to fail during
instantiation due to bean conflicts being raised. The injection points for CompositeMessageConverter
have been marked with the appropriate qualifier to inject the Spring Cloud CompositeMessageConverter.

Resolves #775
2019-10-22 17:17:03 -04:00
Soby Chacko
05e2918bc0 Addressing PR review comments 2019-10-22 17:07:51 -04:00
Gary Russell
6dadf0c104 GH-763: Add DlqPartitionFunction
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/763

Allow users to select the DLQ partition based on

* consumer group
* consumer record (which includes the original topic/partition)
* exception

Kafka Streams DLQ docs changes
2019-10-22 16:10:29 -04:00
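A sketch of a custom partition selector, assuming DlqPartitionFunction is a functional interface over the three inputs listed above (the import path and routing policy are assumptions, not confirmed by this commit):

[source,java]
----
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqConfig {

    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        // Illustrative policy: route one consumer group to partition 0, everything else to 1
        return (group, record, throwable) -> "special-group".equals(group) ? 0 : 1;
    }
}
----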
Soby Chacko
f4dcf5100c Null value in outbound KStream
When native encoding is disabled, the outbound conversion fails if the record
value is null. Handle this scenario more gracefully by allowing the record
to be sent downstream, skipping the conversion.

Resolves #774
2019-10-22 12:45:14 -04:00
Soby Chacko
b8eb41cb87 Kafka/Streams binder health indicator improvements
When both binders are present, there were ambiguities in the way the binders
were reporting health status. If one binder did not have any bindings, the
total health status was reported as down. Fixing these ambiguities as below.

If both binders have bindings present and the Kafka broker is reachable, report
the status as UP and the associated details. If one of the binders does not
have bindings, but the Kafka broker can be reached, then that particular binder's
status will be marked as UNKNOWN and the overall status is reported as UP.
If the Kafka broker is down, then both binders are reported as DOWN and
the overall status is marked as DOWN.

Resolves #552
2019-10-21 21:59:11 -04:00
Soby Chacko
82cfd6d176 Making KafkaStreamsMetrics object Nullable 2019-10-18 21:41:16 -04:00
Tenzin Chemi
6866eef8b0 Update overview.adoc 2019-10-18 15:25:32 -04:00
Soby Chacko
b833a9f371 Fix Kafka Streams binder health indicator issues
When there are multiple Kafka Streams processors present, the health indicator
overwrites the previous processor's health info. Addressing this issue.

Resolves #771
2019-10-17 19:26:41 -04:00
Soby Chacko
0283359d4a Add a delay before the metrics assertion 2019-10-15 15:11:05 -04:00
Soby Chacko
cc2f8f6137 Assertion was not commented out in the previous commit 2019-10-10 10:38:46 -04:00
Soby Chacko
855334aaa3 Disable kafka streams metrics test temporarily 2019-10-10 01:32:51 -04:00
Soby Chacko
9d708f836a Kafka streams default single input/output binding
Address an issue in which the binder still defaults to binding names "input" and "output"
in the case of a single function.
2019-10-09 23:58:14 -04:00
Soby Chacko
ecc8715b0c Kafka Streams binder metrics
Export Kafka Streams metrics available through KafkaStreams#metrics into a Micrometer MeterRegistry.
Add documentation for how to access metrics.
Modify test to verify metrics.

Resolves #543
2019-10-09 00:32:16 -04:00
Soby Chacko
98431ed8a0 Fix spurious warnings from InteractiveQueryService
Set applicationId properly in functions with multiple inputs
2019-10-09 00:29:03 -04:00
Oleg Zhurakousky
c7fa1ce275 Fix how stream function properties for bindings are used 2019-10-08 06:05:50 -05:00
Soby Chacko
cc8c645c5a Address checkstyle issues 2019-10-04 09:00:15 -04:00
Gary Russell
21fe9c75c5 Fix race in KafkaBinderTests
`testDlqWithNativeSerializationEnabledOnDlqProducer`
2019-10-03 10:35:05 -04:00
Soby Chacko
65dd706a6a Kafka Streams DLQ enhancements
Use DeadLetterPublishingRecoverer from Spring Kafka instead of custom DLQ components in the binder.
Remove code that is no longer needed for DLQ purposes.
In Kafka Streams, always set group to application id if the user doesn't set an explicit group.

Upgrade Spring Kafka to 2.3.0 and SIK to 3.2.0

Resolves #761
2019-10-02 22:46:51 -04:00
Soby Chacko
a02308a5a3 Allow binding names to be reused in Kafka Streams.
Allow same binding names to be reused from multiple StreamListener methods in Kafka Streams binder.

Resolves #760
2019-10-01 13:58:46 -04:00
Oleg Zhurakousky
ac75f8fecf Merge pull request #758 from sobychacko/gh-757
Function level binding properties
2019-09-30 12:42:45 -04:00
Soby Chacko
bf6a227f32 Function level binding properties
If there are multiple functions in a Kafka Streams application, and each needs
a separate set of configuration, then it should be possible to set that at the function
level, e.g. spring.cloud.stream.kafka.streams.binder.functions....

Resolves #757
2019-09-30 11:47:21 -04:00
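For instance, giving each function its own application id (the function name is illustrative):

[source,properties]
----
spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-application-id
----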
Oleg Zhurakousky
01daa4c0dd Move function constants to core
Resolves #756
2019-09-30 11:33:18 -04:00
Soby Chacko
021943ec41 Using dot(.) character in function bindings.
In Kafka Streams functions, binding names need to use the dot character instead of
underscores as the delimiter.

Resolves #755
2019-09-30 11:02:50 -04:00
Phillip Verheyden
daf4b47d1c Update the docs with the correct default channel properties locations
Spring Cloud Stream Kafka uses `spring.cloud.stream.kafka.default` for setting default properties across all channels.

Resolves #748
2019-09-30 11:01:28 -04:00
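For example, a default that applies to every channel unless a binding overrides it (the property shown is one of the documented Kafka consumer properties):

[source,properties]
----
spring.cloud.stream.kafka.default.consumer.autoCommitOffset=false
----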
Soby Chacko
2b1be3754d Remove deprecations
Remove deprecated fields, methods and classes in preparation for the 3.0 GA Release,
both in Kafka and Kafka Streams binders.

Resolves #746
2019-09-27 15:46:00 -04:00
Soby Chacko
db5a303431 Check HeaderMapper bean with a well known name (#753)
* Check HeaderMapper bean with a well known name

* If a custom bean name for header mapper is not provided through the binder property headerMapperBeanName,
  then look for a header mapper bean with the name kafkaHeaderMapper.
* Add tests to verify

Resolves #749

* Addressing PR review comments
2019-09-27 11:08:15 -04:00
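A sketch of supplying the mapper under the well-known name; the trusted package is illustrative:

[source,java]
----
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.DefaultKafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaderMapper;

@Configuration
public class HeaderMapperConfig {

    @Bean("kafkaHeaderMapper")
    public KafkaHeaderMapper kafkaHeaderMapper() {
        DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
        // Trust application types for JSON header (de)serialization; the package name is illustrative
        mapper.addTrustedPackages("com.example.types");
        return mapper;
    }
}
----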
Soby Chacko
e549090787 Restoring the avro tests for Kafka Streams 2019-09-23 17:12:31 -04:00
buildmaster
7ff64098a3 Going back to snapshots 2019-09-23 18:12:19 +00:00
buildmaster
cea40bc969 Update SNAPSHOT to 3.0.0.M4 2019-09-23 18:11:47 +00:00
buildmaster
717022e274 Going back to snapshots 2019-09-23 17:49:28 +00:00
buildmaster
d0f1c8d703 Update SNAPSHOT to 3.0.0.M4 2019-09-23 17:48:54 +00:00
Soby Chacko
7a532b2bbd Temporarily disable avro tests due to Schema Registry dependency issues.
Will address these post M4 release
2019-09-23 13:37:57 -04:00
Soby Chacko
3773fa2c05 Update SIK and SK
SIK -> 3.2.0.RC1
    SK -> 2.3.0.RC1
2019-09-23 13:03:12 -04:00
buildmaster
5734ce689b Bumping versions 2019-09-18 15:35:31 +00:00
Olga Maciaszek-Sharma
608086ff4d Remove spring.provides. 2019-09-18 13:34:34 +02:00
Soby Chacko
16713b3b4f Rename Serde for message converters
Rename CompositeNonNativeSerde to MessageConverterDelegateSerde
Deprecate CompositeNonNativeSerde
2019-09-16 21:27:04 -04:00
Soby Chacko
2432a7309b Remove an unnecessary mapValue call
In the Kafka Streams binder, with native decoding being the default in 3.0 and going forward,
the topology depth of the Kafka Streams processors is much reduced compared to the topology
generated when using non-native decoding. There was still an extra unnecessary processor in the
topology even when the deserialization is done natively. Addressing that issue.

With this change, the topology generated is equivalent to the native Kafka Streams applications.

Resolves #682
2019-09-13 15:14:59 -04:00
Soby Chacko
6fdb0001d6 Update SIK to 3.2.0 snapshot
Resolves #744

* Address deprecations in the pollable consumer
* Introduce a property for the pollable consumer timeout in KafkaConsumerProperties
* Fix tests
* Add docs
* Addressing PR review comments
2019-09-12 15:31:13 -04:00
Gary Russell
f2ab4a07c6 GH-70: Support Batch Listeners
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/70
2019-09-11 15:31:18 -04:00
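Batch mode is switched on per consumer binding with the core batch-mode property (binding name illustrative); the listener then receives a List of payloads per poll:

[source,properties]
----
spring.cloud.stream.bindings.input.consumer.batch-mode=true
----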
Soby Chacko
584115580b Adding isPresent around optionals 2019-09-11 14:54:11 -04:00
Oleg Zhurakousky
752b509374 Minor cleanup and polishing
Resolves #740
2019-09-10 13:34:30 +02:00
Soby Chacko
96062b23f2 State store beans registered with StreamListener
When StreamListener based processors are used in Kafka Streams
applications, it is not possible to register state store beans
using StateStoreBuilder. This is allowed in the functional model.
This method of providing custom state stores is desirable as it
gives the user more flexibility in configuring Serdes and other
properties on the state store. Adding this feature to StreamListener
based Kafka Streams processors.

Adding test to verify.

Adding docs.

Resolves #676
2019-09-10 13:34:30 +02:00
Oleg Zhurakousky
da6145f382 Merge pull request #738 from sobychacko/gh-681
Multi binder issues for Kafka Streams table types
2019-09-10 12:45:04 +02:00
Soby Chacko
39a5b25b9c Topic properties not applied on kstream binding
On the KStream producer binding, topic properties are not applied
by the provisioner. This is because the extended properties object
that is passed to the producer binding is erroneously overwritten by a
default instance. Addressing this issue.

Resolves #684
2019-09-06 16:28:15 -04:00
Soby Chacko
4e250b34cf Multi binder issues for Kafka Streams table types
When KTable or GlobalKTable binders are used in a multi binder
environment, the binder has difficulty finding certain beans/properties.
Addressing this issue.

Adding tests.

Resolves #681
2019-09-04 18:10:04 -04:00
Oleg Zhurakousky
10de84f4fe Simplify isSerdeFromStandardDefaults in KeyValueSerdeResolver
Resolves #737
2019-09-04 16:33:54 +02:00
Soby Chacko
309f588325 Addressing Kafka Streams multiple functions issues
Fixing an issue that causes a race condition when multiple functions
are present in a Kafka Streams application, by isolating the responsible
proxy factory per function instead of sharing it.

When multiple Kafka Streams functions are present in an application,
it should be possible to set the application id per function.

When an application provides a bean of type Serde, then the binder
should try to introspect that bean to see if it can be matched for
any inbound or outbound serialization.

Adding tests to verify the changes.

Adding docs.

Resolves #734, #735, #736
2019-09-03 19:48:21 -04:00
Oleg Zhurakousky
ca8e086d4c Merge pull request #728 from sobychacko/gh-706
Introducing retry for finding state stores.
2019-08-28 13:37:06 +02:00
Oleg Zhurakousky
cce392aa94 Merge pull request #730 from sobychacko/gh-657
Additional docs for dlq producer properties
2019-08-28 13:36:30 +02:00
Soby Chacko
e53b0f0de9 Fixing Kafka streams binder health indicator tests
Resolves #731
2019-08-28 13:34:34 +02:00
Soby Chacko
413cc4d2b5 Addressing PR review 2019-08-27 17:37:33 -04:00
Soby Chacko
94f76b903f Additional docs for dlq producer properties
Resolves #657
2019-08-26 17:08:01 -04:00
Soby Chacko
8d5f794461 Remove the sleep added in the previous commit 2019-08-26 13:55:52 -04:00
Soby Chacko
00c13c0035 Fixing checkstyle issues 2019-08-26 12:20:39 -04:00
Soby Chacko
e1e46226d5 Fixing a test race condition
The test testDlqWithNativeSerializationEnabledOnDlqProducer fails on CI occasionally.
Adding a sleep of 1 second to give the retrying mechanism enough time to complete.
2019-08-26 12:17:11 -04:00
Walliee
46cbeb1804 Add hook to specify sendTimeoutExpression
resolves #724
2019-08-26 10:47:27 -04:00
Soby Chacko
3e4af104b8 Introducing retry for finding state stores.
During application startup, there are cases when state stores
might take a longer time to initialize. The call to find the
state store in the binder (InteractiveQueryService) will fail in
those situations. Providing a basic retry mechanism that
applications can opt into.

Adding properties for retrying state store at the binder level.
Adding docs.
Polishing.

Resolves #706
2019-08-23 18:57:36 -04:00
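A sketch of opting in; the exact property names under the stateStoreRetry prefix are an assumption based on this commit's description:

[source,properties]
----
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts=3
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backOffPeriod=2000
----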
Soby Chacko
16bb3e2f62 Testing topic provisioning properties for KStream
Resolves #684
2019-08-23 18:53:35 -04:00
Soby Chacko
8145ab19fb Fix checkstyle issues 2019-08-21 17:26:31 -04:00
Gary Russell
930d33aeba Fix ConsumerProducerTransactionTests with SK snap 2019-08-21 09:46:17 -04:00
Gary Russell
fe2a398b8b Fix mock producer.close() for latest SK snapshot 2019-08-21 09:38:25 -04:00
Soby Chacko
24e1fc9722 Kafka Streams functional support and autowiring
There are issues when a bean declared as a function in the Kafka Streams
application tries to autowire a bean through method parameter injection.
Addressing these concerns.

Resolves #726
2019-08-20 18:40:04 -04:00
Soby Chacko
245a43c1d8 Update Spring Kafka to 2.3.0 snapshot
Ignore two tests temporarily
Fixing Kafka Streams tests with co-partitioning issues
2019-08-20 17:40:07 -04:00
Soby Chacko
4d1fed63ee Fix checkstyle issues 2019-08-20 15:34:05 -04:00
Oleg Zhurakousky
786bd3ec62 Merge pull request #723 from sobychacko/kstream-fn-detector-changes
Function detector condition Kafka Streams
2019-08-16 18:44:39 +02:00
Soby Chacko
183f21c880 Function detector condition Kafka Streams
Use BeanFactoryUtils.beanNamesForTypeIncludingAncestors instead of
getBean from BeanFactory which forces the bean creation inside the
function detector condition. There was a race condition in which
applications were unable to autowire beans and use them in functions
while the detector condition was creating the beans. This change will
delay the creation of the function bean until it is needed.
2019-08-16 11:54:42 -04:00
Soby Chacko
18737b8fea Unignoring tests in Kafka Streams binder
Polishing tests
2019-08-14 12:57:18 -04:00
buildmaster
840745e593 Going back to snapshots 2019-08-13 07:58:51 +00:00
buildmaster
2df0377acb Update SNAPSHOT to 3.0.0.M3 2019-08-13 07:57:57 +00:00
Oleg Zhurakousky
164948ad33 Bumped spring-kafka and si-kafka snapshots to milestones 2019-08-13 09:43:09 +02:00
Oleg Zhurakousky
a472845cb4 Adjusted docs with recent s-c-build updates 2019-08-13 09:29:03 +02:00
Oleg Zhurakousky
12db5fc20e Ignored failing tests 2019-08-13 09:24:30 +02:00
Soby Chacko
46f1b41832 Interop between bootstrap server configuration
When using the Kafka Streams binder, if the application chooses to
provide bootstrap server configuration through Kafka binder broker
property, then allow that. This way either type of broker config
works in Kafka Streams binder.

Resolves #401
Resolves #720
2019-08-13 08:45:05 +02:00
Soby Chacko
64ca773a4f Kafka Streams application id changes
Generate a random application id for Kafka Streams binder if the user
doesn't set one for the application. This is useful for development
purposes, as it avoids the creation of an explicit application id.
For production workloads, it is highly recommended to explicitly provide
an application id.

The generated application id follows a pattern: the function bean
name, followed by a random UUID string, followed by the literal applicationId.
In the case of StreamListener, instead of function bean name, it uses the containing class +
StreamListener method name.

If the binder generates the application id, that information will be logged on the console
at startup.

Resolves #718
Resolves #719
2019-08-13 08:43:07 +02:00
Soby Chacko
a92149f121 Adding BiConsumer support for Kafka Streams binder
Resolves #716
Resolves #717
2019-08-13 08:31:50 +02:00
Gary Russell
2721fe4d05 GH-713: Add test for e2e transactional binder
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/713

The problem was already fixed in SIK.

Resolves #714
2019-08-13 08:29:09 +02:00
Soby Chacko
8907693ffb Revamping Kafka Streams docs for functional style
Resolves #703
Resolves #721
Addressing PR review comments

Addressing PR review comments
2019-08-13 08:17:48 +02:00
Oleg Zhurakousky
ca0bfc028f minor cleanup 2019-08-08 20:59:05 +02:00
Soby Chacko
688f05cbc9 Joining two input KStreams
When joining two input KStreams, the binder throws an exception.
Fixing this issue by passing the wrapped target KStream from the
proxy object to the adapted StreamListener method.

Resolves #701
2019-08-08 14:19:21 -04:00
Oleg Zhurakousky
acb4180ea3 GH-1767-core changes related to the disconnection of @StreamMessageConverter 2019-08-06 19:14:49 +02:00
Soby Chacko
77e4087871 Handle Deserialization errors when there is a key (#711)
* Handle Deserialization errors when there is a key

Handle DLQ sending on deserialization errors when there is a key in the record.

Resolves #635

* Adding keySerde information in the functional bindings
2019-08-05 13:58:48 -04:00
Gary Russell
0d339e77b8 GH-683: Update docs 2019-08-02 09:30:36 -04:00
Gary Russell
799eb5be28 GH-683: MessageKey header from payload property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/683

If the `messageKeyExpression` references the payload or entire message, add an
interceptor to evaluate the expression before the payload is converted.

The interceptor is not needed when native encoding is in use because the payload
will be unchanged when it reaches the adapter.
2019-08-01 17:00:05 -04:00
Soby Chacko
2c7615db1f Temporarily ignore a test 2019-08-01 14:32:39 -04:00
Gary Russell
b2b1438ad7 Fix resetOffsets for manual partition assignment 2019-07-19 18:47:41 -04:00
Gary Russell
87399fe904 Updates for latest S-K snapshot
- change `TopicPartitionInitialOffset` to `TopicPartitionOffset`
- when resetting with manual assignment, set the SeekPosition earlier rather than
  relying on a direct update of the `ContainerProperties` field
- set `missingTopicsFatal` to `false` in mock test to avoid log messages about missing bootstrap servers
2019-07-17 18:39:53 -04:00
Soby Chacko
f03e3e17c6 Provide a CollectionSerde implementation
Resolves #702
2019-07-11 17:08:25 -04:00
Gary Russell
48aae76ec9 GH-694: Add recordMetadataChannel (send confirm.)
Resolves #694

Also fix deprecation of `ChannelInterceptorAware`.
2019-07-10 10:48:08 -04:00
Soby Chacko
6976bcd3a8 Docs polishing 2019-07-09 15:59:23 -04:00
Gary Russell
912ab14309 GH-689: Support producer-only transactions
Resolves #689
2019-07-09 15:59:23 -04:00
Oleg Zhurakousky
b4fcb791c3 General polishing and clean up after review
Resolves #692
2019-07-09 20:46:01 +02:00
Soby Chacko
ca3f20ff26 Default multiple input/output binding names in the functional support will be separated using underscores (_),
e.g. process_in, process_in_0, process_out_0, etc.

Adding cleanup config to ktable integration tests.
2019-07-08 19:15:02 -04:00
Soby Chacko
97ecf48867 BiFunction support in Kafka Streams binder
* Introduce the ability to provide BiFunction beans as Kafka Streams processors.
* Bypass Spring Cloud Function catalogue for bean lookup by directly querying the bean factory.
* Add test to verify BiFunction behavior.

Resolves #690
2019-07-08 15:01:07 -04:00
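A compact example of a two-input processor in this style; the types and join logic are illustrative:

[source,java]
----
import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JoinProcessorConfig {

    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> process() {
        // Both inputs are bound by the binder; the returned KStream becomes the output binding
        return (stream, table) -> stream.join(table, (count, label) -> count);
    }
}
----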
Soby Chacko
4d0b62f839 Functional enhancements in Kafka Streams binder
This PR introduces some fundamental changes in the way functional model of Kafka Streams applications
are supported in the binder. For the most part, this PR is comprised of the following changes.

* Remove the usage of EnableBinding in Kafka Streams based Spring Cloud Stream applications where
  a bean of type java.util.function.Function or java.util.function.Consumer is provided (instead of
  a StreamListener). For StreamListener based Kafka Streams applications, EnableBinding is still
  necessary and required.

* Target types (KStream, KTable, GlobalKTable) will be inferred and bound by the binder through
  a new binder specific bindable proxy factory.

* By default, input bindings are named <functionName>-input for functions with a single input
  and <functionName>-input-0...<functionName>-input-n for functions with multiple inputs.

* By default, output bindings are named <functionName>-output for functions with a single output
  and <functionName>-output-0...<functionName>-output-n for functions with multiple outputs.

* If applications prefer custom input binding names, the defaults can be overridden through
  spring.cloud.stream.function.inputBindings.<functionName>. Similarly, if custom output binding names
  are needed, that can be done through spring.cloud.stream.function.outputBindings.<functionName>.

* Test changes

* Refactoring and polishing

Resolves #688
2019-07-08 15:01:07 -04:00
Oleg Zhurakousky
592b9a02d8 Binder changes related to GH-1729
addressed PR comment

Resolves #670
2019-07-08 17:32:52 +02:00
Soby Chacko
4ca58bea06 Polishing the previous upgrade commit
* Upgrade Kafka versions used in tests.
 * EmbeddedKafkaRule changes in tests.
 * KafkaTransactionTests changes (createProducer expectations in Mockito).
 * Kafka Streams bootstrap servers now return an ArrayList instead of a String.
   Making the necessary changes in the binder to accommodate this.
 * Kafka Streams state store tests - retention window changes.
2019-07-06 14:24:09 -04:00
Jeff Maxwell
108ccbc572 Update kafka.version, spring-kafka.version
Update kafka.version to 2.3.0 and spring-kafka.version to 2.3.0.BUILD-SNAPSHOT

Resolves #696
2019-07-06 14:24:09 -04:00
buildmaster
303679b9f9 Going back to snapshots 2019-07-03 14:56:50 +00:00
buildmaster
35434c680b Update SNAPSHOT to 3.0.0.M2 2019-07-03 14:56:00 +00:00
Oleg Zhurakousky
e382604b04 Change s-i-kafka to 3.2.0.M3 2019-07-03 16:38:05 +02:00
Soby Chacko
bfe9529a51 Topic provisioning - Kafka streams table types
* Topic provisioning for consumers ignores topic.properties settings if binding type is KTable or GlobalKTable
* Modify tests to verify the behavior during KTable/GlobalKTable bindings
* Polishing

Resolves #687
2019-07-02 13:17:25 -04:00
Gary Russell
d8df388d9f GH-423 Add option to use KafkaHeaders.TOPIC Header
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/423

Add an option to override the default binding destination with the value
of the header, if present.
2019-06-18 16:28:38 -04:00
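A sketch of the outbound side; the assumption here is that the option is enabled through a producer binding property and the KafkaHeaders.TOPIC header then overrides the binding destination:

[source,java]
----
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class TopicHeaderExample {

    // With the header override option enabled on the producer binding (an assumption
    // based on this commit), the record is sent to "other-topic" instead of the
    // binding's configured destination
    public Message<String> buildMessage(String payload) {
        return MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.TOPIC, "other-topic")
                .build();
    }
}
----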
Gary Russell
569344afa6 GH-677: Fix resetOffsets with concurrency
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/677

The logic for resetting offsets only on the initial assignment used a simple
boolean; this is insufficient when concurrency is > 1.

Use a concurrent set instead to determine whether or not a particular topic/partition
has been sought.

Also, change the `initial` argument on `KafkaBindingRebalanceListener.onPartitionsAssigned()`
to be derived from a `ThreadLocal` and add javadocs about retaining the state by partition.

**backport to all supported versions** (Except `KafkaBindingRebalanceListener` which did
not exist before 2.1.x)

polishing
2019-06-18 16:13:56 -04:00
Oleg Zhurakousky
c1e58d187a Polishing KafkaStreamsBinderUtils
Added registerBean(..) instead of registerSingleton

Resolves #675
2019-06-18 16:21:49 +02:00
Soby Chacko
12afaf1144 Update spring-cloud-build to 2.2.0 snapshot
Update Spring Integration Kafka to 3.2.0 snapshot
2019-06-17 18:43:49 -04:00
Soby Chacko
bbb455946e Refactor common code in Kafka Streams binder.
Both StreamListener and the functional model share many common code paths.
Refactoring them for better reuse and readability.

Resolves #674
2019-06-17 18:11:11 -04:00
Soby Chacko
5dac51df62 Infer SerDes used on input/output when possible
When using native de/serialization (which is now the default), the binder should infer the common Serdes
used on input and output. The Serdes inferred are Integer, Long, Short, Double, Float, String, byte[] and
Spring Kafka provided JsonSerde.

Resolves #368

Address PR review comments

Addressing PR review comments

Resolves #672
2019-06-17 14:59:13 +02:00
Oleg Zhurakousky
c1db3d950c Added author tag 2019-06-13 20:19:37 +02:00
Vladislav Fefelov
4d59670096 Use single Executor Service in Kafka Binder health indicator
Resolves #665
2019-06-13 20:19:37 +02:00
Oleg Zhurakousky
d213b4ff2c Merge pull request #669 from sobychacko/gh-651
Use native Serde for Kafka Streams binder as the default
2019-06-12 18:35:03 +02:00
buildmaster
66198e345a Going back to snapshots 2019-06-11 12:07:50 +00:00
Soby Chacko
9c88b4d808 Use native Serde for Kafka Streams binder
In the Kafka Streams binder, use the native Serde mechanism as the default instead of the framework
provided message conversion on the input and output bindings. This aligns applications
written using the Kafka Streams binder more closely with native concepts and mechanisms.
Users can still disable native encoding and decoding through configuration.

Amend tests to accommodate the flipped configuration.

Resolves #651
2019-06-10 21:17:38 -04:00
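To fall back to framework message conversion after this change, the core native encoding/decoding switches can be flipped per binding (binding names illustrative):

[source,properties]
----
spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false
spring.cloud.stream.bindings.output.producer.useNativeEncoding=false
----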
166 changed files with 16860 additions and 8589 deletions

1
.gitignore vendored

@@ -23,3 +23,4 @@ _site/
dump.rdb
.apt_generated
artifacts
.sts4-cache

51
.mvn/wrapper/MavenWrapperDownloader.java vendored Executable file → Normal file

@@ -1,22 +1,18 @@
/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/
* Copyright 2007-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.net.*;
import java.io.*;
import java.nio.channels.*;
@@ -24,11 +20,12 @@ import java.util.Properties;
public class MavenWrapperDownloader {
private static final String WRAPPER_VERSION = "0.5.6";
/**
* Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided.
*/
private static final String DEFAULT_DOWNLOAD_URL =
"https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar";
private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/"
+ WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar";
/**
* Path to the maven-wrapper.properties file, which might contain a downloadUrl property to
@@ -76,13 +73,13 @@ public class MavenWrapperDownloader {
}
}
}
System.out.println("- Downloading from: : " + url);
System.out.println("- Downloading from: " + url);
File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH);
if(!outputFile.getParentFile().exists()) {
if(!outputFile.getParentFile().mkdirs()) {
System.out.println(
"- ERROR creating output direcrory '" + outputFile.getParentFile().getAbsolutePath() + "'");
"- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'");
}
}
System.out.println("- Downloading to: " + outputFile.getAbsolutePath());
@@ -98,6 +95,16 @@ public class MavenWrapperDownloader {
}
private static void downloadFileFromURL(String urlString, File destination) throws Exception {
if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) {
String username = System.getenv("MVNW_USERNAME");
char[] password = System.getenv("MVNW_PASSWORD").toCharArray();
Authenticator.setDefault(new Authenticator() {
@Override
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication(username, password);
}
});
}
URL website = new URL(urlString);
ReadableByteChannel rbc;
rbc = Channels.newChannel(website.openStream());

BIN
.mvn/wrapper/maven-wrapper.jar vendored Executable file → Normal file

Binary file not shown.

3
.mvn/wrapper/maven-wrapper.properties vendored Executable file → Normal file

@@ -1 +1,2 @@
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.4/apache-maven-3.5.4-bin.zip
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip
wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar


@@ -4,6 +4,7 @@ Manual changes to this file will be lost when it is generated again.
Edit the files in the src/main/asciidoc/ directory instead.
////
:jdkversion: 1.8
:github-tag: master
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
@@ -17,14 +18,6 @@ image::https://badges.gitter.im/spring-cloud/spring-cloud-stream-binder-kafka.sv
// ======================================================================================
//= Overview
[partintro]
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Stream Cloud Stream concepts map onto Apache Kafka specific constructs.
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Apache Kafka Binder
=== Usage
@@ -39,7 +32,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
@@ -49,571 +42,20 @@ Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown
</dependency>
----
=== Overview
== Apache Kafka Streams Binder
The following image shows a simplified diagram of how the Apache Kafka binder operates:
=== Usage
.Kafka Binder
image::{github-raw}/docs/src/main/asciidoc/images/kafka-binder.png[width=300,scaledwidth="50%"]
To use Apache Kafka Streams binder, you need to add `spring-cloud-stream-binder-kafka-streams` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
=== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
==== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (for example, `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.
spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (both producers and consumer) passed to all clients created by the binder.
Due to the fact that these properties are used by both producers and consumers, usage should be restricted to common properties -- for example, security settings.
Unknown Kafka producer or consumer properties provided through this configuration are filtered out and not allowed to propagate.
Properties here supersede any properties set in boot.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.consumerProperties::
Key/Value map of arbitrary Kafka client consumer properties.
In addition to support known Kafka consumer properties, unknown consumer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.headers::
The list of custom headers that are transported by the binder.
Only required when communicating with older applications (<= 1.3.x) with a `kafka-clients` version < 0.11.0.0. Newer versions support headers natively.
+
Default: empty.
spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.
spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
See the Kafka documentation for the producer `acks` property.
+
Default: `1`.
spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder configures on topics on which it produces or consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount * concurrency` settings of the producer (if either is larger).
+
Default: `1`.
spring.cloud.stream.kafka.binder.producerProperties::
Key/Value map of arbitrary Kafka client producer properties.
In addition to support known Kafka producer properties, unknown producer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
Default: `1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder creates new partitions if required.
If set to `false`, the binder relies on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder fails to start.
+
Default: `false`.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix::
Enables transactions in the binder. See `transaction.id` in the Kafka documentation and https://docs.spring.io/spring-kafka/reference/html/_reference.html#transactions[Transactions] in the `spring-kafka` documentation.
When transactions are enabled, individual `producer` properties are ignored and all producers use the `spring.cloud.stream.kafka.binder.transaction.producer.*` properties.
+
Default `null` (no transactions)
spring.cloud.stream.kafka.binder.transaction.producer.*::
Global producer properties for producers in a transactional binder.
See `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` and <<kafka-producer-properties>> and the general producer properties supported by all binders.
+
Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
+
Default: none.
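+
For example, the following sketch registers a custom mapper under a hypothetical bean name and trusts an illustrative package for JSON-deserialized headers:
+
[source, java]
----
@Bean("customHeaderMapper")
public KafkaHeaderMapper customHeaderMapper() {
    DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
    // "com.example.events" is a placeholder package to trust
    mapper.addTrustedPackages("com.example.events");
    return mapper;
}
----
+
Then set `spring.cloud.stream.kafka.binder.headerMapperBeanName=customHeaderMapper`.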
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
When `false`, each consumer is assigned a fixed set of partitions based on `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex`.
This requires both the `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex` properties to be set appropriately on each launched instance.
The value of the `spring.cloud.stream.instanceCount` property must typically be greater than 1 in this case.
+
Default: `true`.
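+
For example, the following sketch statically assigns partitions to the first of two instances (the `input` binding name is illustrative):
+
[source]
----
spring.cloud.stream.kafka.bindings.input.consumer.autoRebalanceEnabled=false
spring.cloud.stream.instanceCount=2
spring.cloud.stream.instanceIndex=0
----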
ackEachRecord::
When `autoCommitOffset` is `true`, this setting dictates whether to commit the offset after each record is processed.
By default, offsets are committed after all records in the batch of records returned by `consumer.poll()` have been processed.
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
+
Default: `false`.
autoCommitOffset::
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` and of type `org.springframework.kafka.support.Acknowledgment` is present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
+
Default: `true`.
autoCommitOnError::
Effective only if `autoCommitOffset` is set to `true`.
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
If set to `true`, it always auto-commits (if auto-commit is enabled).
If not set (the default), it effectively has the same value as `enableDlq`, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
+
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaBindingRebalanceListener` is provided; see <<rebalance-listener>>.
+
Default: `false`.
startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer binding (through `spring.cloud.stream.bindings.<channelName>.group`), `startOffset` is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
Also see `resetOffsets` (earlier in this list).
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name is configurable by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
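+
For example, the following sketch enables the DLQ with a custom topic name (the binding and topic names are illustrative):
+
[source]
----
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=input-dlq
----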
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to Kafka consumer properties, other configuration properties can be passed here, such as properties needed by the application itself (for example, `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`).
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through Kafka producer properties can be set through this property.
+
Default: Default Kafka producer properties.
standardHeaders::
Indicates which standard headers are populated by the inbound channel adapter.
Allowed values: `none`, `id`, `timestamp`, or `both`.
Useful if using native deserialization and the first component to receive a message needs an `id` (such as an aggregator that is configured to use a JDBC message store).
+
Default: `none`
converterBeanName::
The name of a bean that implements `RecordMessageConverter`. Used in the inbound channel adapter to replace the default `MessagingMessageConverter`.
+
Default: `null`
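+
For example, the following sketch registers a JSON-aware converter that ships with `spring-kafka` under a hypothetical bean name:
+
[source, java]
----
@Bean("myConverter")
public RecordMessageConverter myConverter() {
    return new StringJsonMessageConverter();
}
----
+
Then set `spring.cloud.stream.kafka.bindings.<channelName>.consumer.converterBeanName=myConverter`.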
idleEventInterval::
The interval, in milliseconds, between events indicating that no messages have recently been received.
Use an `ApplicationListener<ListenerContainerIdleEvent>` to receive these events.
See <<pause-resume>> for a usage example.
+
Default: `30000`
destinationIsPattern::
When true, the destination is treated as a regular expression `Pattern` used to match topic names by the broker.
When true, topics are not provisioned, and `enableDlq` is not allowed, because the binder does not know the topic names during the provisioning phase.
Note, the time taken to detect new topics that match the pattern is controlled by the consumer property `metadata.max.age.ms`, which (at the time of writing) defaults to 300,000ms (5 minutes).
This can be configured using the `configuration` property above.
+
Default: `false`
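+
For example, the following sketch consumes from all topics matching a hypothetical pattern and lowers the metadata refresh interval to one minute:
+
[source]
----
spring.cloud.stream.bindings.input.destination=orders.*
spring.cloud.stream.kafka.bindings.input.consumer.destinationIsPattern=true
spring.cloud.stream.kafka.bindings.input.consumer.configuration.metadata.max.age.ms=60000
----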
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `topic.replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
+
Default: `16384`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
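+
For example, the following sketch tunes producer batching on a hypothetical `output` binding; the values are illustrative:
+
[source]
----
spring.cloud.stream.kafka.bindings.output.producer.bufferSize=32768
spring.cloud.stream.kafka.bindings.output.producer.batchTimeout=50
----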
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
+
Default: `none`.
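+
For example, the following sketch keys outgoing records by a hypothetical `myKey` header on an `output` binding:
+
[source]
----
spring.cloud.stream.kafka.bindings.output.producer.messageKeyExpression=headers['myKey']
----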
headerPatterns::
A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka `Headers` in the `ProducerRecord`.
Patterns can begin or end with the wildcard character (asterisk).
Patterns can be negated by prefixing with `!`.
Matching stops after the first match (positive or negative).
For example, `!ask,as*` passes `ash` but not `ask`.
`id` and `timestamp` are never mapped.
+
Default: `*` (all headers except the `id` and `timestamp`).
configuration::
Map with a key/value pair containing generic Kafka producer properties.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `topic.replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with `minPartitionCount`; the larger of the two is used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), the binder fails to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with more partitions than the maximum of `minPartitionCount` and `partitionCount`, the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy`, and `lz4`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
Use the corresponding input channel name for your example.
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
----
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between clients and brokers.
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file or by using Spring Boot properties.
====== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
[source,bash]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
--spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
====== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
The following properties can be used to configure the login context of the Kafka client:
spring.cloud.stream.kafka.binder.jaas.loginModule::
The login module name. Not necessary to be set in normal cases.
+
Default: `com.sun.security.auth.module.Krb5LoginModule`.
spring.cloud.stream.kafka.binder.jaas.controlFlag::
The control flag of the login module.
+
Default: `required`.
spring.cloud.stream.kafka.binder.jaas.options::
Map with a key/value pair containing the login module options.
+
Default: Empty map.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:
[source,bash]
----
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.autoCreateTopics=false \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
--spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
--spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
--spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
--spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----
The preceding example represents the equivalent of the following JAAS file:
[source]
----
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="kafka-client-1@EXAMPLE.COM";
};
----
If the topics required already exist on the broker or will be created by an administrator, autocreation can be turned off and only the client JAAS properties need to be set.
NOTE: Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the `-Djava.security.auth.login.config` system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
NOTE: Be careful when using the `autoCreateTopics` and `autoAddPartitions` options with Kerberos.
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.
The following simple application shows how to pause and resume:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }

}
----
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
* `failedMessage`: The Spring Messaging `Message<?>` that failed to be sent.
* `record`: The raw `ProducerRecord` that was created from the `failedMessage`.
There is no automatic handling of producer exceptions (such as sending to a <<kafka-dlq-processing, Dead-Letter queue>>).
You can consume these exceptions with your own Spring Integration flow.
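For example, the following is a minimal sketch of such a flow; the destination name `uppercase-out` is hypothetical, and it assumes the producer's error channel has been enabled and the `<destination>.errors` channel-naming convention:

[source, java]
----
@ServiceActivator(inputChannel = "uppercase-out.errors")
public void handleSendFailure(ErrorMessage message) {
    KafkaSendFailureException failure = (KafkaSendFailureException) message.getPayload();
    // Inspect the raw ProducerRecord that could not be sent
    System.out.println("Send failed for record: " + failure.getRecord());
}
----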
[[kafka-metrics]]
=== Kafka Metrics
The Kafka binder module exposes the following metric:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer metrics library. The metric contains the consumer group information, the topic, and the actual lag of the committed offset behind the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
[[kafka-tombstones]]
=== Tombstone Records (null record values)
When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
====
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
        @Payload(required = false) Customer customer) {
    // customer is null if a tombstone record
    ...
}
----
====
[[rebalance-listener]]
=== Using a KafkaRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
----
public interface KafkaBindingRebalanceListener {

    /**
     * Invoked by the container before any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked by the container after any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked when partitions are initially assigned or after a rebalance.
     * Applications might only want to perform seek operations on an initial assignment.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     * @param initial true if this is the initial assignment.
     */
    default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions, boolean initial) {
    }

}
----
====
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
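For example, the following sketch replays each assigned partition from the earliest offset, but only on the initial assignment:
====
[source, java]
----
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            // Seek only on the initial assignment, not after subsequent rebalances
            if (initial) {
                consumer.seekToBeginning(partitions);
            }
        }

    };
}
----
====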
= Appendices
[appendix]
[[building]]
@@ -737,4 +179,4 @@ added after the original pull request but before a merge.
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).
// ======================================================================================
// ======================================================================================

@@ -7,319 +7,56 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M1</version>
<version>4.0.0-M1</version>
</parent>
<packaging>pom</packaging>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>
<description>Spring Cloud Stream Kafka Binder Docs</description>
<properties>
<docs.main>spring-cloud-stream-binder-kafka</docs.main>
<main.basedir>${basedir}/..</main.basedir>
<spring-doc-resources.version>0.1.1.RELEASE</spring-doc-resources.version>
<spring-asciidoctor-extensions.version>0.1.0.RELEASE
</spring-asciidoctor-extensions.version>
<asciidoctorj-pdf.version>1.5.0-alpha.16</asciidoctorj-pdf.version>
<maven.plugin.plugin.version>3.4</maven.plugin.plugin.version>
<configprops.inclusionPattern>.*stream.*</configprops.inclusionPattern>
<upload-docs-zip.phase>deploy</upload-docs-zip.phase>
</properties>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/asciidoc</sourceDirectory>
</build>
<profiles>
<profile>
<id>docs</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>${maven-dependency-plugin.version}</version>
<inherited>false</inherited>
<executions>
<execution>
<id>unpack-docs</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>org.springframework.cloud
</groupId>
<artifactId>spring-cloud-build-docs
</artifactId>
<version>${spring-cloud-build.version}
</version>
<classifier>sources</classifier>
<type>jar</type>
<overWrite>false</overWrite>
<outputDirectory>${docs.resources.dir}
</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
<execution>
<id>unpack-docs-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>io.spring.docresources</groupId>
<artifactId>spring-doc-resources</artifactId>
<version>${spring-doc-resources.version}</version>
<type>zip</type>
<overWrite>true</overWrite>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
<groupId>pl.project13.maven</groupId>
<artifactId>git-commit-id-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<executions>
<execution>
<id>copy-asciidoc-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
<resources>
<resource>
<directory>src/main/asciidoc</directory>
<filtering>false</filtering>
<excludes>
<exclude>ghpages.sh</exclude>
</excludes>
</resource>
</resources>
</configuration>
</execution>
</executions>
<artifactId>maven-dependency-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-resources-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>${asciidoctor-maven-plugin.version}</version>
<inherited>false</inherited>
<dependencies>
<dependency>
<groupId>io.spring.asciidoctor</groupId>
<artifactId>spring-asciidoctor-extensions</artifactId>
<version>${spring-asciidoctor-extensions.version}</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj-pdf</artifactId>
<version>${asciidoctorj-pdf.version}</version>
</dependency>
</dependencies>
<configuration>
<sourceDirectory>${project.build.directory}/refdocs/</sourceDirectory>
<sourceDocumentName>${docs.main}.adoc</sourceDocumentName>
<attributes>
<spring-cloud-stream-version>${project.version}</spring-cloud-stream-version>
<!-- <docs-url>https://cloud.spring.io/</docs-url> -->
<!-- <docs-version></docs-version> -->
<docs-version>${project.version}/</docs-version>
<docs-url>https://cloud.spring.io/spring-cloud-static/</docs-url>
</attributes>
</configuration>
<executions>
<execution>
<id>generate-html-documentation</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<backend>html5</backend>
<sourceHighlighter>highlight.js</sourceHighlighter>
<doctype>book</doctype>
<attributes>
// these attributes are required to use the doc resources
<docinfo>shared</docinfo>
<stylesdir>css/</stylesdir>
<stylesheet>spring.css</stylesheet>
<linkcss>true</linkcss>
<icons>font</icons>
<highlightjsdir>js/highlight</highlightjsdir>
<highlightjs-theme>atom-one-dark-reasonable</highlightjs-theme>
<allow-uri-read>true</allow-uri-read>
<nofooter />
<toc>left</toc>
<toc-levels>4</toc-levels>
<spring-cloud-version>${project.version}</spring-cloud-version>
<sectlinks>true</sectlinks>
</attributes>
<outputFile>${docs.main}.html</outputFile>
</configuration>
</execution>
<execution>
<id>generate-docbook</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
<execution>
<id>generate-index</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>${maven-antrun-plugin.version}</version>
<dependencies>
<dependency>
<groupId>ant-contrib</groupId>
<artifactId>ant-contrib</artifactId>
<version>1.0b3</version>
<exclusions>
<exclusion>
<groupId>ant</groupId>
<artifactId>ant</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.ant</groupId>
<artifactId>ant-nodeps</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>org.tigris.antelope</groupId>
<artifactId>antelopetasks</artifactId>
<version>3.2.10</version>
</dependency>
<dependency>
<groupId>org.jruby</groupId>
<artifactId>jruby-complete</artifactId>
<version>1.7.17</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj</artifactId>
<version>1.5.8</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>readme</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<java classname="org.jruby.Main" failonerror="yes">
<arg
value="${docs.resources.dir}/ruby/generate_readme.sh" />
<arg value="-o" />
<arg value="${main.basedir}/README.adoc" />
</java>
</target>
</configuration>
</execution>
<execution>
<id>assert-no-unresolved-links</id>
<phase>prepare-package</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<fileset id="unresolved.file"
dir="${basedir}/target/generated-docs/" includes="**/*.html">
<contains text="Unresolved" />
</fileset>
<fail message="[Unresolved] Found...failing">
<condition>
<resourcecount when="greater" count="0"
refid="unresolved.file" />
</condition>
</fail>
</target>
</configuration>
</execution>
<execution>
<id>setup-maven-properties</id>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<exportAntProperties>true</exportAntProperties>
<target>
<taskdef
resource="net/sf/antcontrib/antcontrib.properties" />
<taskdef name="stringutil"
classname="ise.antelope.tasks.StringUtilTask" />
<var name="version-type" value="${project.version}" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp=".*\.(.*)"
replace="\1" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(M)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(RC)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="BUILD-(.*)"
replace="SNAPSHOT" />
<stringutil string="${version-type}"
property="spring-cloud-repo">
<lowercase />
</stringutil>
<var name="github-tag" value="v${project.version}" />
<propertyregex property="github-tag"
override="true" input="${github-tag}" regexp=".*SNAPSHOT"
replace="master" />
</target>
</configuration>
</execution>
<execution>
<id>copy-css</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>generate-documentation-index</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>copy-generated-html</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<inherited>false</inherited>
<artifactId>maven-deploy-plugin</artifactId>
</plugin>
</plugins>
</build>

@@ -11,8 +11,43 @@ image::https://badges.gitter.im/spring-cloud/spring-cloud-stream-binder-kafka.sv
// ======================================================================================
//= Overview
include::overview.adoc[]
== Apache Kafka Binder
=== Usage
To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----
== Apache Kafka Streams Binder
=== Usage
To use Apache Kafka Streams binder, you need to add `spring-cloud-stream-binder-kafka-streams` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
----
= Appendices
[appendix]

@@ -0,0 +1,64 @@
|===
|Name | Default | Description
|spring.cloud.stream.binders | | Additional per-binder properties (see {@link BinderProperties}) if more than one binder of the same type is used (that is, connecting to multiple instances of RabbitMQ). Here you can specify multiple binder configurations, each with different environment settings. For example: spring.cloud.stream.binders.rabbit1.environment. . . , spring.cloud.stream.binders.rabbit2.environment. . .
|spring.cloud.stream.binding-retry-interval | `30` | Retry interval (in seconds) used to schedule binding attempts. Default: 30 sec.
|spring.cloud.stream.bindings | | Additional binding properties (see {@link BinderProperties}) per binding name (e.g., 'input'). For example, this sets the content-type for the 'input' binding of a Sink application: 'spring.cloud.stream.bindings.input.contentType=text/plain'
|spring.cloud.stream.default-binder | | The name of the binder used by all bindings in the event multiple binders are available (e.g., 'rabbit').
|spring.cloud.stream.dynamic-destination-cache-size | `10` | The maximum size of Least Recently Used (LRU) cache of dynamic destinations. Once this size is reached, new destinations will trigger the removal of old destinations. Default: 10
|spring.cloud.stream.dynamic-destinations | `[]` | A list of destinations that can be bound dynamically. If set, only listed destinations can be bound.
|spring.cloud.stream.function.batch-mode | `false` |
|spring.cloud.stream.function.bindings | |
|spring.cloud.stream.input-bindings | | A semi-colon delimited string to explicitly define input bindings (specifically for cases when there is no implicit trigger to create such bindings such as Function, Supplier or Consumer).
|spring.cloud.stream.instance-count | `1` | The number of deployed instances of an application. Default: 1. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-count" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index | `0` | The instance id of the application: a number from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index-list | | A list of instance id's from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index-list" where 'foo' is the name of the binding. This setting will override the one set in 'spring.cloud.stream.instance-index'
|spring.cloud.stream.integration.message-handler-not-propagated-headers | | Message header names that will NOT be copied from the inbound message.
|spring.cloud.stream.kafka.binder.authorization-exception-retry-interval | | Time between retries after AuthorizationException is caught in the ListenerContainer; default is null, which disables retries. For more info see: {@link org.springframework.kafka.listener.ConsumerProperties#setAuthorizationExceptionRetryInterval(java.time.Duration)}
|spring.cloud.stream.kafka.binder.auto-add-partitions | `false` |
|spring.cloud.stream.kafka.binder.auto-alter-topics | `false` |
|spring.cloud.stream.kafka.binder.auto-create-topics | `true` |
|spring.cloud.stream.kafka.binder.brokers | `[localhost]` |
|spring.cloud.stream.kafka.binder.certificate-store-directory | | When a certificate store location is given as classpath URL (classpath:), then the binder moves the resource from the classpath location inside the JAR to a location on the filesystem. If this value is set, then this location is used, otherwise, the certificate file is copied to the directory returned by java.io.tmpdir.
|spring.cloud.stream.kafka.binder.configuration | | Arbitrary kafka properties that apply to both producers and consumers.
|spring.cloud.stream.kafka.binder.consider-down-when-any-partition-has-no-leader | `false` |
|spring.cloud.stream.kafka.binder.consumer-properties | | Arbitrary kafka consumer properties.
|spring.cloud.stream.kafka.binder.header-mapper-bean-name | | The bean name of a custom header mapper to use instead of a {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
|spring.cloud.stream.kafka.binder.headers | `[]` |
|spring.cloud.stream.kafka.binder.health-timeout | `60` | Time to wait to get partition information in seconds; default 60.
|spring.cloud.stream.kafka.binder.jaas | |
|spring.cloud.stream.kafka.binder.min-partition-count | `1` |
|spring.cloud.stream.kafka.binder.producer-properties | | Arbitrary kafka producer properties.
|spring.cloud.stream.kafka.binder.replication-factor | `-1` |
|spring.cloud.stream.kafka.binder.required-acks | `1` |
|spring.cloud.stream.kafka.binder.transaction.producer.batch-timeout | |
|spring.cloud.stream.kafka.binder.transaction.producer.buffer-size | |
|spring.cloud.stream.kafka.binder.transaction.producer.compression-type | |
|spring.cloud.stream.kafka.binder.transaction.producer.configuration | |
|spring.cloud.stream.kafka.binder.transaction.producer.error-channel-enabled | |
|spring.cloud.stream.kafka.binder.transaction.producer.header-mode | |
|spring.cloud.stream.kafka.binder.transaction.producer.header-patterns | |
|spring.cloud.stream.kafka.binder.transaction.producer.message-key-expression | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-count | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-key-expression | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-key-extractor-name | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-selector-expression | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-selector-name | |
|spring.cloud.stream.kafka.binder.transaction.producer.required-groups | |
|spring.cloud.stream.kafka.binder.transaction.producer.sync | |
|spring.cloud.stream.kafka.binder.transaction.producer.topic | |
|spring.cloud.stream.kafka.binder.transaction.producer.use-native-encoding | |
|spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix | |
|spring.cloud.stream.kafka.bindings | |
|spring.cloud.stream.metrics.export-properties | | List of properties that are going to be appended to each message. This gets populated by onApplicationEvent, once the context refreshes, to avoid the overhead of doing it on a per-message basis.
|spring.cloud.stream.metrics.key | | The name of the metric being emitted. Should be a unique value per application. Defaults to: ${spring.application.name:${vcap.application.name:${spring.config.name:application}}}.
|spring.cloud.stream.metrics.meter-filter | | Pattern to control the 'meters' one wants to capture. By default all 'meters' will be captured. For example, 'spring.integration.*' will only capture metric information for meters whose name starts with 'spring.integration'.
|spring.cloud.stream.metrics.properties | | Application properties that should be added to the metrics payload For example: `spring.application**`.
|spring.cloud.stream.metrics.schedule-interval | `60s` | Interval expressed as Duration for scheduling metrics snapshots publishing. Defaults to 60 seconds
|spring.cloud.stream.output-bindings | | A semi-colon delimited string to explicitly define output bindings (specifically for cases when there is no implicit trigger to create such bindings such as Function, Supplier or Consumer).
|spring.cloud.stream.override-cloud-connectors | `false` | This property is only applicable when the cloud profile is active and Spring Cloud Connectors are provided with the application. If the property is false (the default), the binder detects a suitable bound service (for example, a RabbitMQ service bound in Cloud Foundry for the RabbitMQ binder) and uses it for creating connections (usually through Spring Cloud Connectors). When set to true, this property instructs binders to completely ignore the bound services and rely on Spring Boot properties (for example, relying on the spring.rabbitmq.* properties provided in the environment for the RabbitMQ binder). The typical usage of this property is to be nested in a customized environment when connecting to multiple systems.
|spring.cloud.stream.pollable-source | `none` | A semi-colon delimited list of binding names of pollable sources. Binding names follow the same naming convention as functions. For example, the name '...pollable-source=foobar' will be accessible as the 'foobar-in-0' binding
|spring.cloud.stream.sendto.destination | `none` | The name of the header used to determine the name of the output destination
|spring.cloud.stream.source | | A semi-colon delimited string representing the names of the sources based on which source bindings will be created. This is primarily to support cases where source binding may be required without providing a corresponding Supplier. (e.g., for cases where the actual source of data is outside of scope of spring-cloud-stream - HTTP -> Stream) @deprecated use {@link #outputBindings}
|===

@@ -1,12 +1,65 @@
[[kafka-dlq-processing]]
=== Dead-Letter Topic Processing
Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
[[dlq-partition-selection]]
==== Dead-Letter Topic Partition Selection
By default, records are published to the Dead-Letter topic using the same partition as the original record.
This means the Dead-Letter topic must have at least as many partitions as the original topic.
To change this behavior, add a `DlqPartitionFunction` implementation as a `@Bean` to the application context.
Only one such bean can be present.
The function is provided with the consumer group, the failed `ConsumerRecord` and the exception.
For example, if you always want to route to partition 0, you might use:
====
[source, java]
----
@Bean
public DlqPartitionFunction partitionFunction() {
return (group, record, ex) -> 0;
}
----
====
NOTE: If you set a consumer binding's `dlqPartitions` property to 1 (and the binder's `minPartitionCount` is equal to `1`), there is no need to supply a `DlqPartitionFunction`; the framework will always use partition 0.
If you set a consumer binding's `dlqPartitions` property to a value greater than `1` (or the binder's `minPartitionCount` is greater than `1`), you **must** provide a `DlqPartitionFunction` bean, even if the partition count is the same as the original topic's.
It is also possible to define a custom name for the DLQ topic.
In order to do so, add an implementation of `DlqDestinationResolver` as a `@Bean` in the application context.
When the binder detects such a bean, it takes precedence; otherwise, the binder uses the `dlqName` property.
If neither of these is found, it defaults to `error.<destination>.<group>`.
Here is an example of `DlqDestinationResolver` as a `@Bean`.
====
[source]
----
@Bean
public DlqDestinationResolver dlqDestinationResolver() {
return (rec, ex) -> {
if (rec.topic().equals("word1")) {
return "topic1-dlq";
}
else {
return "topic2-dlq";
}
};
}
----
====
One important thing to keep in mind when providing an implementation for `DlqDestinationResolver` is that the provisioner in the binder will not auto create topics for the application.
This is because there is no way for the binder to infer the names of all the DLQ topics the implementation might send to.
Therefore, if you provide DLQ names using this strategy, it is the application's responsibility to ensure that those topics are created beforehand.
[[dlq-handling]]
==== Handling Records in a Dead-Letter Topic
Because the framework cannot anticipate how users would want to dispose of dead-lettered messages, it does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a "`parking lot`" topic after three attempts.
The application is another spring-cloud-stream application that reads from the dead-letter topic.
It terminates when no messages are received for 5 seconds.
It exits when no messages are received for 5 seconds.
The examples assume the original destination is `so8400out` and the consumer group is `so8400`.
@@ -88,7 +141,7 @@ public class ReRouteDlqKApplication implements CommandLineRunner {
int count = this.processed.get();
Thread.sleep(5000);
if (count == this.processed.get()) {
System.out.println("Idle, terminating");
System.out.println("Idle, exiting");
return;
}
}

@@ -4,7 +4,7 @@ set -e
# Set default props like MAVEN_PATH, ROOT_FOLDER etc.
function set_default_props() {
# The script should be executed from the root folder
# The script should be run from the root folder
ROOT_FOLDER=`pwd`
echo "Current folder is ${ROOT_FOLDER}"
@@ -65,7 +65,7 @@ function build_docs_if_applicable() {
}
# Get the name of the `docs.main` property
# Get whitelisted branches - assumes that a `docs` module is available under `docs` profile
# Get allowed branches - assumes that a `docs` module is available under `docs` profile
function retrieve_doc_properties() {
MAIN_ADOC_VALUE=$("${MAVEN_PATH}"mvn -q \
-Dexec.executable="echo" \
@@ -75,14 +75,14 @@ function retrieve_doc_properties() {
echo "Extracted 'main.adoc' from Maven build [${MAIN_ADOC_VALUE}]"
WHITELIST_PROPERTY=${WHITELIST_PROPERTY:-"docs.whitelisted.branches"}
WHITELISTED_BRANCHES_VALUE=$("${MAVEN_PATH}"mvn -q \
ALLOW_PROPERTY=${ALLOW_PROPERTY:-"docs.allowed.branches"}
ALLOWED_BRANCHES_VALUE=$("${MAVEN_PATH}"mvn -q \
-Dexec.executable="echo" \
-Dexec.args="\${${WHITELIST_PROPERTY}}" \
-Dexec.args="\${${ALLOW_PROPERTY}}" \
org.codehaus.mojo:exec-maven-plugin:1.3.1:exec \
-P docs \
-pl docs)
echo "Extracted '${WHITELIST_PROPERTY}' from Maven build [${WHITELISTED_BRANCHES_VALUE}]"
echo "Extracted '${ALLOW_PROPERTY}' from Maven build [${ALLOWED_BRANCHES_VALUE}]"
}
# Stash any outstanding changes
@@ -148,9 +148,9 @@ function copy_docs_for_current_version() {
else
echo -e "Current branch is [${CURRENT_BRANCH}]"
# https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin
if [[ ",${WHITELISTED_BRANCHES_VALUE}," = *",${CURRENT_BRANCH},"* ]] ; then
if [[ ",${ALLOWED_BRANCHES_VALUE}," = *",${CURRENT_BRANCH},"* ]] ; then
mkdir -p ${ROOT_FOLDER}/${CURRENT_BRANCH}
echo -e "Branch [${CURRENT_BRANCH}] is whitelisted! Will copy the current docs to the [${CURRENT_BRANCH}] folder"
echo -e "Branch [${CURRENT_BRANCH}] is allowed! Will copy the current docs to the [${CURRENT_BRANCH}] folder"
for f in docs/target/generated-docs/*; do
file=${f#docs/target/generated-docs/*}
if ! git ls-files -i -o --exclude-standard --directory | grep -q ^$file$; then
@@ -169,7 +169,7 @@ function copy_docs_for_current_version() {
done
COMMIT_CHANGES="yes"
else
echo -e "Branch [${CURRENT_BRANCH}] is not on the white list! Check out the Maven [${WHITELIST_PROPERTY}] property in
echo -e "Branch [${CURRENT_BRANCH}] is not on the allow list! Check out the Maven [${ALLOW_PROPERTY}] property in
[docs] module available under [docs] profile. Won't commit any changes to gh-pages for this branch."
fi
fi
@@ -250,10 +250,10 @@ the script will work in the following manner:
- if there's no gh-pages / target for docs module then the script ends
- for master branch the generated docs are copied to the root of gh-pages branch
- for any other branch (if that branch is whitelisted) a subfolder with branch name is created
- for any other branch (if that branch is allowed) a subfolder with branch name is created
and docs are copied there
- if the version switch is passed (-v) then a tag with (v) prefix will be retrieved and a folder
with that version number will be created in the gh-pages branch. WARNING! No whitelist verification will take place
with that version number will be created in the gh-pages branch. WARNING! No allow-list verification will take place
- if the destination switch is passed (-d) then the script will check if the provided dir is a git repo and then will
switch to gh-pages of that repo and copy the generated docs to `docs/<project-name>/<version>`
- if the destination switch is passed (-d) then the script will check if the provided dir is a git repo and then will
@@ -327,4 +327,4 @@ build_docs_if_applicable
retrieve_doc_properties
stash_changes
add_docs_from_target
checkout_previous_branch
checkout_previous_branch

@@ -19,7 +19,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
@@ -40,7 +40,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions.
The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -49,7 +49,7 @@ Also, 0.11.x.x does not support the `autoAddPartitions` property.
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
For common configuration options and properties pertaining to the binder, see the https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#binding-properties[binding properties] in core documentation.
==== Kafka Binder Properties
@@ -106,7 +106,11 @@ spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
Default: `1`.
NOTE: If you are using Kafka broker versions prior to 2.4, then this value should be set to at least `1`.
Starting with version 3.0.8, the binder uses `-1` as the default value, which indicates that the broker 'default.replication.factor' property will be used to determine the number of replicas.
Check with your Kafka broker admins to see if there is a policy in place that requires a minimum replication factor. If that is the case then, typically, `default.replication.factor` matches that value, and `-1` should be used unless you need a replication factor greater than the minimum.
+
Default: `-1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
@@ -135,14 +139,31 @@ Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
spring.cloud.stream.kafka.binder.considerDownWhenAnyPartitionHasNoLeader::
Flag to set the binder health as `down` when any partition on the topic, regardless of the consumer that is receiving data from it, is found without a leader.
+
Default: `false`.
spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
This is true for both broker level certificates (`ssl.truststore.location` and `ssl.keystore.location`) and certificates intended for schema registry (`schema.registry.ssl.truststore.location` and `schema.registry.ssl.keystore.location`).
Keep in mind that the truststore and keystore classpath locations must be provided under `spring.cloud.stream.kafka.binder.configuration...`.
For example, `spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location`, `spring.cloud.stream.kafka.binder.configuration.schema.registry.ssl.truststore.location`, etc.
The file will be moved to the location specified as the value for this property which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it will be moved to System's temp directory as returned by `System.getProperty("java.io.tmpdir")`.
This is also true if this value is set but the directory cannot be found on the filesystem or is not writable.
+
Default: none.
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
@@ -170,9 +191,15 @@ By default, offsets are committed after all records in the batch of records retu
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
This property is deprecated as of 3.1 in favor of using `ackMode`.
If the `ackMode` is not set and batch mode is not enabled, `RECORD` ackMode will be used.
+
Default: `false`.
autoCommitOffset::
Starting with version 3.1, this property is deprecated.
See `ackMode` for more details on alternatives.
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` of the type `org.springframework.kafka.support.Acknowledgment` header is present in the inbound message.
Applications may use this header for acknowledging messages.
@@ -181,39 +208,56 @@ When this property is set to `false`, Kafka binder sets the ack mode to `org.spr
Also see `ackEachRecord`.
+
Default: `true`.
ackMode::
Specify the container ack mode.
This is based on the AckMode enumeration defined in Spring Kafka.
If the `ackEachRecord` property is set to `true` and the consumer is not in batch mode, then the `RECORD` ack mode is used; otherwise, the ack mode provided through this property is used.
autoCommitOnError::
In pollable consumers, if set to `true`, it always auto commits on error.
If not set (the default) or false, it will not auto commit in pollable consumers.
Note that this property is only applicable for pollable consumers.
+
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaBindingRebalanceListener` is provided; see <<rebalance-listener>>.
See <<reset-offsets>> for more information about this property.
+
Default: `false`.
startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
Also see `resetOffsets` (earlier in this list).
See <<reset-offsets>> for more information about this property.
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configured by setting the `dlqName` property or by defining a `@Bean` of type `DlqDestinationResolver`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to Kafka consumer properties, other configuration properties can be passed here.
For example, some properties needed by the application, such as `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (i.e., `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
This must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
txCommitRecovered::
When using a transactional binder, the offset of a recovered record (e.g. when retries are exhausted and the record is sent to a dead letter topic) will be committed via a new transaction, by default.
Setting this property to `false` suppresses committing the offset of a recovered record.
+
Default: true.
commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to configure an error handler per binding when the application does not want to use a `ListenerContainerCustomizer` and check the destination/group combination to set one.
+
Default: none.
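For illustration, the following is a sketch of providing such a bean and pointing a binding at it (the bean name, binding name, and back-off values are placeholders):
```
@Bean("myErrorHandler")
public CommonErrorHandler myErrorHandler() {
    // Retry twice, one second apart, before failing the record.
    return new DefaultErrorHandler(new FixedBackOff(1000L, 2L));
}
```
```
spring.cloud.stream.kafka.bindings.process-in-0.consumer.commonErrorHandlerBeanName=myErrorHandler
```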
[[reset-offsets]]
==== Resetting Offsets
When an application starts, the initial position in each assigned partition depends on two properties `startOffset` and `resetOffsets`.
If `resetOffsets` is `false`, normal Kafka consumer https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset[`auto.offset.reset`] semantics apply.
That is, if there is no committed offset for a partition for the binding's consumer group, the position is `earliest` or `latest`.
By default, bindings with an explicit `group` use `earliest`, and anonymous bindings (with no `group`) use `latest`.
These defaults can be overridden by setting the `startOffset` binding property.
There will be no committed offset(s) the first time the binding is started with a particular `group`.
The other condition under which no committed offset exists is when the offset has expired.
With modern brokers (since 2.1), and default broker properties, the offsets are expired 7 days after the last member leaves the group.
See the https://kafka.apache.org/documentation/#brokerconfigs_offsets.retention.minutes[`offsets.retention.minutes`] broker property for more information.
When `resetOffsets` is `true`, the binder applies similar semantics to those that apply when there is no committed offset on the broker, as if this binding has never consumed from the topic; i.e. any current committed offset is ignored.
The following are two use cases where this might be used.
1. Consuming from a compacted topic containing key/value pairs.
Set `resetOffsets` to `true` and `startOffset` to `earliest`; the binding will perform a `seekToBeginning` on all newly assigned partitions.
2. Consuming from a topic containing events, where you are only interested in events that occur while this binding is running.
Set `resetOffsets` to `true` and `startOffset` to `latest`; the binding will perform a `seekToEnd` on all newly assigned partitions.
IMPORTANT: If a rebalance occurs after the initial assignment, the seeks will only be performed on any newly assigned partitions that were not assigned during the initial assignment.
For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets` should not be set to `true`, otherwise, that will cause an error.
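For example, the first use case could be configured as follows (a sketch; the binding name `input` is a placeholder):
```
spring.cloud.stream.kafka.bindings.input.consumer.resetOffsets=true
spring.cloud.stream.kafka.bindings.input.consumer.startOffset=earliest
```
With this in place, every restart of the binding seeks to the beginning of all assigned partitions, regardless of any committed offsets.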
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.bindings.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
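For example, a functional consumer that receives each polled batch as a list might look like the following sketch (the binding name `batchInput-in-0` and the `String` payload type are assumptions):
```
@Bean
public Consumer<List<String>> batchInput() {
    // Each invocation receives the full batch returned by one poll.
    return batch -> batch.forEach(System.out::println);
}
```
```
spring.cloud.stream.bindings.batchInput-in-0.consumer.batch-mode=true
```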
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
In the case of a regular processor (`Function<String, String>` or `Function<Message<?>, Message<?>>`), if the produced key needs to be the same as the incoming key from the topic, this property can be set as below.
`spring.cloud.stream.kafka.bindings.<output-binding-name>.producer.messageKeyExpression: headers['kafka_receivedMessageKey']`
There is an important caveat to keep in mind for reactive functions.
In that case, it is up to the application to manually copy the headers from the incoming messages to outbound messages.
You can set the header, e.g. `myKey` and use `headers['myKey']` as suggested above or, for convenience, simply set the `KafkaHeaders.MESSAGE_KEY` header, and you do not need to set this property at all.
+
Default: `none`.
headerPatterns::
For example, `!ask,as*` will pass `ash` but not `ask`.
Default: `*` (all headers - except the `id` and `timestamp`)
configuration::
Map with a key/value pair containing generic Kafka producer properties.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
topic.properties::
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
+
Default: `false`.
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
+
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);`
+
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
+
Default: null.
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` and `partitionCount`), the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy`, `lz4` and `zstd`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
closeTimeout::
Timeout in number of seconds to wait for when closing the producer.
+
Default: `30`
allowNonTransactional::
Normally, all output bindings associated with a transactional binder will publish in a new transaction, if one is not already in process.
This property allows you to override that behavior.
If set to true, records published to this output binding will not be run in a transaction, unless one is already in process.
+
Default: `false`
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
===== Example: Setting `ackMode` to `MANUAL` and Relying on Manual Acknowledgement
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.ackMode` be set to `MANUAL`.
Use the corresponding input channel name for your example.
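The following is a minimal sketch, assuming a functional consumer bound to `process-in-0` (the binding name and payload type are placeholders):
====
[source, java]
----
@Bean
public Consumer<Message<String>> process() {
    return message -> {
        // The binder populates this header when ackMode is MANUAL or MANUAL_IMMEDIATE.
        Acknowledgment acknowledgment = message.getHeaders()
                .get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            acknowledgment.acknowledge();
        }
    };
}
----
====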
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
====== Multi-binder configuration and JAAS
When connecting to multiple clusters, each of which requires a separate JAAS configuration, set the JAAS configuration using the property `sasl.jaas.config`.
When this property is present in the application, it takes precedence over the other strategies mentioned above.
See this https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+configuration+for+Kafka+clients[KIP-85] for more details.
For example, if you have two clusters in your application with separate JAAS configuration, then the following is a template that you can use:
```
spring.cloud.stream:
  binders:
    kafka1:
      type: kafka
      environment:
        spring:
          cloud:
            stream:
              kafka:
                binder:
                  brokers: localhost:9092
                  configuration.sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin-secret\";"
    kafka2:
      type: kafka
      environment:
        spring:
          cloud:
            stream:
              kafka:
                binder:
                  brokers: localhost:9093
                  configuration.sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"user1\" password=\"user1-secret\";"
  kafka.binder:
    configuration:
      security.protocol: SASL_PLAINTEXT
      sasl.mechanism: PLAIN
```
Note that both Kafka clusters, and the `sasl.jaas.config` values for each of them, are different in the above configuration.
See this https://github.com/spring-cloud/spring-cloud-stream-samples/tree/main/multi-binder-samples/kafka-multi-binder-jaas[sample application] for more details on how to set up and run such an application.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer
public class Application {
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions because the retries will run in the original transaction, which may be rolled back and any published records will be rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for a producer-only transaction (e.g. a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
@Value("${unique.tx.id.per.instance}") String txId) {
ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then you would use normal Spring transaction support, e.g. `TransactionTemplate` or `@Transactional`, for example:
====
[source, java]
----
public static class Sender {
@Transactional
public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
}
}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
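For example, the prefix could incorporate a per-instance value supplied through the environment (a sketch; the `INSTANCE_INDEX` variable is a placeholder for whatever your platform provides):
```
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-${INSTANCE_INDEX}-
```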
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling[this section on error handling] for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
You can consume these exceptions with your own Spring Integration flow.
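For example, a service activator could log the failures (a minimal sketch; the channel name `myTopic.errors` is a placeholder for your producer binding's error channel):
```
@ServiceActivator(inputChannel = "myTopic.errors")
public void handleSendFailure(ErrorMessage errorMessage) {
    // For producer send failures, the payload is a KafkaSendFailureException.
    System.out.println("Send failed: " + errorMessage.getPayload().getMessage());
}
```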
Kafka binder module exposes the following metrics:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer library.
The binder creates the `KafkaBinderMetrics` bean if Micrometer is on the classpath and no other such bean is provided by the application.
The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
You can prevent `KafkaBinderMetrics` from creating the necessary infrastructure (such as consumers) and reporting the metrics by providing the following component in the application.
```
@Component
class NoOpBindingMeters {
NoOpBindingMeters(MeterRegistry registry) {
registry.config().meterFilter(
MeterFilter.denyNameStartsWith(KafkaBinderMetrics.OFFSET_LAG_METRIC_NAME));
}
}
```
More details on how to suppress meters selectively can be found https://micrometer.io/docs/concepts#_meter_filters[here].
[[kafka-tombstones]]
=== Tombstone Records (null record values)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
        @Payload(required = false) Customer customer) {
    // customer is null for a tombstone record
    ...
}
----
====
[[rebalance-listener]]
=== Using a KafkaBindingRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
public interface KafkaBindingRebalanceListener {

    ...

}
----
====
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
[[retry-and-dlq-processing]]
=== Retry and Dead Letter Processing
By default, when you configure retry (e.g. `maxAttempts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
There are situations where it is preferable to move this functionality to the listener container, such as:
* The aggregate of retries and delays will exceed the consumer's `max.poll.interval.ms` property, potentially causing a partition rebalance.
* You wish to publish the dead letter to a different Kafka cluster.
* You wish to add retry listeners to the error handler.
* ...
To configure moving this functionality from the binder to the container, define a `@Bean` of type `ListenerContainerWithDlqAndRetryCustomizer`.
This interface has the following methods:
====
[source, java]
----
/**
* Configure the container.
* @param container the container.
* @param destinationName the destination name.
* @param group the group.
* @param dlqDestinationResolver a destination resolver for the dead letter topic (if
* enableDlq).
* @param backOff the backOff using retry properties (if configured).
* @see #retryAndDlqInBinding(String, String)
*/
void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group,
@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff);
/**
* Return false to move retries and DLQ from the binding to a customized error handler
* using the retry metadata and/or a {@code DeadLetterPublishingRecoverer} when
* configured via
* {@link #configure(AbstractMessageListenerContainer, String, String, BiFunction, BackOff)}.
* @param destinationName the destination name.
* @param group the group.
* @return true to retain retries and DLQ in the binding
*/
default boolean retryAndDlqInBinding(String destinationName, String group) {
return true;
}
----
====
The destination resolver and `BackOff` are created from the binding properties (if configured).
You can then use these to create a custom error handler and dead letter publisher; for example:
====
[source, java]
----
@Bean
ListenerContainerWithDlqAndRetryCustomizer cust(KafkaTemplate<?, ?> template) {
return new ListenerContainerWithDlqAndRetryCustomizer() {
@Override
public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
String group,
@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff) {
if (destinationName.equals("topicWithLongTotalRetryConfig")) {
ConsumerRecordRecoverer dlpr = new DeadLetterPublishingRecoverer(template,
        dlqDestinationResolver);
container.setCommonErrorHandler(new DefaultErrorHandler(dlpr, backOff));
}
}
@Override
public boolean retryAndDlqInBinding(String destinationName, String group) {
return !destinationName.contains("topicWithLongTotalRetryConfig");
}
};
}
----
====
Now, only a single retry delay needs to be greater than the consumer's `max.poll.interval.ms` property.
[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration
If you want advanced customization of consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka,
you can implement the following customizers.
* ConsumerConfigCustomizer
* ProducerConfigCustomizer
Both of these interfaces provide a way to configure the config map used for consumer and producer properties.
For example, if you want to gain access to a bean that is defined at the application level, you can inject that in the implementation of the `configure` method.
When the binder discovers that these customizers are available as beans, it will invoke the `configure` method right before creating the consumer and producer factories.
Both of these interfaces also provide access to both the binding and destination names so that they can be accessed while customizing producer and consumer properties.
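For illustration, the following sketch adjusts a consumer property for one specific binding (the binding name and property value are placeholders, and the three-argument `configure` variant with the binding and destination names is assumed):
```
@Bean
public ConsumerConfigCustomizer consumerConfigCustomizer() {
    return new ConsumerConfigCustomizer() {

        @Override
        public void configure(Map<String, Object> consumerProperties, String bindingName, String destination) {
            // Limit the records returned per poll for this binding only.
            if ("process-in-0".equals(bindingName)) {
                consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
            }
        }

    };
}
```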
[[admin-client-config-customization]]
=== Customizing AdminClient Configuration
As with the consumer and producer config customization above, applications can also customize the configuration for admin clients by providing an `AdminClientConfigCustomizer`.
The `configure` method of `AdminClientConfigCustomizer` provides access to the admin client properties, through which you can define further customization.
The binder's Kafka topic provisioner gives the properties provided through this customizer the highest precedence.
Here is an example of providing this customizer bean.
```
@Bean
public AdminClientConfigCustomizer adminClientConfigCustomizer() {
return props -> {
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
};
}
```
[[custom-kafka-binder-health-indicator]]
=== Custom Kafka Binder Health Indicator
Kafka binder activates a default health indicator when Spring Boot actuator is on the classpath.
This health indicator checks the health of the binder and any communication issues with the Kafka broker.
If an application wants to disable this default health check implementation and include a custom implementation, then it can provide an implementation of the `KafkaBinderHealth` interface.
`KafkaBinderHealth` is a marker interface that extends from `HealthIndicator`.
In the custom implementation, it must provide an implementation for the `health()` method.
The custom implementation must be present in the application configuration as a bean.
When the binder discovers the custom implementation, it will use that instead of the default implementation.
Here is an example of such a custom implementation bean in the application.
```
@Bean
public KafkaBinderHealth kafkaBinderHealthIndicator() {
    return new KafkaBinderHealth() {

        @Override
        public Health health() {
            // Custom implementation details; return an appropriate status.
            return Health.up().build();
        }

    };
}
```


Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan
// ======================================================================================
*{project-version}*
[#index-link]
{docs-url}spring-cloud-stream/{docs-version}home.html
= Reference Guide
include::overview.adoc[]
include::partitions.adoc[]
include::kafka-streams.adoc[]
include::tips.adoc[]
= Appendices
[appendix]
include::building.adoc[]

== Tips, Tricks and Recipes
=== Simple DLQ with Kafka
==== Problem Statement
As a developer, I want to write a consumer application that processes records from a Kafka topic.
However, if some error occurs in processing, I don't want the application to stop completely.
Instead, I want to send the record in error to a DLT (Dead-Letter-Topic) and then continue processing new records.
==== Solution
The solution for this problem is to use the DLQ feature in Spring Cloud Stream.
For the purposes of this discussion, let us assume that the following is our processor function.
```
@Bean
public Consumer<byte[]> processData() {
    return s -> {
        throw new RuntimeException();
    };
}
```
This is a very trivial function that throws an exception for all the records that it processes, but you can take this function and extend it to any other similar situations.
In order to send the records in error to a DLT, we need to provide the following configuration.
```
spring.cloud.stream:
bindings:
processData-in-0:
group: my-group
destination: input-topic
kafka:
bindings:
processData-in-0:
consumer:
enableDlq: true
dlqName: input-topic-dlq
```
In order to activate DLQ, the application must provide a group name.
Anonymous consumers cannot use the DLQ facilities.
We also need to enable DLQ by setting the `enableDlq` property on the Kafka consumer binding to `true`.
Finally, we can optionally provide the DLT name through the `dlqName` property on the Kafka consumer binding, which otherwise defaults to `error.input-topic.my-group` in this case.
Note that in the example consumer provided above, the type of the payload is `byte[]`.
By default, the DLQ producer in Kafka binder expects the payload of type `byte[]`.
If that is not the case, then we need to provide the configuration for the proper serializer.
For example, let us re-write the consumer function as below:
```
@Bean
public Consumer<String> processData() {
    return s -> {
        throw new RuntimeException();
    };
}
```
Now, we need to tell Spring Cloud Stream how we want to serialize the data when writing to the DLT.
Here is the modified configuration for this scenario:
```
spring.cloud.stream:
bindings:
processData-in-0:
group: my-group
destination: input-topic
kafka:
bindings:
processData-in-0:
consumer:
enableDlq: true
dlqName: input-topic-dlq
dlqProducerProperties:
configuration:
value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
=== DLQ with Advanced Retry Options
==== Problem Statement
This is similar to the recipe above, but as a developer I would like to configure the way retries are handled.
==== Solution
If you followed the above recipe, then you get the default retry options built into the Kafka binder when the processing encounters an error.
By default, the binder retries for a maximum of 3 attempts, with an initial delay of one second, a 2.0 multiplier for each back off, and a maximum delay of 10 seconds.
You can change all these configurations as below:
```
spring.cloud.stream.bindings.processData-in-0.consumer.maxAttempts
spring.cloud.stream.bindings.processData-in-0.consumer.backOffInitialInterval
spring.cloud.stream.bindings.processData-in-0.consumer.backOffMultiplier
spring.cloud.stream.bindings.processData-in-0.consumer.backOffMaxInterval
```
If you want, you can also provide a list of retryable exceptions by providing a map of exception types to boolean values.
For example,
```
spring.cloud.stream.bindings.processData-in-0.consumer.retryableExceptions.java.lang.IllegalStateException=true
spring.cloud.stream.bindings.processData-in-0.consumer.retryableExceptions.java.lang.IllegalArgumentException=false
```
By default, any exceptions not listed in the map above will be retried.
If that is not desired, then you can disable that behavior by providing:
```
spring.cloud.stream.bindings.processData-in-0.consumer.defaultRetryable=false
```
You can also provide your own `RetryTemplate` and mark it as `@StreamRetryTemplate` which will be scanned and used by the binder.
This is useful when you want more sophisticated retry strategies and policies.
If you have multiple `@StreamRetryTemplate` beans, then you can specify which one your binding wants by using the property,
```
spring.cloud.stream.bindings.processData-in-0.consumer.retry-template-name=<your-retry-template-bean-name>
```
=== Handling Deserialization errors with DLQ
==== Problem Statement
I have a processor that encounters a deserialization exception in the Kafka consumer.
I would expect that the Spring Cloud Stream DLQ mechanism will catch that scenario, but it does not.
How can I handle this?
==== Solution
The normal DLQ mechanism offered by Spring Cloud Stream will not help when the Kafka consumer throws an irrecoverable deserialization exception.
This is because this exception occurs even before the consumer's `poll()` method returns.
The Spring for Apache Kafka project offers some great ways to help the binder with this situation.
Let us explore those.
Assuming this is our function:
```
@Bean
public Consumer<String> functionName() {
    return s -> {
        System.out.println(s);
    };
}
```
It is a trivial function that takes a `String` parameter.
We want to bypass the message converters provided by Spring Cloud Stream and want to use native deserializers instead.
In the case of `String` types, it does not make much sense, but for more complex types, such as Avro, you have to rely on external deserializers and therefore want to delegate the conversion to Kafka.
Now, when the consumer receives the data, let us assume that there is a bad record that causes a deserialization error; for example, maybe someone passed an `Integer` instead of a `String`.
In that case, if you don't do something in the application, the exception will be propagated through the chain and your application will eventually exit.
In order to handle this, you can add a `ListenerContainerCustomizer` `@Bean` that configures a `SeekToCurrentErrorHandler`.
This `SeekToCurrentErrorHandler` is configured with a `DeadLetterPublishingRecoverer`.
We also need to configure an `ErrorHandlingDeserializer` for the consumer.
That sounds like a lot of complex things, but in reality, it boils down to these three beans in this case.
```
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
    return (container, dest, group) -> {
        container.setErrorHandler(errorHandler);
    };
}
```
```
@Bean
public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
    return new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
}
```
```
@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
    return new DeadLetterPublishingRecoverer(bytesTemplate);
}
```
Let us analyze each of them.
The first one is the `ListenerContainerCustomizer` bean that takes a `SeekToCurrentErrorHandler`.
The container is now customized with that particular error handler.
You can learn more about container customization https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_advanced_consumer_configuration[here].
The second bean is the `SeekToCurrentErrorHandler`, configured to publish to a `DLT`.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#seek-to-current[here] for more details on `SeekToCurrentErrorHandler`.
The third bean is the `DeadLetterPublishingRecoverer` that is ultimately responsible for sending to the `DLT`.
By default, the DLT is named `<original-topic-name>.DLT`.
You can change that though.
See the https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters[docs] for more details.
We also need to configure an https://docs.spring.io/spring-kafka/docs/current/reference/html/#error-handling-deserializer[ErrorHandlingDeserializer] through application config.
The `ErrorHandlingDeserializer` delegates to the actual deserializer.
In case of errors, it sets the key/value of the record to `null` and includes the raw bytes of the message.
It then sets the exception in a header and passes this record to the listener, which then calls the registered error handler.
Following is the configuration required:
```
spring.cloud.stream:
function:
definition: functionName
bindings:
functionName-in-0:
group: group-name
destination: input-topic
consumer:
use-native-decoding: true
kafka:
bindings:
functionName-in-0:
consumer:
enableDlq: true
dlqName: dlq-topic
dlqProducerProperties:
configuration:
value.serializer: org.apache.kafka.common.serialization.StringSerializer
configuration:
value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
```
We are providing the `ErrorHandlingDeserializer` through the `configuration` property on the binding.
We are also indicating that the actual deserializer to delegate is the `StringDeserializer`.
Keep in mind that none of the DLQ properties above are relevant for the discussion in this recipe.
They are purely meant for addressing application-level errors.
=== Basic offset management in Kafka binder
==== Problem Statement
I want to write a Spring Cloud Stream Kafka consumer application and am not sure about how it manages Kafka consumer offsets.
Can you explain?
==== Solution
We encourage you to read the https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current/reference/html/spring-cloud-stream-binder-kafka.html#reset-offsets[docs] section on this to get a thorough understanding of it.
Here it is in a gist:
Kafka supports two types of offsets to start with by default - `earliest` and `latest`.
Their semantics are self-explanatory from their names.
Assume that you are running the consumer for the first time.
If you omit the group.id in your Spring Cloud Stream application, then it becomes an anonymous consumer.
With an anonymous consumer, the Spring Cloud Stream application by default starts from the `latest` available offset in the topic partition.
On the other hand, if you explicitly specify a group.id, then by default, the Spring Cloud Stream application starts from the `earliest` available offset in the topic partition.
In both cases above (consumers with explicit groups and anonymous groups), the starting offset can be switched around by using the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.startOffset` and setting it to either `earliest` or `latest`.
Now, assume that you already ran the consumer before and are now starting it again.
In this case, the starting offset semantics above do not apply, as the consumer finds an already committed offset for the consumer group (in the case of an anonymous consumer, although the application does not provide a group.id, the binder auto-generates one for you).
It simply picks up from the last committed offset onward.
This is true even when you have a `startOffset` value provided.
However, you can override the default behavior where the consumer starts from the last committed offset by using the `resetOffsets` property.
In order to do that, set the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.resetOffsets` to `true` (which is `false` by default).
Then make sure you provide the `startOffset` value (either `earliest` or `latest`).
When you do that and then start the consumer application, each time you start, it starts as if it were starting for the first time, ignoring any committed offsets for the partition.
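Putting it together, the following sketch forces a binding to re-consume from the beginning on every start (the binding name `processData-in-0` is a placeholder):
```
spring.cloud.stream.kafka.bindings.processData-in-0.consumer.resetOffsets: true
spring.cloud.stream.kafka.bindings.processData-in-0.consumer.startOffset: earliest
```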
=== Seeking to arbitrary offsets in Kafka
==== Problem Statement
Using Kafka binder, I know that it can set the offset to either `earliest` or `latest`, but I have a requirement to seek the offset to something in the middle, an arbitrary offset.
Is there a way to achieve this using the Spring Cloud Stream Kafka binder?
==== Solution
Previously we saw how Kafka binder allows you to tackle basic offset management.
By default, the binder does not allow you to rewind to an arbitrary offset, at least through the mechanism we saw in that recipe.
However, there are some low-level strategies that the binder provides to achieve this use case.
Let's explore them.
First of all, when you want to reset to an arbitrary offset other than `earliest` or `latest`, make sure to leave the `resetOffsets` configuration at its default, which is `false`.
Then you have to provide a custom bean of type `KafkaBindingRebalanceListener`, which will be injected into all consumer bindings.
It is an interface that comes with a few default methods, but here is the method that we are interested in:
```
/**
* Invoked when partitions are initially assigned or after a rebalance. Applications
* might only want to perform seek operations on an initial assignment. While the
* 'initial' argument is true for each thread (when concurrency is greater than 1),
* implementations should keep track of exactly which partitions have been sought.
* There is a race in that a rebalance could occur during startup and so a topic/
* partition that has been sought on one thread may be re-assigned to another
* thread and you may not wish to re-seek it at that time.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
* @param initial true if this is the initial assignment on the current thread.
*/
default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
Collection<TopicPartition> partitions, boolean initial) {
// do nothing
}
```
Let us look at the details.
In essence, this method will be invoked each time during the initial assignment for a topic partition or after a rebalance.
For better illustration, let us assume that our topic is `foo` and it has 4 partitions.
Initially, we are only starting a single consumer in the group and this consumer will consume from all partitions.
When the consumer starts for the first time, all four partitions are initially assigned.
However, we do not want the partitions to start consuming at the default offset (`earliest`, since we define a group); rather, for each partition, we want them to consume after seeking to arbitrary offsets.
Imagine that you have a business case to consume from certain offsets as below.
```
Partition    Start offset
0            1000
1            2000
2            2000
3            1000
```
This could be achieved by implementing the above method as below.
```
@Override
public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer, Collection<TopicPartition> partitions, boolean initial) {
    // Desired starting offsets per topic partition.
    Map<TopicPartition, Long> topicPartitionOffset = new HashMap<>();
    topicPartitionOffset.put(new TopicPartition("foo", 0), 1000L);
    topicPartitionOffset.put(new TopicPartition("foo", 1), 2000L);
    topicPartitionOffset.put(new TopicPartition("foo", 2), 2000L);
    topicPartitionOffset.put(new TopicPartition("foo", 3), 1000L);
    if (initial) {
        partitions.forEach(tp -> {
            if (topicPartitionOffset.containsKey(tp)) {
                final Long offset = topicPartitionOffset.get(tp);
                try {
                    consumer.seek(tp, offset);
                }
                catch (Exception e) {
                    // Handle exceptions carefully.
                }
            }
        });
    }
}
```
This is just a rudimentary implementation.
Real world use cases are much more complex than this and you need to adjust accordingly, but this certainly gives you a basic sketch.
When consumer `seek` fails, it may throw some runtime exceptions and you need to decide what to do in those cases.
==== What if we start a second consumer with the same group id?
When we add a second consumer, a rebalance will occur and some partitions will be moved around.
Let's say that the new consumer gets partitions `2` and `3`.
When this new Spring Cloud Stream consumer calls the `onPartitionsAssigned` method, it will see that this is the initial assignment for partitions `2` and `3` on this consumer.
Therefore, it will do the seek operation because of the conditional check on the `initial` argument.
In the case of the first consumer, it now only has partitions `0` and `1`.
However, for this consumer it was simply a rebalance event and not considered an initial assignment.
Thus, it will not re-seek to the given offsets, because of the conditional check on the `initial` argument.
=== How do I manually acknowledge using Kafka binder?
==== Problem Statement
Using Kafka binder, I want to manually acknowledge messages in my consumer.
How do I do that?
==== Solution
By default, the Kafka binder delegates to the default commit settings of the Spring for Apache Kafka project.
The default `ackMode` in Spring for Apache Kafka is `BATCH`.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#committing-offsets[here] for more details on that.
There are situations in which you want to disable this default commit behavior and rely on manual commits.
The following steps allow you to do that.
Set the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.ackMode` to either `MANUAL` or `MANUAL_IMMEDIATE`.
When set this way, a header called `kafka_acknowledgment` (from `KafkaHeaders.ACKNOWLEDGMENT`) is present in the message received by the consumer method.
For example, imagine this as your consumer method.
```
@Bean
public Consumer<Message<String>> myConsumer() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    };
}
```
Then you set the property `spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.ackMode` to `MANUAL` or `MANUAL_IMMEDIATE`.
=== How do I override the default binding names in Spring Cloud Stream?
==== Problem Statement
Spring Cloud Stream creates default bindings based on the function definition and signature, but how do I override these with more domain-friendly names?
==== Solution
Assume that the following is your function signature.
```
@Bean
public Function<String, String> uppercase(){
...
}
```
By default, Spring Cloud Stream will create the bindings as below.
1. uppercase-in-0
2. uppercase-out-0
You can override these bindings to something else by using the following properties.
```
spring.cloud.stream.function.bindings.uppercase-in-0=my-transformer-in
spring.cloud.stream.function.bindings.uppercase-out-0=my-transformer-out
```
After this, all binding properties must be set on the new names, `my-transformer-in` and `my-transformer-out`.
Here is another example with Kafka Streams and multiple inputs.
```
@Bean
public BiFunction<KStream<String, Order>, KTable<String, Account>, KStream<String, EnrichedOrder>> processOrder() {
...
}
```
By default, Spring Cloud Stream will create three different binding names for this function.
1. processOrder-in-0
2. processOrder-in-1
3. processOrder-out-0
You have to use these binding names each time you want to set some configuration on these bindings.
Suppose you don't like that and want to use more domain-friendly and readable binding names, for example, something like:
1. orders
2. accounts
3. enrichedOrders
You can easily do that by setting these three properties:
1. spring.cloud.stream.function.bindings.processOrder-in-0=orders
2. spring.cloud.stream.function.bindings.processOrder-in-1=accounts
3. spring.cloud.stream.function.bindings.processOrder-out-0=enrichedOrders
Once you do that, the default binding names are overridden, and any properties that you want to set on them must use these new binding names.
=== How do I send a message key as part of my record?
==== Problem Statement
I need to send a key along with the payload of the record, is there a way to do that in Spring Cloud Stream?
==== Solution
It is often necessary to send associative data as a record with both a key and a value.
Spring Cloud Stream allows you to do that in a straightforward manner.
Following is a basic blueprint for doing this, but you may want to adapt it to your particular use case.
Here is sample producer method (aka `Supplier`).
```
@Bean
public Supplier<Message<String>> supplier() {
return () -> MessageBuilder.withPayload("foo").setHeader(KafkaHeaders.MESSAGE_KEY, "my-foo").build();
}
```
This is a trivial function that sends a message with a `String` payload, but also with a key.
Note that we set the key as a message header using `KafkaHeaders.MESSAGE_KEY`.
If you want to change the key from the default `kafka_messageKey`, then we need to specify this property in the configuration:
```
spring.cloud.stream.kafka.bindings.supplier-out-0.producer.messageKeyExpression=headers['my-special-key']
```
Please note that we use the binding name `supplier-out-0`, since that is our function name; update it accordingly.
Then, we use this new key when we produce the message.
=== How do I use native serializer and deserializer instead of message conversion done by Spring Cloud Stream?
==== Problem Statement
Instead of using the message converters in Spring Cloud Stream, I want to use native Serializer and Deserializer in Kafka.
By default, Spring Cloud Stream takes care of this conversion using its internal built-in message converters.
How can I bypass this and delegate the responsibility to Kafka?
==== Solution
This is really easy to do.
All you have to do is to provide the following property to enable native serialization.
```
spring.cloud.stream.kafka.bindings.<binding-name>.producer.useNativeEncoding: true
```
Then, you need to also set the serializers.
There are a couple of ways to do this.
```
spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
or using the binder configuration.
```
spring.cloud.stream.kafka.binder.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.binder.configuration.value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
When set at the binder level, the serializers are applied to all bindings, whereas setting them at the binding level applies per binding.
On the deserializing side, you just need to provide the deserializers as configuration.
For example,
```
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.configuration.key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.configuration.value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
```
You can also set them at the binder level.
There is an optional property that you can set to force native decoding.
```
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.useNativeDecoding: true
```
However, in the case of the Kafka binder, this is unnecessary, as by the time the record reaches the binder, Kafka has already deserialized it using the configured deserializers.
=== Explain how offset resetting works in Kafka Streams binder
==== Problem Statement
By default, Kafka Streams binder always starts from the earliest offset for a new consumer.
Sometimes, it is beneficial or required by the application to start from the latest offset.
Kafka Streams binder allows you to do that.
==== Solution
Before we look at the solution, let us look at the following scenario.
```
@Bean
public BiConsumer<KStream<Object, Object>, KTable<Object, Object>> myBiConsumer() {
    return (s, t) -> s.join(t, ...);
}
```
We have a `BiConsumer` bean that requires two input bindings.
In this case, the first binding is for a `KStream` and the second one is for a `KTable`.
When running this application for the first time, by default, both bindings start from the `earliest` offset.
What if I want to start from the `latest` offset due to some requirement?
You can do this by enabling the following properties.
```
spring.cloud.stream.kafka.streams.bindings.myBiConsumer-in-0.consumer.startOffset: latest
spring.cloud.stream.kafka.streams.bindings.myBiConsumer-in-1.consumer.startOffset: latest
```
If you want only one binding to start from the `latest` offset and the other to consume from the default `earliest`, then leave the latter binding out of the configuration.
Keep in mind that, once there are committed offsets, these settings are *not* honored and the committed offsets take precedence.
=== Keeping track of successful sending of records (producing) in Kafka
==== Problem Statement
I have a Kafka producer application and I want to keep track of all my successful sends.
==== Solution
Let us assume that we have the following supplier in the application.
```
@Bean
public Supplier<Message<String>> supplier() {
return () -> MessageBuilder.withPayload("foo").setHeader(KafkaHeaders.MESSAGE_KEY, "my-foo").build();
}
```
Then, we need to define a new `MessageChannel` bean to capture all the successful send information.
```
@Bean
public MessageChannel fooRecordChannel() {
return new DirectChannel();
}
```
Next, define this property in the application configuration to provide the bean name for the `recordMetadataChannel`.
```
spring.cloud.stream.kafka.bindings.supplier-out-0.producer.recordMetadataChannel: fooRecordChannel
```
At this point, successful send information will be sent to the `fooRecordChannel`.
You can write an `IntegrationFlow` as below to see the information.
```
@Bean
public IntegrationFlow integrationFlow() {
    return f -> f.channel("fooRecordChannel")
            .handle((payload, messageHeaders) -> payload);
}
```
In the `handle` method, the payload is what got sent to Kafka and the message headers contain a special key called `kafka_recordMetadata`.
Its value is a `RecordMetadata` that contains information about topic partition, current offset etc.
=== Adding custom header mapper in Kafka
==== Problem Statement
I have a Kafka producer application that sets some headers, but they are missing in the consumer application. Why is that?
==== Solution
Under normal circumstances, this should be fine.
Imagine you have the following producer.
```
@Bean
public Supplier<Message<String>> supply() {
return () -> MessageBuilder.withPayload("foo").setHeader("foo", "bar").build();
}
```
On the consumer side, you should still see the header "foo", and the following should not give you any issues.
```
@Bean
public Consumer<Message<String>> consume() {
    return s -> {
        final String foo = (String) s.getHeaders().get("foo");
        System.out.println(foo);
    };
}
```
If you provide a https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.3/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_binder_properties[custom header mapper] in the application, then this won't work.
Let's say you have an empty `KafkaHeaderMapper` in the application.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
        }

        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
        }
    };
}
```
If that is your implementation, then the `foo` header is lost on the consumer side.
Chances are that you have some logic inside those `KafkaHeaderMapper` methods.
In that case, you need something like the following to populate the `foo` header.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
            final String foo = (String) headers.get("foo");
            target.add("foo", foo.getBytes());
        }

        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
            final Header foo = source.lastHeader("foo");
            target.put("foo", new String(foo.value()));
        }
    };
}
```
That will properly populate the `foo` header from the producer to the consumer.
==== Special note on the id header
In Spring Cloud Stream, the `id` header is a special header, but some applications may want to have special custom id headers - something like `custom-id` or `ID` or `Id`.
The first one (`custom-id`) will propagate without any custom header mapper from producer to consumer.
However, if you produce with a variant of the framework-reserved `id` header - such as `ID`, `Id`, `iD`, etc. - then you will run into issues with the internals of the framework.
See this https://stackoverflow.com/questions/68412600/change-the-behaviour-in-spring-cloud-stream-make-header-matcher-case-sensitive[StackOverflow thread] for more context on this use case.
In that case, you must use a custom `KafkaHeaderMapper` to map the case-sensitive id header.
For example, let's say you have the following producer.
```
@Bean
public Supplier<Message<String>> supply() {
    return () -> MessageBuilder.withPayload("foo").setHeader("Id", "my-id").build();
}
```
The header `Id` above will be gone from the consuming side as it clashes with the framework `id` header.
You can provide a custom `KafkaHeaderMapper` to solve this issue.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper1() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
            final String myId = (String) headers.get("Id");
            target.add("Id", myId.getBytes());
        }

        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
            final Header id = source.lastHeader("Id");
            target.put("Id", new String(id.value()));
        }
    };
}
```
By doing this, both `id` and `Id` headers will be available from the producer to the consumer side.
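With that mapper in place, a consumer along the lines of the earlier examples can read the case-sensitive header directly. A minimal sketch:
```
@Bean
public Consumer<Message<String>> consume() {
    return message -> {
        // Both the framework `id` header and the custom `Id` header are now present.
        final String customId = (String) message.getHeaders().get("Id");
        System.out.println(customId);
    };
}
```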
=== Producing to multiple topics in transaction
==== Problem Statement
How do I produce transactional messages to multiple Kafka topics?
For more context, see this https://stackoverflow.com/questions/68928091/dlq-bounded-retry-and-eos-when-producing-to-multiple-topics-using-spring-cloud[StackOverflow question].
==== Solution
Use the Kafka binder's transaction support and then provide an `AfterRollbackProcessor`.
In order to produce to multiple topics, use the `StreamBridge` API.
Below are the code snippets for this:
```
@Autowired
StreamBridge bridge;

@Bean
Consumer<String> input() {
    return str -> {
        System.out.println(str);
        this.bridge.send("left", str.toUpperCase());
        this.bridge.send("right", str.toLowerCase());
        if (str.equals("Fail")) {
            throw new RuntimeException("test");
        }
    };
}

@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(BinderFactory binders) {
    return (container, dest, group) -> {
        ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
                MessageChannel.class)).getTransactionalProducerFactory();
        KafkaTemplate<byte[], byte[]> template = new KafkaTemplate<>(pf);
        DefaultAfterRollbackProcessor rollbackProcessor = rollbackProcessor(template);
        container.setAfterRollbackProcessor(rollbackProcessor);
    };
}

DefaultAfterRollbackProcessor rollbackProcessor(KafkaTemplate<byte[], byte[]> template) {
    return new DefaultAfterRollbackProcessor<>(
            new DeadLetterPublishingRecoverer(template), new FixedBackOff(2000L, 2L), template, true);
}
```
==== Required Configuration
```
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-
spring.cloud.stream.kafka.binder.required-acks=all
spring.cloud.stream.bindings.input-in-0.group=foo
spring.cloud.stream.bindings.input-in-0.destination=input
spring.cloud.stream.bindings.left.destination=left
spring.cloud.stream.bindings.right.destination=right
spring.cloud.stream.kafka.bindings.input-in-0.consumer.maxAttempts=1
```
In order to test, you can use the following:
```
@Bean
public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
    return args -> {
        System.in.read();
        template.send("input", "Fail".getBytes());
        template.send("input", "Good".getBytes());
    };
}
```
Some important notes:
Please ensure that you don't have any DLQ settings in the application configuration, as we configure the dead-letter topic manually (by default, records are published to a topic named `input.DLT`, based on the initial consumer function).
Also, set `maxAttempts` on the consumer binding to `1` in order to avoid retries by the binder.
In the example above, each record is then tried at most a total of three times (the initial attempt plus the two retries configured in the `FixedBackOff`).
See the https://stackoverflow.com/questions/68928091/dlq-bounded-retry-and-eos-when-producing-to-multiple-topics-using-spring-cloud[StackOverflow thread] for more details on how to test this code.
If you are using Spring Cloud Stream to test it by adding more consumer functions, make sure to set the `isolation-level` on the consumer binding to `read-committed`.
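One way to do that is through the raw Kafka consumer configuration on the binding. For example, assuming a hypothetical consumer binding named `check-in-0`:
```
spring.cloud.stream.kafka.bindings.check-in-0.consumer.configuration.isolation.level=read_committed
```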
This https://stackoverflow.com/questions/68941306/spring-cloud-stream-database-transaction-does-not-roll-back[StackOverflow thread] is also related to this discussion.
=== Pitfalls to avoid when running multiple pollable consumers
==== Problem Statement
How can I run multiple instances of a pollable consumer and generate a unique `client.id` for each instance?
==== Solution
Assuming that I have the following definition:
```
spring.cloud.stream.pollable-source: foo
spring.cloud.stream.bindings.foo-in-0.group: my-group
```
When running the application, the Kafka consumer generates a `client.id` (something like `consumer-my-group-1`).
For each running instance of the application, this `client.id` will be the same, causing unexpected issues.
In order to fix this, you can add the following property on each instance of the application:
```
spring.cloud.stream.kafka.bindings.foo-in-0.consumer.configuration.client.id=${client.id}
```
See this https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1139[GitHub issue] for more details.
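For reference, a pollable consumer for the `foo` source defined above might look like the following sketch (the endless polling loop and the sleep interval are illustrative only, not part of the original recipe):
```
@Bean
public ApplicationRunner poller(PollableMessageSource foo) {
    return args -> {
        while (true) {
            // poll() returns false when no record was available.
            boolean processed = foo.poll(message -> System.out.println(message.getPayload()));
            if (!processed) {
                Thread.sleep(500); // back off briefly before polling again
            }
        }
    };
}
```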

mvnw vendored

@@ -8,7 +8,7 @@
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
@@ -19,7 +19,7 @@
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# Maven2 Start Up Batch script
# Maven Start Up Batch script
#
# Required ENV vars:
# ------------------
@@ -114,7 +114,6 @@ if $mingw ; then
M2_HOME="`(cd "$M2_HOME"; pwd)`"
[ -n "$JAVA_HOME" ] &&
JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
# TODO classpath?
fi
if [ -z "$JAVA_HOME" ]; then
@@ -212,7 +211,11 @@ else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
fi
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
if [ -n "$MVNW_REPOURL" ]; then
jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
else
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
fi
while IFS="=" read key value; do
case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
esac
@@ -221,22 +224,38 @@ else
echo "Downloading from: $jarUrl"
fi
wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
if $cygwin; then
wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"`
fi
if command -v wget > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found wget ... using wget"
fi
wget "$jarUrl" -O "$wrapperJarPath"
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
wget "$jarUrl" -O "$wrapperJarPath"
else
wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath"
fi
elif command -v curl > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found curl ... using curl"
fi
curl -o "$wrapperJarPath" "$jarUrl"
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
curl -o "$wrapperJarPath" "$jarUrl" -f
else
curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Falling back to using Java to download"
fi
javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
# For Cygwin, switch paths to Windows format before running javac
if $cygwin; then
javaClass=`cygpath --path --windows "$javaClass"`
fi
if [ -e "$javaClass" ]; then
if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
@@ -277,6 +296,11 @@ if $cygwin; then
MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
fi
# Provide a "standardized" way to retrieve the CLI args that will
# work with both Windows and non-Windows executions.
MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
export MAVEN_CMD_LINE_ARGS
WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
exec "$JAVACMD" \

mvnw.cmd vendored Executable file → Normal file

@@ -1,161 +1,182 @@
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM https://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven2 Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a key stroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM set title of command window
title %0
@REM enable echoing my setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
@setlocal
set ERROR_CODE=0
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
@REM ==== END VALIDATION ====
:init
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
:endDetectBaseDir
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
FOR /F "tokens=1,2 delims==" %%A IN (%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties) DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
echo Found %WRAPPER_JAR%
) else (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
powershell -Command "(New-Object Net.WebClient).DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"
echo Finished downloading %WRAPPER_JAR%
)
@REM End of extension
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM http://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM set title of command window
title %0
@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
@setlocal
set ERROR_CODE=0
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
@REM ==== END VALIDATION ====
:init
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
:endDetectBaseDir
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
FOR /F "tokens=1,2 delims==" %%A IN ("%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties") DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
if "%MVNW_VERBOSE%" == "true" (
echo Found %WRAPPER_JAR%
)
) else (
if not "%MVNW_REPOURL%" == "" (
SET DOWNLOAD_URL="%MVNW_REPOURL%/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
)
if "%MVNW_VERBOSE%" == "true" (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
)
powershell -Command "&{"^
"$webclient = new-object System.Net.WebClient;"^
"if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {"^
"$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');"^
"}"^
"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"^
"}"
if "%MVNW_VERBOSE%" == "true" (
echo Finished downloading %WRAPPER_JAR%
)
)
@REM End of extension
@REM Provide a "standardized" way to retrieve the CLI args that will
@REM work with both Windows and non-Windows executions.
set MAVEN_CMD_LINE_ARGS=%*
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%

pom.xml

@@ -2,20 +2,29 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M1</version>
<version>4.0.0-M1</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>2.2.0.M2</version>
<version>4.0.0-M1</version>
<relativePath />
</parent>
<scm>
<url>https://github.com/spring-cloud/spring-cloud-stream-binder-kafka</url>
<connection>scm:git:git://github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</connection>
<developerConnection>
scm:git:ssh://git@github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</developerConnection>
<tag>HEAD</tag>
</scm>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.2.5.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.1.0.RELEASE</spring-integration-kafka.version>
<kafka.version>2.0.0</kafka.version>
<spring-cloud-stream.version>3.0.0.M1</spring-cloud-stream.version>
<java.version>17</java.version>
<spring-kafka.version>3.0.0-M1</spring-kafka.version>
<spring-integration-kafka.version>6.0.0-M1</spring-integration-kafka.version>
<kafka.version>3.0.0</kafka.version>
<spring-cloud-stream.version>4.0.0-M1</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
@@ -26,7 +35,7 @@
<module>spring-cloud-stream-binder-kafka-core</module>
<module>spring-cloud-stream-binder-kafka-streams</module>
<module>docs</module>
</modules>
</modules>
<dependencyManagement>
<dependencies>
@@ -50,13 +59,7 @@
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
@@ -92,7 +95,13 @@
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.13</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
<scope>test</scope>
<version>${kafka.version}</version>
@@ -112,21 +121,30 @@
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<version>${spring-cloud-stream.version}</version>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.7</version>
<!-- <version>1.7</version>-->
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
@@ -145,6 +163,16 @@
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
<compilerArgument>-parameters</compilerArgument>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
@@ -155,66 +183,91 @@
<profiles>
<profile>
<id>spring</id>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
<profile>
<id>coverage</id>
<activation>
<property>
<name>env.TRAVIS</name>
<value>true</value>
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.7.9</version>
<executions>
<execution>
<id>agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>report</id>
<phase>test</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</pluginRepository>
</pluginRepositories>
<reporting>
<plugins>
<plugin>


@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M1</version>
<version>4.0.0-M1</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>


@@ -1 +0,0 @@
provides: spring-cloud-starter-stream-kafka


@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M1</version>
<version>4.0.0-M1</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>


@@ -1,5 +1,5 @@
/*
* Copyright 2015-2018 the original author or authors.
* Copyright 2015-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,15 +16,21 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import jakarta.validation.constraints.AssertTrue;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotNull;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -32,10 +38,11 @@ import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;
import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.core.io.Resource;
import org.springframework.expression.Expression;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
@@ -52,6 +59,8 @@ import org.springframework.util.StringUtils;
* @author Gary Russell
* @author Rafal Zukowski
* @author Aldo Sinanaj
* @author Lukasz Kaminski
* @author Chukwubuikem Ume-Ugwa
*/
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.binder")
public class KafkaBinderConfigurationProperties {
@@ -64,8 +73,6 @@ public class KafkaBinderConfigurationProperties {
private final KafkaProperties kafkaProperties;
private String[] zkNodes = new String[] { "localhost" };
/**
* Arbitrary kafka properties that apply to both producers and consumers.
*/
@@ -81,48 +88,26 @@ public class KafkaBinderConfigurationProperties {
*/
private Map<String, String> producerProperties = new HashMap<>();
private String defaultZkPort = "2181";
private String[] brokers = new String[] { "localhost" };
private String defaultBrokerPort = "9092";
private String[] headers = new String[] {};
private int offsetUpdateTimeWindow = 10000;
private int offsetUpdateCount;
private int offsetUpdateShutdownTimeout = 2000;
private int maxWait = 100;
private boolean autoCreateTopics = true;
private boolean autoAlterTopics;
private boolean autoAddPartitions;
private int socketBufferSize = 2097152;
/**
* ZK session timeout in milliseconds.
*/
private int zkSessionTimeout = 10000;
/**
* ZK Connection timeout in milliseconds.
*/
private int zkConnectionTimeout = 10000;
private boolean considerDownWhenAnyPartitionHasNoLeader;
private String requiredAcks = "1";
private short replicationFactor = 1;
private int fetchSize = 1024 * 1024;
private short replicationFactor = -1;
private int minPartitionCount = 1;
private int queueSize = 8192;
/**
* Time to wait to get partition information in seconds; default 60.
*/
@@ -136,6 +121,21 @@ public class KafkaBinderConfigurationProperties {
*/
private String headerMapperBeanName;
/**
* Time between retries after AuthorizationException is caught in
* the ListenerContainer; default is null, which disables retries.
* For more info see: {@link org.springframework.kafka.listener.ConsumerProperties#setAuthorizationExceptionRetryInterval(java.time.Duration)}
*/
private Duration authorizationExceptionRetryInterval;
/**
* When a certificate store location is given as classpath URL (classpath:), then the binder
* moves the resource from the classpath location inside the JAR to a location on
* the filesystem. If this value is set, then this location is used, otherwise, the
* certificate file is copied to the directory returned by java.io.tmpdir.
*/
private String certificateStoreDirectory;
public KafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
Assert.notNull(kafkaProperties, "'kafkaProperties' cannot be null");
this.kafkaProperties = kafkaProperties;
@@ -149,19 +149,81 @@ public class KafkaBinderConfigurationProperties {
return this.transaction;
}
/**
* No longer used.
* @return the connection String
* @deprecated connection to zookeeper is no longer necessary
*/
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@Deprecated
public String getZkConnectionString() {
return toConnectionString(this.zkNodes, this.defaultZkPort);
public String getKafkaConnectionString() {
// We need to do a check on certificate file locations to see if they are given as classpath resources.
// If that is the case, then we will move them to a file system location and use those as the certificate locations.
// This is due to a limitation in Kafka itself in which it doesn't allow reading certificate resources from the classpath.
// See this: https://issues.apache.org/jira/browse/KAFKA-7685
// and this: https://cwiki.apache.org/confluence/display/KAFKA/KIP-398%3A+Support+reading+trust+store+from+classpath
moveCertsToFileSystemIfNecessary();
return toConnectionString(this.brokers, this.defaultBrokerPort);
}
public String getKafkaConnectionString() {
return toConnectionString(this.brokers, this.defaultBrokerPort);
private void moveCertsToFileSystemIfNecessary() {
try {
moveBrokerCertsIfApplicable();
moveSchemaRegistryCertsIfApplicable();
}
catch (Exception e) {
throw new IllegalStateException(e);
}
}
private void moveBrokerCertsIfApplicable() throws IOException {
final String trustStoreLocation = this.configuration.get("ssl.truststore.location");
if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.truststore.location", fileSystemLocation);
}
final String keyStoreLocation = this.configuration.get("ssl.keystore.location");
if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.keystore.location", fileSystemLocation);
}
}
private void moveSchemaRegistryCertsIfApplicable() throws IOException {
String trustStoreLocation = this.configuration.get("schema.registry.ssl.truststore.location");
if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("schema.registry.ssl.truststore.location", fileSystemLocation);
}
final String keyStoreLocation = this.configuration.get("schema.registry.ssl.keystore.location");
if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("schema.registry.ssl.keystore.location", fileSystemLocation);
}
}
private String moveCertToFileSystem(String classpathLocation, String fileSystemLocation) throws IOException {
File targetFile;
final String tempDir = System.getProperty("java.io.tmpdir");
Resource resource = new DefaultResourceLoader().getResource(classpathLocation);
if (StringUtils.hasText(fileSystemLocation)) {
final Path path = Paths.get(fileSystemLocation);
if (!Files.exists(path) || !Files.isDirectory(path) || !Files.isWritable(path)) {
logger.warn("The filesystem location to move the cert files (" + fileSystemLocation + ") " +
"is not found or a directory that is writable. The system temp folder (java.io.tmpdir) will be used instead.");
targetFile = new File(Paths.get(tempDir, resource.getFilename()).toString());
}
else {
// the given location is verified to be a writable directory.
targetFile = new File(Paths.get(fileSystemLocation, resource.getFilename()).toString());
}
}
else {
targetFile = new File(Paths.get(tempDir, resource.getFilename()).toString());
}
try (InputStream inputStream = resource.getInputStream()) {
Files.copy(inputStream, targetFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
}
return targetFile.getAbsolutePath();
}
public String getDefaultKafkaConnectionString() {
@@ -172,72 +234,6 @@ public class KafkaBinderConfigurationProperties {
return this.headers;
}
/**
* No longer used.
* @return the window.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateTimeWindow() {
return this.offsetUpdateTimeWindow;
}
/**
* No longer used.
* @return the count.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateCount() {
return this.offsetUpdateCount;
}
/**
* No longer used.
* @return the timeout.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateShutdownTimeout() {
return this.offsetUpdateShutdownTimeout;
}
/**
* Zookeeper nodes.
* @return the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public String[] getZkNodes() {
return this.zkNodes;
}
/**
* Zookeeper nodes.
* @param zkNodes the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkNodes(String... zkNodes) {
this.zkNodes = zkNodes;
}
/**
* Zookeeper port.
* @param defaultZkPort the port.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setDefaultZkPort(String defaultZkPort) {
this.defaultZkPort = defaultZkPort;
}
public String[] getBrokers() {
return this.brokers;
}
@@ -254,83 +250,6 @@ public class KafkaBinderConfigurationProperties {
this.headers = headers;
}
/**
* No longer used.
* @param offsetUpdateTimeWindow the window.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateTimeWindow(int offsetUpdateTimeWindow) {
this.offsetUpdateTimeWindow = offsetUpdateTimeWindow;
}
/**
* No longer used.
* @param offsetUpdateCount the count.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateCount(int offsetUpdateCount) {
this.offsetUpdateCount = offsetUpdateCount;
}
/**
* No longer used.
* @param offsetUpdateShutdownTimeout the timeout.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateShutdownTimeout(int offsetUpdateShutdownTimeout) {
this.offsetUpdateShutdownTimeout = offsetUpdateShutdownTimeout;
}
/**
* Zookeeper session timeout.
* @return the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkSessionTimeout() {
return this.zkSessionTimeout;
}
/**
* Zookeeper session timeout.
* @param zkSessionTimeout the timout
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkSessionTimeout(int zkSessionTimeout) {
this.zkSessionTimeout = zkSessionTimeout;
}
/**
* Zookeeper connection timeout.
* @return the timout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkConnectionTimeout() {
return this.zkConnectionTimeout;
}
/**
* Zookeeper connection timeout.
* @param zkConnectionTimeout the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkConnectionTimeout(int zkConnectionTimeout) {
this.zkConnectionTimeout = zkConnectionTimeout;
}
/**
* Converts an array of host values to a comma-separated String. It will append the
* default port value, if not already specified.
@@ -351,28 +270,6 @@ public class KafkaBinderConfigurationProperties {
return StringUtils.arrayToCommaDelimitedString(fullyFormattedHosts);
}
/**
* No longer used.
* @return the wait.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getMaxWait() {
return this.maxWait;
}
/**
* No longer user.
* @param maxWait the wait.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setMaxWait(int maxWait) {
this.maxWait = maxWait;
}
public String getRequiredAcks() {
return this.requiredAcks;
}
@@ -389,28 +286,6 @@ public class KafkaBinderConfigurationProperties {
this.replicationFactor = replicationFactor;
}
/**
* No longer used.
* @return the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getFetchSize() {
return this.fetchSize;
}
/**
* No longer used.
* @param fetchSize the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setFetchSize(int fetchSize) {
this.fetchSize = fetchSize;
}
public int getMinPartitionCount() {
return this.minPartitionCount;
}
@@ -427,28 +302,6 @@ public class KafkaBinderConfigurationProperties {
this.healthTimeout = healthTimeout;
}
/**
* No longer used.
* @return the queue size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getQueueSize() {
return this.queueSize;
}
/**
* No longer used.
* @param queueSize the queue size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setQueueSize(int queueSize) {
this.queueSize = queueSize;
}
public boolean isAutoCreateTopics() {
return this.autoCreateTopics;
}
@@ -457,6 +310,14 @@ public class KafkaBinderConfigurationProperties {
this.autoCreateTopics = autoCreateTopics;
}
public boolean isAutoAlterTopics() {
return autoAlterTopics;
}
public void setAutoAlterTopics(boolean autoAlterTopics) {
this.autoAlterTopics = autoAlterTopics;
}
public boolean isAutoAddPartitions() {
return this.autoAddPartitions;
}
@@ -465,30 +326,6 @@ public class KafkaBinderConfigurationProperties {
this.autoAddPartitions = autoAddPartitions;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @return the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public int getSocketBufferSize() {
return this.socketBufferSize;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @param socketBufferSize the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public void setSocketBufferSize(int socketBufferSize) {
this.socketBufferSize = socketBufferSize;
}
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -585,20 +422,10 @@ public class KafkaBinderConfigurationProperties {
private Map<String, Object> getConfigurationWithBootstrapServer(
Map<String, Object> configuration, String bootstrapServersConfig) {
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig))) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
else {
Object boostrapServersConfig = configuration.get(bootstrapServersConfig);
if (boostrapServersConfig instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) configuration
.get(bootstrapServersConfig);
if (bootStrapServers.size() == 1
&& bootStrapServers.get(0).equals("localhost:9092")) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
}
final String kafkaConnectionString = getKafkaConnectionString();
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig)) ||
!kafkaConnectionString.equals("localhost:9092")) {
configuration.put(bootstrapServersConfig, kafkaConnectionString);
}
return Collections.unmodifiableMap(configuration);
}
@@ -619,6 +446,30 @@ public class KafkaBinderConfigurationProperties {
this.headerMapperBeanName = headerMapperBeanName;
}
public Duration getAuthorizationExceptionRetryInterval() {
return authorizationExceptionRetryInterval;
}
public void setAuthorizationExceptionRetryInterval(Duration authorizationExceptionRetryInterval) {
this.authorizationExceptionRetryInterval = authorizationExceptionRetryInterval;
}
public boolean isConsiderDownWhenAnyPartitionHasNoLeader() {
return this.considerDownWhenAnyPartitionHasNoLeader;
}
public void setConsiderDownWhenAnyPartitionHasNoLeader(boolean considerDownWhenAnyPartitionHasNoLeader) {
this.considerDownWhenAnyPartitionHasNoLeader = considerDownWhenAnyPartitionHasNoLeader;
}
public String getCertificateStoreDirectory() {
return this.certificateStoreDirectory;
}
public void setCertificateStoreDirectory(String certificateStoreDirectory) {
this.certificateStoreDirectory = certificateStoreDirectory;
}
/**
* Domain class that models transaction capabilities in Kafka.
*/
@@ -800,16 +651,6 @@ public class KafkaBinderConfigurationProperties {
this.kafkaProducerProperties.setConfiguration(configuration);
}
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
return this.kafkaProducerProperties.getAdmin();
}
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.kafkaProducerProperties.setAdmin(admin);
}
public KafkaTopicProperties getTopic() {
return this.kafkaProducerProperties.getTopic();
}


@@ -26,10 +26,20 @@ import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
*/
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider {
/**
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
private KafkaConsumerProperties consumer = new KafkaConsumerProperties();
/**
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
private KafkaProducerProperties producer = new KafkaProducerProperties();
/**
* @return {@link KafkaConsumerProperties}
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
public KafkaConsumerProperties getConsumer() {
return this.consumer;
}
@@ -38,6 +48,10 @@ public class KafkaBindingProperties implements BinderSpecificPropertiesProvider
this.consumer = consumer;
}
/**
* @return {@link KafkaProducerProperties}
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
public KafkaProducerProperties getProducer() {
return this.producer;
}


@@ -1,5 +1,5 @@
/*
* Copyright 2016-2018 the original author or authors.
* Copyright 2016-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -19,7 +19,7 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.kafka.listener.ContainerProperties;
/**
* Extended consumer properties for Kafka binder.
@@ -86,56 +86,199 @@ public class KafkaConsumerProperties {
}
/**
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*/
@Deprecated
private boolean ackEachRecord;
/**
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
private boolean autoRebalanceEnabled = true;
/**
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment of the type org.springframework.kafka.support.Acknowledgment header
* is present in the inbound message. Applications may use this header for acknowledging messages.
*/
@Deprecated
private boolean autoCommitOffset = true;
/**
* Controlling the container acknowledgement mode. This is the preferred way to control the ack mode on the
* container instead of the deprecated autoCommitOffset property.
*/
private ContainerProperties.AckMode ackMode;
/**
* Flag to enable auto commit on error in polled consumers.
*/
private Boolean autoCommitOnError;
/**
* The starting offset for new groups. Allowed values: earliest and latest.
*/
private StartOffset startOffset;
/**
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
private boolean resetOffsets;
/**
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configured by setting the dlqName property.
*/
private boolean enableDlq;
/**
* The name of the DLQ topic to receive the error messages.
*/
private String dlqName;
/**
* Number of partitions to use on the DLQ.
*/
private Integer dlqPartitions;
/**
* Using this, DLQ-specific producer properties can be set.
* All the properties available through kafka producer properties can be set through this property.
*/
private KafkaProducerProperties dlqProducerProperties = new KafkaProducerProperties();
/**
* @deprecated No longer used by the binder.
*/
@Deprecated
private int recoveryInterval = 5000;
/**
* List of trusted packages to provide the header mapper.
*/
private String[] trustedPackages;
/**
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
private StandardHeaders standardHeaders = StandardHeaders.none;
/**
* The name of a bean that implements RecordMessageConverter.
*/
private String converterBeanName;
/**
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
private long idleEventInterval = 30_000;
/**
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
private boolean destinationIsPattern;
/**
* Map with a key/value pair containing generic Kafka consumer properties.
* In addition to having Kafka consumer properties, other configuration properties can be passed here.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Various topic level properties. @see {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Timeout used for polling in pollable consumers.
*/
private long pollTimeout = org.springframework.kafka.listener.ConsumerProperties.DEFAULT_POLL_TIMEOUT;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/**
* Set to false to NOT commit the offset of a successfully recovered record in the after rollback processor.
*/
private boolean txCommitRecovered = true;
/**
* CommonErrorHandler bean name per consumer binding.
* @since 3.2
*/
private String commonErrorHandlerBeanName;
/**
* @return if each record needs to be acknowledged.
*
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*
* @deprecated since 3.1 in favor of using {@link #ackMode}
*/
@Deprecated
public boolean isAckEachRecord() {
return this.ackEachRecord;
}
/**
* @param ackEachRecord
*
* @deprecated in favor of using {@link #ackMode}
*/
@Deprecated
public void setAckEachRecord(boolean ackEachRecord) {
this.ackEachRecord = ackEachRecord;
}
/**
* @return is autocommit offset enabled
*
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment of the type org.springframework.kafka.support.Acknowledgment header
* is present in the inbound message. Applications may use this header for acknowledging messages.
*
* @deprecated since 3.1 in favor of using {@link #ackMode}
*/
@Deprecated
public boolean isAutoCommitOffset() {
return this.autoCommitOffset;
}
/**
* @param autoCommitOffset
*
* @deprecated in favor of using {@link #ackMode}
*/
@Deprecated
public void setAutoCommitOffset(boolean autoCommitOffset) {
this.autoCommitOffset = autoCommitOffset;
}
/**
* @return Container's ack mode.
*/
public ContainerProperties.AckMode getAckMode() {
return this.ackMode;
}
public void setAckMode(ContainerProperties.AckMode ackMode) {
this.ackMode = ackMode;
}
/**
* @return start offset
*
* The starting offset for new groups. Allowed values: earliest and latest.
*/
public StartOffset getStartOffset() {
return this.startOffset;
}
@@ -144,6 +287,12 @@ public class KafkaConsumerProperties {
this.startOffset = startOffset;
}
/**
* @return if resetting offset is enabled
*
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
public boolean isResetOffsets() {
return this.resetOffsets;
}
@@ -152,6 +301,13 @@ public class KafkaConsumerProperties {
this.resetOffsets = resetOffsets;
}
/**
* @return is DLQ enabled.
*
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configured by setting the dlqName property.
*/
public boolean isEnableDlq() {
return this.enableDlq;
}
@@ -160,10 +316,20 @@ public class KafkaConsumerProperties {
this.enableDlq = enableDlq;
}
/**
* @return is autocommit on error in polled consumers.
*
* This property accessor is only used in polled consumers.
*/
public Boolean getAutoCommitOnError() {
return this.autoCommitOnError;
}
/**
*
* @param autoCommitOnError commit on error in polled consumers.
*
*/
public void setAutoCommitOnError(Boolean autoCommitOnError) {
this.autoCommitOnError = autoCommitOnError;
}
@@ -188,6 +354,12 @@ public class KafkaConsumerProperties {
this.recoveryInterval = recoveryInterval;
}
/**
* @return is auto rebalance enabled
*
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
public boolean isAutoRebalanceEnabled() {
return this.autoRebalanceEnabled;
}
@@ -196,6 +368,12 @@ public class KafkaConsumerProperties {
this.autoRebalanceEnabled = autoRebalanceEnabled;
}
/**
* @return a map of configuration
*
* Map of key/value pairs containing generic Kafka consumer properties.
* In addition to Kafka consumer properties, other configuration properties can be passed here.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -204,6 +382,11 @@ public class KafkaConsumerProperties {
this.configuration = configuration;
}
/**
* @return dlq name
*
* The name of the DLQ topic to receive the error messages.
*/
public String getDlqName() {
return this.dlqName;
}
@@ -212,6 +395,24 @@ public class KafkaConsumerProperties {
this.dlqName = dlqName;
}
/**
* @return number of partitions on the DLQ topic
*
* Number of partitions to use on the DLQ.
*/
public Integer getDlqPartitions() {
return this.dlqPartitions;
}
public void setDlqPartitions(Integer dlqPartitions) {
this.dlqPartitions = dlqPartitions;
}
/**
* @return trusted packages
*
* List of trusted packages to provide the header mapper.
*/
public String[] getTrustedPackages() {
return this.trustedPackages;
}
@@ -220,6 +421,12 @@ public class KafkaConsumerProperties {
this.trustedPackages = trustedPackages;
}
/**
* @return dlq producer properties
*
* Using this, DLQ-specific producer properties can be set.
* Any property available to the Kafka producer can be set through this property.
*/
public KafkaProducerProperties getDlqProducerProperties() {
return this.dlqProducerProperties;
}
@@ -228,6 +435,12 @@ public class KafkaConsumerProperties {
this.dlqProducerProperties = dlqProducerProperties;
}
/**
* @return standard headers
*
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
public StandardHeaders getStandardHeaders() {
return this.standardHeaders;
}
@@ -236,6 +449,11 @@ public class KafkaConsumerProperties {
this.standardHeaders = standardHeaders;
}
/**
* @return converter bean name
*
* The name of a bean that implements RecordMessageConverter.
*/
public String getConverterBeanName() {
return this.converterBeanName;
}
@@ -244,6 +462,11 @@ public class KafkaConsumerProperties {
this.converterBeanName = converterBeanName;
}
/**
* @return idle event interval
*
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
public long getIdleEventInterval() {
return this.idleEventInterval;
}
@@ -252,6 +475,11 @@ public class KafkaConsumerProperties {
this.idleEventInterval = idleEventInterval;
}
/**
* @return is destination given through a pattern
*
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
public boolean isDestinationIsPattern() {
return this.destinationIsPattern;
}
@@ -261,28 +489,10 @@ public class KafkaConsumerProperties {
}
/**
* No longer used; get properties such as this via {@link #getTopic()}.
* @return Kafka admin properties
* @deprecated No longer used
* @return topic properties
*
* Various topic level properties. See {@link KafkaTopicProperties} for more details.
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
// Temporary workaround to copy the topic properties to the admin one.
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
return kafkaAdminProperties;
}
@Deprecated
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.topic = admin;
}
public KafkaTopicProperties getTopic() {
return this.topic;
}
@@ -291,4 +501,45 @@ public class KafkaConsumerProperties {
this.topic = topic;
}
/**
* @return timeout in pollable consumers
*
* Timeout used for polling in pollable consumers.
*/
public long getPollTimeout() {
return this.pollTimeout;
}
public void setPollTimeout(long pollTimeout) {
this.pollTimeout = pollTimeout;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
public boolean isTxCommitRecovered() {
return this.txCommitRecovered;
}
public void setTxCommitRecovered(boolean txCommitRecovered) {
this.txCommitRecovered = txCommitRecovered;
}
public String getCommonErrorHandlerBeanName() {
return commonErrorHandlerBeanName;
}
public void setCommonErrorHandlerBeanName(String commonErrorHandlerBeanName) {
this.commonErrorHandlerBeanName = commonErrorHandlerBeanName;
}
}
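To see how a few of these consumer properties fit together, here is a minimal sketch (not part of this change set; the binding and topic names are illustrative) that sets them programmatically. In a real application they are normally bound from spring.cloud.stream.kafka.bindings.&lt;binding&gt;.consumer.* configuration keys.

import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.kafka.listener.ContainerProperties;

public class ConsumerPropertiesSketch {

    public static void main(String[] args) {
        KafkaConsumerProperties props = new KafkaConsumerProperties();
        // ackMode supersedes the deprecated ackEachRecord/autoCommitOffset pair.
        props.setAckMode(ContainerProperties.AckMode.MANUAL);
        // Route failed records to a DLQ and override the default error.<destination>.<group> name.
        props.setEnableDlq(true);
        props.setDlqName("error.orders.my-group"); // hypothetical topic name
        props.setDlqPartitions(1); // single-partition DLQ; see DlqPartitionFunction further below
    }
}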

View File

@@ -19,9 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import javax.validation.constraints.NotNull;
import jakarta.validation.constraints.NotNull;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.expression.Expression;
/**
@@ -34,22 +33,88 @@ import org.springframework.expression.Expression;
*/
public class KafkaProducerProperties {
/**
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
private int bufferSize = 16384;
/**
* Set the compression.type producer property. Supported values are none, gzip, snappy, lz4 and zstd.
* See {@link CompressionType} for more details.
*/
private CompressionType compressionType = CompressionType.none;
/**
* Whether the producer is synchronous.
*/
private boolean sync;
/**
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
private Expression sendTimeoutExpression;
/**
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
private int batchTimeout;
/**
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
private Expression messageKeyExpression;
/**
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
private String[] headerPatterns;
/**
* Map of key/value pairs containing generic Kafka producer properties.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Various topic level properties. See {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
private boolean useTopicHeader;
/**
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
private String recordMetadataChannel;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/**
* Timeout value, in seconds, for the duration to wait when closing the producer.
* If not set, this defaults to 30 seconds.
*/
private int closeTimeout;
/**
* Set to true to allow sending records without a transaction even when the binder is configured for transactions.
*/
private boolean allowNonTransactional;
/**
* @return buffer size
*
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
public int getBufferSize() {
return this.bufferSize;
}
@@ -58,6 +123,12 @@ public class KafkaProducerProperties {
this.bufferSize = bufferSize;
}
/**
* @return compression type {@link CompressionType}
*
* Set the compression.type producer property. Supported values are none, gzip, snappy, lz4 and zstd.
* See {@link CompressionType} for more details.
*/
@NotNull
public CompressionType getCompressionType() {
return this.compressionType;
@@ -67,6 +138,11 @@ public class KafkaProducerProperties {
this.compressionType = compressionType;
}
/**
* @return if synchronous sending is enabled
*
* Whether the producer is synchronous.
*/
public boolean isSync() {
return this.sync;
}
@@ -75,6 +151,25 @@ public class KafkaProducerProperties {
this.sync = sync;
}
/**
* @return timeout expression for send
*
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
public Expression getSendTimeoutExpression() {
return this.sendTimeoutExpression;
}
public void setSendTimeoutExpression(Expression sendTimeoutExpression) {
this.sendTimeoutExpression = sendTimeoutExpression;
}
/**
* @return batch timeout
*
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
public int getBatchTimeout() {
return this.batchTimeout;
}
@@ -83,6 +178,11 @@ public class KafkaProducerProperties {
this.batchTimeout = batchTimeout;
}
/**
* @return message key expression
*
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
public Expression getMessageKeyExpression() {
return this.messageKeyExpression;
}
@@ -91,6 +191,12 @@ public class KafkaProducerProperties {
this.messageKeyExpression = messageKeyExpression;
}
/**
* @return header patterns
*
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
public String[] getHeaderPatterns() {
return this.headerPatterns;
}
@@ -99,6 +205,11 @@ public class KafkaProducerProperties {
this.headerPatterns = headerPatterns;
}
/**
* @return map of configuration
*
* Map of key/value pairs containing generic Kafka producer properties.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -108,28 +219,10 @@ public class KafkaProducerProperties {
}
/**
* No longer used; get properties such as this via {@link #getTopic()}.
* @return Kafka admin properties
* @deprecated No longer used
* @return topic properties
*
* Various topic level properties. See {@link KafkaTopicProperties} for more details.
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
// Temporary workaround to copy the topic properties to the admin one.
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
return kafkaAdminProperties;
}
@Deprecated
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.topic = admin;
}
public KafkaTopicProperties getTopic() {
return this.topic;
}
@@ -138,6 +231,67 @@ public class KafkaProducerProperties {
this.topic = topic;
}
/**
* @return if using topic header
*
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
public boolean isUseTopicHeader() {
return this.useTopicHeader;
}
public void setUseTopicHeader(boolean useTopicHeader) {
this.useTopicHeader = useTopicHeader;
}
/**
* @return record metadata channel
*
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
public String getRecordMetadataChannel() {
return this.recordMetadataChannel;
}
public void setRecordMetadataChannel(String recordMetadataChannel) {
this.recordMetadataChannel = recordMetadataChannel;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
/**
* @return timeout in seconds for closing the producer
*/
public int getCloseTimeout() {
return this.closeTimeout;
}
public void setCloseTimeout(int closeTimeout) {
this.closeTimeout = closeTimeout;
}
public boolean isAllowNonTransactional() {
return this.allowNonTransactional;
}
public void setAllowNonTransactional(boolean allowNonTransactional) {
this.allowNonTransactional = allowNonTransactional;
}
/**
* Enumeration for compression types.
*/
@@ -163,11 +317,10 @@ public class KafkaProducerProperties {
*/
lz4,
// /** // TODO: uncomment and fix docs when kafka-clients 2.1.0 or newer is the
// default
// * zstd compression
// */
// zstd
/**
* zstd compression.
*/
zstd,
}
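As a companion sketch for the producer side (again illustrative, not part of the diff; the SpEL expression and values are assumptions), the newly uncommented zstd compression type can be set like any other property:

import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class ProducerPropertiesSketch {

    public static void main(String[] args) {
        KafkaProducerProperties props = new KafkaProducerProperties();
        props.setCompressionType(KafkaProducerProperties.CompressionType.zstd);
        // Honor the KafkaHeaders.TOPIC header on outbound messages.
        props.setUseTopicHeader(true);
        // Hypothetical expression: key each record by the payload's 'id' property.
        props.setMessageKeyExpression(new SpelExpressionParser().parseExpression("payload.id"));
    }
}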

View File

@@ -0,0 +1,31 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Map;
/**
* Customizer for configuring AdminClient.
*
* @author Soby Chacko
* @since 3.1.2
*/
@FunctionalInterface
public interface AdminClientConfigCustomizer {
void configure(Map<String, Object> adminClientProperties);
}
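A typical way to contribute this customizer is as a bean; judging by the constructor change below, it is applied after the boot and binder properties are normalized, so values set here take precedence. A hypothetical sketch:

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class AdminClientCustomizerConfiguration {

    @Bean
    AdminClientConfigCustomizer adminClientConfigCustomizer() {
        // Applied last by KafkaTopicProvisioner, so this wins over boot/binder settings.
        return adminClientProperties ->
                adminClientProperties.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "10000");
    }
}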

View File

@@ -18,20 +18,27 @@ package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.AlterConfigsResult;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.clients.admin.CreatePartitionsResult;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.DescribeConfigsResult;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.clients.admin.NewPartitions;
@@ -39,6 +46,7 @@ import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
@@ -81,9 +89,9 @@ public class KafkaTopicProvisioner implements
// @checkstyle:on
InitializingBean {
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
private static final Log logger = LogFactory.getLog(KafkaTopicProvisioner.class);
private final Log logger = LogFactory.getLog(getClass());
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
private final KafkaBinderConfigurationProperties configurationProperties;
@@ -93,14 +101,27 @@ public class KafkaTopicProvisioner implements
private RetryOperations metadataRetryOperations;
/**
* Create an instance.
* @param kafkaBinderConfigurationProperties the binder configuration properties.
* @param kafkaProperties the boot Kafka properties used to build the
* {@link AdminClient}.
* @param adminClientConfigCustomizer to customize the {@link AdminClient}.
*/
public KafkaTopicProvisioner(
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
KafkaProperties kafkaProperties,
AdminClientConfigCustomizer adminClientConfigCustomizer) {
Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
this.adminClientProperties = kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaBinderConfigurationProperties;
this.adminClientProperties = kafkaProperties.buildAdminProperties();
normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
kafkaBinderConfigurationProperties);
// If the application provides an AdminConfig customizer
// and overrides properties, that takes precedence.
if (adminClientConfigCustomizer != null) {
adminClientConfigCustomizer.configure(this.adminClientProperties);
}
}
/**
@@ -112,7 +133,7 @@ public class KafkaTopicProvisioner implements
}
@Override
public void afterPropertiesSet() throws Exception {
public void afterPropertiesSet() {
if (this.metadataRetryOperations == null) {
RetryTemplate retryTemplate = new RetryTemplate();
@@ -133,29 +154,35 @@ public class KafkaTopicProvisioner implements
public ProducerDestination provisionProducerDestination(final String name,
ExtendedProducerProperties<KafkaProducerProperties> properties) {
if (this.logger.isInfoEnabled()) {
this.logger.info("Using kafka topic for outbound: " + name);
if (logger.isInfoEnabled()) {
logger.info("Using kafka topic for outbound: " + name);
}
KafkaTopicUtils.validateTopicName(name);
try (AdminClient adminClient = AdminClient.create(this.adminClientProperties)) {
try (AdminClient adminClient = createAdminClient()) {
createTopic(adminClient, name, properties.getPartitionCount(), false,
properties.getExtension().getTopic());
int partitions = 0;
Map<String, TopicDescription> topicDescriptions = new HashMap<>();
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
Map<String, TopicDescription> topicDescriptions = null;
try {
topicDescriptions = all.get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception ex) {
throw new ProvisioningException(
"Problems encountered with partitions finding", ex);
}
TopicDescription topicDescription = topicDescriptions.get(name);
this.metadataRetryOperations.execute(context -> {
try {
if (logger.isDebugEnabled()) {
logger.debug("Attempting to retrieve the description for the topic: " + name);
}
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
topicDescriptions.putAll(all.get(this.operationTimeout, TimeUnit.SECONDS));
}
catch (Exception ex) {
throw new ProvisioningException("Problems encountered with partitions finding for: " + name, ex);
}
return null;
});
}
TopicDescription topicDescription = topicDescriptions.get(name);
if (topicDescription != null) {
partitions = topicDescription.partitions().size();
}
return new KafkaProducerDestination(name, partitions);
@@ -185,8 +212,8 @@ public class KafkaTopicProvisioner implements
if (properties.getExtension().isDestinationIsPattern()) {
Assert.isTrue(!properties.getExtension().isEnableDlq(),
"enableDLQ is not allowed when listening to topic patterns");
if (this.logger.isDebugEnabled()) {
this.logger.debug("Listening to a topic pattern - " + name
if (logger.isDebugEnabled()) {
logger.debug("Listening to a topic pattern - " + name
+ " - no provisioning performed");
}
return new KafkaConsumerDestination(name);
@@ -222,7 +249,7 @@ public class KafkaTopicProvisioner implements
}
}
catch (Exception ex) {
throw new ProvisioningException("provisioning exception", ex);
throw new ProvisioningException("Provisioning exception encountered for " + name, ex);
}
}
}
@@ -242,14 +269,14 @@ public class KafkaTopicProvisioner implements
* @param bootProps the boot kafka properties.
* @param binderProps the binder kafka properties.
*/
private void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
public static void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
KafkaProperties bootProps, KafkaBinderConfigurationProperties binderProps) {
// First deal with the outlier
String kafkaConnectionString = binderProps.getKafkaConnectionString();
if (ObjectUtils
.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
|| !kafkaConnectionString
.equals(binderProps.getDefaultKafkaConnectionString())) {
.equals(binderProps.getDefaultKafkaConnectionString())) {
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
@@ -263,8 +290,8 @@ public class KafkaTopicProvisioner implements
}
if (adminConfigNames.contains(key)) {
Object replaced = adminProps.put(key, value);
if (replaced != null && this.logger.isDebugEnabled()) {
this.logger.debug("Overrode boot property: [" + key + "], from: ["
if (replaced != null && KafkaTopicProvisioner.logger.isDebugEnabled()) {
KafkaTopicProvisioner.logger.debug("Overrode boot property: [" + key + "], from: ["
+ replaced + "] to: [" + value + "]");
}
}
@@ -274,21 +301,26 @@ public class KafkaTopicProvisioner implements
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name,
String group, ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
if (properties.getExtension().isEnableDlq() && !anonymous) {
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName())
? properties.getExtension().getDlqName()
: "error." + name + "." + group;
int dlqPartitions = properties.getExtension().getDlqPartitions() == null
? partitions
: properties.getExtension().getDlqPartitions();
try {
createTopicAndPartitions(adminClient, dlqTopic, partitions,
final KafkaProducerProperties dlqProducerProperties = properties.getExtension().getDlqProducerProperties();
createTopicAndPartitions(adminClient, dlqTopic, dlqPartitions,
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
dlqProducerProperties.getTopic());
}
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
throw new ProvisioningException("provisioning exception", throwable);
throw new ProvisioningException("Provisioning exception encountered for " + name, throwable);
}
}
return new KafkaConsumerDestination(name, partitions, dlqTopic);
@@ -312,7 +344,7 @@ public class KafkaTopicProvisioner implements
else {
// TODO:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
throw new ProvisioningException("Provisioning exception", throwable);
throw new ProvisioningException("Provisioning exception encountered for " + name, throwable);
}
}
}
@@ -326,7 +358,7 @@ public class KafkaTopicProvisioner implements
tolerateLowerPartitionsOnBroker, properties);
}
else {
this.logger.info("Auto creation of topics is disabled.");
logger.info("Auto creation of topics is disabled.");
}
}
@@ -350,13 +382,17 @@ public class KafkaTopicProvisioner implements
Set<String> names = namesFutures.get(this.operationTimeout, TimeUnit.SECONDS);
if (names.contains(topicName)) {
//check if topic.properties are different from Topic Configuration in Kafka
if (this.configurationProperties.isAutoAlterTopics()) {
alterTopicConfigsIfNecessary(adminClient, topicName, topicProperties);
}
// only consider minPartitionCount for resizing if autoAddPartitions is true
int effectivePartitionCount = this.configurationProperties
.isAutoAddPartitions()
? Math.max(
this.configurationProperties.getMinPartitionCount(),
partitionCount)
: partitionCount;
? Math.max(
this.configurationProperties.getMinPartitionCount(),
partitionCount)
: partitionCount;
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
KafkaFuture<Map<String, TopicDescription>> topicDescriptionsFuture = describeTopicsResult
@@ -373,7 +409,7 @@ public class KafkaTopicProvisioner implements
partitions.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
else if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: "
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead." + "There will be "
@@ -409,7 +445,7 @@ public class KafkaTopicProvisioner implements
topicProperties.getReplicationFactor() != null
? topicProperties.getReplicationFactor()
: this.configurationProperties
.getReplicationFactor());
.getReplicationFactor());
}
if (topicProperties.getProperties().size() > 0) {
newTopic.configs(topicProperties.getProperties());
@@ -422,18 +458,18 @@ public class KafkaTopicProvisioner implements
catch (Exception ex) {
if (ex instanceof ExecutionException) {
if (ex.getCause() instanceof TopicExistsException) {
if (this.logger.isWarnEnabled()) {
this.logger.warn("Attempt to create topic: " + topicName
if (logger.isWarnEnabled()) {
logger.warn("Attempt to create topic: " + topicName
+ ". Topic already exists.");
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
@@ -442,6 +478,51 @@ public class KafkaTopicProvisioner implements
}
}
private void alterTopicConfigsIfNecessary(AdminClient adminClient,
String topicName,
KafkaTopicProperties topicProperties)
throws InterruptedException, ExecutionException, java.util.concurrent.TimeoutException {
ConfigResource topicConfigResource = new ConfigResource(ConfigResource.Type.TOPIC, topicName);
DescribeConfigsResult describeConfigsResult = adminClient
.describeConfigs(Collections.singletonList(topicConfigResource));
KafkaFuture<Map<ConfigResource, Config>> topicConfigurationFuture = describeConfigsResult.all();
Map<ConfigResource, Config> topicConfigMap = topicConfigurationFuture
.get(this.operationTimeout, TimeUnit.SECONDS);
Config config = topicConfigMap.get(topicConfigResource);
final List<AlterConfigOp> updatedConfigEntries = topicProperties.getProperties().entrySet().stream()
.filter(propertiesEntry -> {
// Property is new and should be added
if (config.get(propertiesEntry.getKey()) == null) {
return true;
}
else {
// Property changed and should be updated
return !config.get(propertiesEntry.getKey()).value().equals(propertiesEntry.getValue());
}
})
.map(propertyEntry -> new ConfigEntry(propertyEntry.getKey(), propertyEntry.getValue()))
.map(configEntry -> new AlterConfigOp(configEntry, AlterConfigOp.OpType.SET))
.collect(Collectors.toList());
if (!updatedConfigEntries.isEmpty()) {
if (logger.isDebugEnabled()) {
logger.debug("Attempting to alter configs " + updatedConfigEntries + " for the topic:" + topicName);
}
Map<ConfigResource, Collection<AlterConfigOp>> alterConfigForTopics = new HashMap<>();
alterConfigForTopics.put(topicConfigResource, updatedConfigEntries);
AlterConfigsResult alterConfigsResult = adminClient.incrementalAlterConfigs(alterConfigForTopics);
alterConfigsResult.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
}
/**
* Check that the topic has the expected number of partitions and return the partition information.
* @param partitionCount the expected count.
* @param tolerateLowerPartitionsOnBroker if false, throw an exception if there are not enough partitions.
* @param callable a Callable that will provide the partition information.
* @param topicName the topic name.
* @return the partition information.
*/
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable, final String topicName) {
@@ -461,7 +542,7 @@ public class KafkaTopicProvisioner implements
if (ex instanceof UnknownTopicOrPartitionException) {
throw ex;
}
this.logger.error("Failed to obtain partition information", ex);
logger.error("Failed to obtain partition information", ex);
}
// In some cases, the above partition query may not throw an UnknownTopic..Exception for various reasons.
// For that, we are forcing another query to ensure that the topic is present on the server.
@@ -469,7 +550,7 @@ public class KafkaTopicProvisioner implements
try (AdminClient adminClient = AdminClient
.create(this.adminClientProperties)) {
final DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
.describeTopics(Collections.singletonList(topicName));
describeTopicsResult.all().get();
}
@@ -488,7 +569,7 @@ public class KafkaTopicProvisioner implements
int partitionSize = CollectionUtils.isEmpty(partitions) ? 0 : partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: "
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead." + "There will be "
@@ -506,7 +587,7 @@ public class KafkaTopicProvisioner implements
});
}
catch (Exception ex) {
this.logger.error("Cannot initialize Binder", ex);
logger.error("Cannot initialize Binder", ex);
throw new BinderException("Cannot initialize binder:", ex);
}
}
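For reference, the incremental-alter pattern used by alterTopicConfigsIfNecessary above reduces to the following standalone sketch (the broker address, topic name, and retention value are assumptions):

import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterTopicConfigSketch {

    public static void main(String[] args) throws Exception {
        Map<String, Object> props = Collections.singletonMap(
                AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "orders"); // assumed topic
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "86400000"), AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> ops = new HashMap<>();
            ops.put(topic, Collections.singletonList(setRetention));
            // SET only touches the named config; other topic configs are left as-is.
            admin.incrementalAlterConfigs(ops).all().get(30, TimeUnit.SECONDS);
        }
    }
}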

View File

@@ -0,0 +1,35 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.utils;
import java.util.function.BiFunction;
import org.apache.kafka.clients.consumer.ConsumerRecord;
/**
* A {@link BiFunction} extension for defining DLQ destination resolvers.
*
* The BiFunction takes the {@link ConsumerRecord} and the exception as inputs
* and returns the name of the topic to use as the DLQ.
*
* @author Soby Chacko
* @since 3.0.9
*/
@FunctionalInterface
public interface DlqDestinationResolver extends BiFunction<ConsumerRecord<?, ?>, Exception, String> {
}
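Since the resolver is a plain BiFunction, a lambda bean suffices. A hypothetical resolver that fans records out by exception type (the topic suffixes are assumptions):

import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class DlqDestinationResolverConfiguration {

    @Bean
    DlqDestinationResolver dlqDestinationResolver() {
        // Send validation failures and all other errors to separate DLQ topics.
        return (consumerRecord, exception) ->
                exception instanceof IllegalArgumentException
                        ? consumerRecord.topic() + ".invalid"
                        : consumerRecord.topic() + ".dlq";
    }
}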

View File

@@ -0,0 +1,76 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.utils;
import org.apache.commons.logging.Log;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.lang.Nullable;
/**
* A TriFunction that takes a consumer group, consumer record, and throwable and returns
* the partition of the dead letter topic to publish to. Returning {@code null} means Kafka
* will choose the partition.
*
* @author Gary Russell
* @since 3.0
*
*/
@FunctionalInterface
public interface DlqPartitionFunction {
/**
* Returns the same partition as the original record.
*/
DlqPartitionFunction ORIGINAL_PARTITION = (group, rec, ex) -> rec.partition();
/**
* Returns 0.
*/
DlqPartitionFunction PARTITION_ZERO = (group, rec, ex) -> 0;
/**
* Apply the function.
* @param group the consumer group.
* @param record the consumer record.
* @param throwable the exception.
* @return the DLQ partition, or null.
*/
@Nullable
Integer apply(String group, ConsumerRecord<?, ?> record, Throwable throwable);
/**
* Determine the fallback function to use based on the dlq partition count if no
* {@link DlqPartitionFunction} bean is provided.
* @param dlqPartitions the partition count.
* @param logger the logger.
* @return the fallback.
*/
static DlqPartitionFunction determineFallbackFunction(@Nullable Integer dlqPartitions, Log logger) {
if (dlqPartitions == null) {
return ORIGINAL_PARTITION;
}
else if (dlqPartitions > 1) {
logger.error("'dlqPartitions' is > 1 but a custom DlqPartitionFunction bean is not provided");
return ORIGINAL_PARTITION;
}
else {
return PARTITION_ZERO;
}
}
}
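For example, a hypothetical function that folds the original partition into a smaller DLQ topic (the partition count is an assumption that should match the configured dlqPartitions):

import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class DlqPartitionFunctionConfiguration {

    private static final int DLQ_PARTITIONS = 4; // assumed to match the configured dlqPartitions

    @Bean
    DlqPartitionFunction dlqPartitionFunction() {
        // Keeps records from the same source partition together in the DLQ.
        return (group, consumerRecord, throwable) -> consumerRecord.partition() % DLQ_PARTITIONS;
    }
}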

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,10 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.assertj.core.util.Files;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
@@ -105,4 +107,57 @@ public class KafkaBinderConfigurationPropertiesTest {
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void testCertificateFilesAreConvertedToAbsolutePathsFromClassPathResources() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
final Map<String, String> configuration = kafkaBinderConfigurationProperties.getConfiguration();
configuration.put("ssl.truststore.location", "classpath:testclient.truststore");
configuration.put("ssl.keystore.location", "classpath:testclient.keystore");
kafkaBinderConfigurationProperties.getKafkaConnectionString();
assertThat(configuration.get("ssl.truststore.location"))
.isEqualTo(Paths.get(System.getProperty("java.io.tmpdir"), "testclient.truststore").toString());
assertThat(configuration.get("ssl.keystore.location"))
.isEqualTo(Paths.get(System.getProperty("java.io.tmpdir"), "testclient.keystore").toString());
}
@Test
public void testCertificateFilesAreConvertedToGivenAbsolutePathsFromClassPathResources() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
final Map<String, String> configuration = kafkaBinderConfigurationProperties.getConfiguration();
configuration.put("ssl.truststore.location", "classpath:testclient.truststore");
configuration.put("ssl.keystore.location", "classpath:testclient.keystore");
kafkaBinderConfigurationProperties.setCertificateStoreDirectory("target");
kafkaBinderConfigurationProperties.getKafkaConnectionString();
assertThat(configuration.get("ssl.truststore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.truststore").toString());
assertThat(configuration.get("ssl.keystore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.keystore").toString());
}
@Test
public void testCertificateFilesAreMovedForSchemaRegistryConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
final Map<String, String> configuration = kafkaBinderConfigurationProperties.getConfiguration();
configuration.put("schema.registry.ssl.truststore.location", "classpath:testclient.truststore");
configuration.put("schema.registry.ssl.keystore.location", "classpath:testclient.keystore");
kafkaBinderConfigurationProperties.setCertificateStoreDirectory("target");
kafkaBinderConfigurationProperties.getKafkaConnectionString();
assertThat(configuration.get("schema.registry.ssl.truststore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.truststore").toString());
assertThat(configuration.get("schema.registry.ssl.keystore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.keystore").toString());
}
}

View File

@@ -42,6 +42,8 @@ import static org.assertj.core.api.Assertions.fail;
*/
public class KafkaTopicProvisionerTests {
AdminClientConfigCustomizer adminClientConfigCustomizer = adminClientProperties -> adminClientProperties.put("foo", "bar");
@SuppressWarnings("rawtypes")
@Test
public void bootPropertiesOverriddenExceptServers() throws Exception {
@@ -58,7 +60,7 @@ public class KafkaTopicProvisionerTests {
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:9092");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
bootConfig, adminClientConfigCustomizer);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
@@ -67,6 +69,7 @@ public class KafkaTopicProvisionerTests {
assertThat(
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
.isEqualTo("localhost:1234");
assertThat(configs.get("foo")).isEqualTo("bar");
adminClient.close();
}
@@ -86,7 +89,7 @@ public class KafkaTopicProvisionerTests {
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:1234");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
bootConfig, adminClientConfigCustomizer);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
@@ -106,7 +109,7 @@ public class KafkaTopicProvisionerTests {
binderConfig.getConfiguration().put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
"localhost:1234");
try {
new KafkaTopicProvisioner(binderConfig, bootConfig);
new KafkaTopicProvisioner(binderConfig, bootConfig, adminClientConfigCustomizer);
fail("Expected illegal state");
}
catch (IllegalStateException e) {

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M1</version>
<version>4.0.0-M1</version>
</parent>
<properties>
@@ -54,13 +54,6 @@
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
</dependency>
<!-- Added back since Kafka still depends on it, but it has been removed by Boot due to EOL -->
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
@@ -71,54 +64,15 @@
<artifactId>spring-boot-autoconfigure-processor</artifactId>
<optional>true</optional>
</dependency>
<!-- Following dependencies are needed to support Kafka 1.1.0 client-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
<artifactId>kafka_2.13</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<!-- Following dependencies are only provided for testing and won't be packaged with the binder apps-->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.avro</groupId>
<artifactId>avro</artifactId>
<version>${avro.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.avro</groupId>
<artifactId>avro-maven-plugin</artifactId>
<version>${avro.version}</version>
<executions>
<execution>
<phase>generate-test-sources</phase>
<goals>
<goal>schema</goal>
</goals>
<configuration>
<outputDirectory>${project.basedir}/target/generated-test-sources</outputDirectory>
<testOutputDirectory>${project.basedir}/target/generated-test-sources</testOutputDirectory>
<testSourceDirectory>${project.basedir}/src/test/resources/avro</testSourceDirectory>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

View File

@@ -0,0 +1,634 @@
/*
* Copyright 2019-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import java.util.regex.Pattern;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.processor.api.RecordMetadata;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.boot.context.properties.bind.BindContext;
import org.springframework.boot.context.properties.bind.BindHandler;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.PropertySourcesPlaceholdersResolver;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.integration.support.utils.IntegrationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
* @since 3.0.0
*/
public abstract class AbstractKafkaStreamsBinderProcessor implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(AbstractKafkaStreamsBinderProcessor.class);
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final CleanupConfig cleanupConfig;
private final KeyValueSerdeResolver keyValueSerdeResolver;
protected ConfigurableApplicationContext applicationContext;
public AbstractKafkaStreamsBinderProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver, CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.cleanupConfig = cleanupConfig;
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
protected Topology.AutoOffsetReset getAutoOffsetReset(String inboundName, KafkaStreamsConsumerProperties extendedConsumerProperties) {
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties
.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
AbstractKafkaStreamsBinderProcessor.LOG.warn("Detected resetOffsets configured on binding "
+ inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
return autoOffsetReset;
}
@SuppressWarnings("unchecked")
protected void handleKTableGlobalKTableInputs(Object[] arguments, int index, String input, Class<?> parameterType, Object targetBean,
StreamsBuilderFactoryBean streamsBuilderFactoryBean, StreamsBuilder streamsBuilder,
KafkaStreamsConsumerProperties extendedConsumerProperties,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (firstBuild) {
addStateStoreBeans(streamsBuilder);
}
if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
KTable<?, ?> table = getKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper =
(KTableBoundElementFactory.KTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
GlobalKTable<?, ?> table = getGlobalKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
}
@SuppressWarnings({ "unchecked" })
protected StreamsBuilderFactoryBean buildStreamsBuilderAndRetrieveConfig(String beanNamePostPrefix,
ApplicationContext applicationContext, String inboundName,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanConfigurer customizer,
ConfigurableEnvironment environment, BindingProperties bindingProperties) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
// Use a copy because the global configuration will be shared by multiple processors.
Map<String, Object> streamConfiguration = new HashMap<>(streamConfigGlobalProperties);
if (kafkaStreamsBinderConfigurationProperties != null) {
final Map<String, KafkaStreamsBinderConfigurationProperties.Functions> functionConfigMap = kafkaStreamsBinderConfigurationProperties.getFunctions();
if (!CollectionUtils.isEmpty(functionConfigMap)) {
final KafkaStreamsBinderConfigurationProperties.Functions functionConfig = functionConfigMap.get(beanNamePostPrefix);
if (functionConfig != null) {
final Map<String, String> functionSpecificConfig = functionConfig.getConfiguration();
if (!CollectionUtils.isEmpty(functionSpecificConfig)) {
streamConfiguration.putAll(functionSpecificConfig);
}
String applicationId = functionConfig.getApplicationId();
if (!StringUtils.isEmpty(applicationId)) {
streamConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
}
}
}
final MutablePropertySources propertySources = environment.getPropertySources();
if (!StringUtils.isEmpty(bindingProperties.getBinder())) {
final KafkaStreamsBinderConfigurationProperties multiBinderKafkaStreamsBinderConfigurationProperties =
applicationContext.getBean(bindingProperties.getBinder() + "-KafkaStreamsBinderConfigurationProperties", KafkaStreamsBinderConfigurationProperties.class);
String connectionString = multiBinderKafkaStreamsBinderConfigurationProperties.getKafkaConnectionString();
if (StringUtils.isEmpty(connectionString)) {
connectionString = (String) propertySources.get(bindingProperties.getBinder() + "-kafkaStreamsBinderEnv").getProperty("spring.cloud.stream.kafka.binder.brokers");
}
streamConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, connectionString);
String binderProvidedApplicationId = multiBinderKafkaStreamsBinderConfigurationProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
streamConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG,
binderProvidedApplicationId);
}
if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndContinue) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndFail) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
SendToDlqAndContinue sendToDlqAndContinue = applicationContext.getBean(SendToDlqAndContinue.class);
streamConfiguration.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
}
if (!ObjectUtils.isEmpty(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration())) {
streamConfiguration.putAll(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration());
}
if (!streamConfiguration.containsKey(StreamsConfig.REPLICATION_FACTOR_CONFIG)) {
streamConfiguration.put(StreamsConfig.REPLICATION_FACTOR_CONFIG,
(int) multiBinderKafkaStreamsBinderConfigurationProperties.getReplicationFactor());
}
}
//this is only used primarily for StreamListener based processors. Although in theory, functions can use it,
//it is ideal for functions to use the approach used in the above if statement by using a property like
//spring.cloud.stream.kafka.streams.binder.functions.process.configuration.num.threads (assuming that process is the function name).
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
Map<String, String> bindingConfig = extendedConsumerProperties.getConfiguration();
Assert.state(!bindingConfig.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG),
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG + " cannot be overridden at the binding level; "
+ "use multiple binders instead");
// We will only add the per binding configuration to the current streamConfiguration and not the global one.
streamConfiguration.putAll(bindingConfig);
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
// We provide this for backward compatibility with StreamListener based processors.
// For function based processors see the approach used above
// (i.e. use a property like spring.cloud.stream.kafka.streams.binder.functions.process.applicationId).
if (StringUtils.hasText(bindingLevelApplicationId)) {
streamConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG,
bindingLevelApplicationId);
}
//If the application id is not set by any mechanism, then generate it.
streamConfiguration.computeIfAbsent(StreamsConfig.APPLICATION_ID_CONFIG,
k -> {
String generatedApplicationID = beanNamePostPrefix + "-applicationId";
LOG.info("Binder Generated Kafka Streams Application ID: " + generatedApplicationID);
LOG.info("Use the binder generated application ID only for development and testing. ");
LOG.info("For production deployments, please consider explicitly setting an application ID using a configuration property.");
LOG.info("The generated applicationID is static and will be preserved over application restarts.");
return generatedApplicationID;
});
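// Note on the effective precedence above: a binding-level applicationId overwrites any
// function-scoped or binder-provided value already placed in streamConfiguration, and the
// generated "<bean name>-applicationId" fallback is used only when no other mechanism
// supplied one.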
handleConcurrency(applicationContext, inboundName, streamConfiguration);
// Override deserialization exception handlers per binding
final DeserializationExceptionHandler deserializationExceptionHandler =
extendedConsumerProperties.getDeserializationExceptionHandler();
if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndFail) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndContinue) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.sendToDlq) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
streamConfiguration.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER,
applicationContext.getBean(SendToDlqAndContinue.class));
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.skipAndContinue) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
SkipAndContinueExceptionHandler.class);
}
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfiguration);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilderFactoryBean.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilderFactoryBean.getClass(),
() -> streamsBuilderFactoryBean)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + beanNamePostPrefix, streamsBuilderBeanDefinition);
extendedConsumerProperties.setApplicationId((String) streamConfiguration.get(StreamsConfig.APPLICATION_ID_CONFIG));
final StreamsBuilderFactoryBean streamsBuilderFactoryBeanFromContext = applicationContext.getBean(
"&stream-builder-" + beanNamePostPrefix, StreamsBuilderFactoryBean.class);
//At this point, the StreamsBuilderFactoryBean is created. If users call getObject()
//in the customizer, that grants access to the StreamsBuilder.
if (customizer != null) {
customizer.configure(streamsBuilderFactoryBean);
}
return streamsBuilderFactoryBeanFromContext;
}
private void handleConcurrency(ApplicationContext applicationContext, String inboundName,
Map<String, Object> streamConfiguration) {
// This rebinding is necessary to capture the concurrency explicitly set by the application.
// This is added to fix this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/899
org.springframework.boot.context.properties.bind.Binder explicitConcurrencyResolver =
new org.springframework.boot.context.properties.bind.Binder(ConfigurationPropertySources.get(applicationContext.getEnvironment()),
new PropertySourcesPlaceholdersResolver(applicationContext.getEnvironment()),
IntegrationUtils.getConversionService(this.applicationContext.getBeanFactory()), null);
boolean[] concurrencyExplicitlyProvided = new boolean[] {false};
BindHandler handler = new BindHandler() {
@Override
public Object onSuccess(ConfigurationPropertyName name, Bindable<?> target,
BindContext context, Object result) {
if (!concurrencyExplicitlyProvided[0]) {
concurrencyExplicitlyProvided[0] = name.getLastElement(ConfigurationPropertyName.Form.UNIFORM)
.equals("concurrency") &&
ConfigurationPropertyName.of("spring.cloud.stream.bindings." + inboundName + ".consumer").isAncestorOf(name);
}
return result;
}
};
//Re-bind spring.cloud.stream properties to check if the application explicitly provided concurrency.
try {
explicitConcurrencyResolver.bind("spring.cloud.stream",
Bindable.ofInstance(new BindingServiceProperties()), handler);
}
catch (Exception e) {
// Ignore this exception
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
// Concurrency will be mapped to num.stream.threads.
// This conditional also takes into account explicit concurrency settings left at the default value of 1
// by the application to address concurrency behavior in applications with multiple processors.
// See this GH issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844
if (concurrency >= 1 && concurrencyExplicitlyProvided[0]) {
streamConfiguration.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
}
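// Illustration (hypothetical binding name "process-in-0"): setting
//   spring.cloud.stream.bindings.process-in-0.consumer.concurrency=3
// is detected by the re-binding above as explicitly provided, so num.stream.threads
// becomes 3; leaving the property unset keeps the Kafka Streams default.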
protected Serde<?> getValueSerde(String inboundName, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, ResolvableType resolvableType) {
if (bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding()) {
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
return this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), kafkaStreamsConsumerProperties, resolvableType);
}
else {
return Serdes.ByteArray();
}
}
@SuppressWarnings({"rawtypes", "unchecked"})
protected KStream<?, ?> getKStream(String inboundName, BindingProperties bindingProperties, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (firstBuild) {
addStateStoreBeans(streamsBuilder);
}
final boolean nativeDecoding = this.bindingServiceProperties
.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName
+ ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
KStream<?, ?> stream;
final Serde<?> valueSerdeToUse = StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes()) ?
new Serdes.BytesSerde() : valueSerde;
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerdeToUse, autoOffsetReset);
if (this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName).isDestinationIsPattern()) {
final Pattern pattern = Pattern.compile(this.bindingServiceProperties.getBindingDestination(inboundName));
stream = streamsBuilder.stream(pattern, consumed);
}
else {
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
this.bindingServiceProperties.getBindingDestination(inboundName));
stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
consumed);
}
//Check to see if event type based routing is enabled.
//See this issue for more context: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes())) {
AtomicBoolean matched = new AtomicBoolean();
AtomicReference<String> topicObject = new AtomicReference<>();
AtomicReference<Headers> headersObject = new AtomicReference<>();
// Processor to retrieve the header value.
stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched, topicObject, headersObject));
// Branching based on event type match.
final KStream<?, ?>[] branch = stream.branch((key, value) -> matched.getAndSet(false));
// Deserialize if we have a branch from above.
final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(
topicObject.get(), headersObject.get(), ((Bytes) value).get()));
return getkStream(bindingProperties, deserializedKStream, nativeDecoding);
}
return getkStream(bindingProperties, stream, nativeDecoding);
}
private KStream<?, ?> getkStream(BindingProperties bindingProperties, KStream<?, ?> stream, boolean nativeDecoding) {
if (!nativeDecoding) {
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType)) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
}
return stream;
}
@SuppressWarnings("rawtypes")
private void addStateStoreBeans(StreamsBuilder streamsBuilder) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Pass through.
}
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
consumed, getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
consumed,
getMaterialized(storeName, k, v));
}
private GlobalKTable<?, ?> getGlobalKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.globalTable(bindingDestination,
consumed);
}
private KTable<?, ?> getKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
final Serde<?> valueSerdeToUse = StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes()) ?
new Serdes.BytesSerde() : valueSerde;
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerdeToUse, autoOffsetReset);
final KTable<?, ?> kTable = materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerdeToUse, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.table(bindingDestination,
consumed);
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes())) {
AtomicBoolean matched = new AtomicBoolean();
AtomicReference<String> topicObject = new AtomicReference<>();
AtomicReference<Headers> headersObject = new AtomicReference<>();
final KStream<?, ?> stream = kTable.toStream();
// Processor to retrieve the header value.
stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched, topicObject, headersObject));
// Branching based on event type match.
final KStream<?, ?>[] branch = stream.branch((key, value) -> matched.getAndSet(false));
// Deserialize if we have a branch from above.
final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(
topicObject.get(), headersObject.get(), ((Bytes) value).get()));
return deserializedKStream.toTable();
}
return kTable;
}
private <K, V> Consumed<K, V> getConsumed(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
Serde<K> keySerde, Serde<V> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
TimestampExtractor timestampExtractor = null;
if (!StringUtils.isEmpty(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName())) {
timestampExtractor = applicationContext.getBean(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName(),
TimestampExtractor.class);
}
final Consumed<K, V> consumed = Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset);
if (timestampExtractor != null) {
consumed.withTimestampExtractor(timestampExtractor);
}
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getConsumedAs())) {
consumed.withName(kafkaStreamsConsumerProperties.getConsumedAs());
}
return consumed;
}
private <K, V> Processor<K, V, Void, Void> eventTypeProcessor(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
AtomicBoolean matched, AtomicReference<String> topicObject, AtomicReference<Headers> headersObject) {
return new Processor<K, V, Void, Void>() {
org.apache.kafka.streams.processor.api.ProcessorContext<?, ?> context;
@Override
public void init(org.apache.kafka.streams.processor.api.ProcessorContext<Void, Void> context) {
Processor.super.init(context);
this.context = context;
}
@Override
public void process(Record<K, V> record) {
final Headers headers = record.headers();
headersObject.set(headers);
final Optional<RecordMetadata> optional = this.context.recordMetadata();
if (optional.isPresent()) {
final RecordMetadata recordMetadata = optional.get();
topicObject.set(recordMetadata.topic());
}
final Iterable<Header> eventTypeHeader = headers.headers(kafkaStreamsConsumerProperties.getEventTypeHeaderKey());
if (eventTypeHeader != null && eventTypeHeader.iterator().hasNext()) {
String eventTypeFromHeader = new String(eventTypeHeader.iterator().next().value());
final String[] eventTypesFromBinding = StringUtils.commaDelimitedListToStringArray(kafkaStreamsConsumerProperties.getEventTypes());
for (String eventTypeFromBinding : eventTypesFromBinding) {
if (eventTypeFromHeader.equals(eventTypeFromBinding)) {
matched.set(true);
break;
}
}
}
}
@Override
public void close() {
}
};
}
}
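A minimal, self-contained sketch of the event-type matching rule applied by eventTypeProcessor above; the header key "event_type" and the binding values are illustrative assumptions, not taken from this changeset:

import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;

public class EventTypeMatchSketch {

    // Returns true when the record's event-type header equals one of the
    // comma-separated event types configured on the binding.
    static boolean matches(Headers headers, String headerKey, String[] boundEventTypes) {
        Iterable<Header> eventTypeHeader = headers.headers(headerKey);
        if (eventTypeHeader.iterator().hasNext()) {
            String fromHeader = new String(eventTypeHeader.iterator().next().value());
            for (String bound : boundEventTypes) {
                if (fromHeader.equals(bound)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Headers headers = new RecordHeaders().add("event_type", "created".getBytes());
        System.out.println(matches(headers, "event_type", new String[] {"created", "updated"})); // true
    }
}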


@@ -0,0 +1,47 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
/**
* Enumeration for various {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} types.
*
* @author Soby Chacko
* @since 3.0.0
*/
public enum DeserializationExceptionHandler {
/**
* Deserialization error handler with log and continue.
* See {@link org.apache.kafka.streams.errors.LogAndContinueExceptionHandler}
*/
logAndContinue,
/**
* Deserialization error handler with log and fail.
* See {@link org.apache.kafka.streams.errors.LogAndFailExceptionHandler}
*/
logAndFail,
/**
* Deserialization error handler with DLQ send.
* See {@link org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler}
*/
sendToDlq,
/**
* Deserialization error handler that silently skips the error and continues.
*/
skipAndContinue;
}
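For orientation, a hedged sketch (not part of this changeset) of how the binder translates these enum values into Kafka Streams configuration, mirroring the if/else chains shown earlier in this compare. It assumes the class sits in the same package as the enum; the sendToDlq and skipAndContinue cases are omitted because they rely on binder-internal classes:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;

public class DeserializationHandlerConfigSketch {

    public static void main(String[] args) {
        Map<String, Object> streamConfiguration = new HashMap<>();
        DeserializationExceptionHandler handler = DeserializationExceptionHandler.logAndContinue;
        // Mirrors the binder logic: map the enum constant to the handler class and
        // register it under default.deserialization.exception.handler.
        if (handler == DeserializationExceptionHandler.logAndContinue) {
            streamConfiguration.put(
                    StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                    LogAndContinueExceptionHandler.class);
        }
        else if (handler == DeserializationExceptionHandler.logAndFail) {
            streamConfiguration.put(
                    StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                    LogAndFailExceptionHandler.class);
        }
        System.out.println(streamConfiguration);
    }
}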


@@ -0,0 +1,72 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.context.properties.ConfigurationPropertiesBindHandlerAdvisor;
import org.springframework.boot.context.properties.bind.AbstractBindHandler;
import org.springframework.boot.context.properties.bind.BindContext;
import org.springframework.boot.context.properties.bind.BindHandler;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
/**
* {@link ConfigurationPropertiesBindHandlerAdvisor} to detect nativeEncoding/Decoding settings
* provided by the application explicitly.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class EncodingDecodingBindAdviceHandler implements ConfigurationPropertiesBindHandlerAdvisor {
private boolean encodingSettingProvided;
private boolean decodingSettingProvided;
public boolean isDecodingSettingProvided() {
return decodingSettingProvided;
}
public boolean isEncodingSettingProvided() {
return this.encodingSettingProvided;
}
@Override
public BindHandler apply(BindHandler bindHandler) {
BindHandler handler = new AbstractBindHandler(bindHandler) {
@Override
public <T> Bindable<T> onStart(ConfigurationPropertyName name,
Bindable<T> target, BindContext context) {
final String configName = name.toString();
if (configName.contains("use") && configName.contains("native") &&
(configName.contains("encoding") || configName.contains("decoding"))) {
BindResult<T> result = context.getBinder().bind(name, target);
if (result.isBound()) {
if (configName.contains("encoding")) {
EncodingDecodingBindAdviceHandler.this.encodingSettingProvided = true;
}
else {
EncodingDecodingBindAdviceHandler.this.decodingSettingProvided = true;
}
return target.withExistingValue(result.get());
}
}
return bindHandler.onStart(name, target, context);
}
};
return handler;
}
}
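A minimal sketch of the substring rule the advice applies in onStart above; the property names below are illustrative assumptions:

public class NativeCodecPropertyMatchSketch {

    // Same predicate as the onStart check: the configuration name must contain
    // "use" and "native", plus either "encoding" or "decoding".
    static boolean matches(String configName) {
        return configName.contains("use") && configName.contains("native")
                && (configName.contains("encoding") || configName.contains("decoding"));
    }

    public static void main(String[] args) {
        // true: an explicit native-decoding setting on a binding
        System.out.println(matches("spring.cloud.stream.bindings.process-in-0.consumer.use-native-decoding"));
        // false: an unrelated consumer property
        System.out.println(matches("spring.cloud.stream.bindings.process-in-0.consumer.concurrency"));
    }
}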


@@ -0,0 +1,51 @@
/*
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
import org.springframework.cloud.stream.config.BindingHandlerAdvise.MappingsProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* {@link EnableAutoConfiguration Auto-configuration} for extended binding metadata for Kafka Streams.
*
* @author Chris Bono
* @since 3.2
*/
@Configuration(proxyBeanMethods = false)
public class ExtendedBindingHandlerMappingsProviderAutoConfiguration {
@Bean
public MappingsProvider kafkaStreamsExtendedPropertiesDefaultMappingsProvider() {
return () -> {
Map<ConfigurationPropertyName, ConfigurationPropertyName> mappings = new HashMap<>();
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.default"));
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.bindings"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.default"));
return mappings;
};
}
}
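As a rough illustration of what these default mappings enable (the property names are hypothetical), a binding-specific property lives under one of the mapped prefixes, so a value declared under spring.cloud.stream.kafka.streams.default can back it when the specific one is absent:

import org.springframework.boot.context.properties.source.ConfigurationPropertyName;

public class DefaultMappingSketch {

    public static void main(String[] args) {
        ConfigurationPropertyName specific = ConfigurationPropertyName
                .of("spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id");
        ConfigurationPropertyName bindingsRoot = ConfigurationPropertyName
                .of("spring.cloud.stream.kafka.streams.bindings");
        // true: the specific property is below the mapped prefix, so defaults under
        // spring.cloud.stream.kafka.streams.default.consumer.* can back it.
        System.out.println(bindingsRoot.isAncestorOf(specific));
    }
}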


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,8 +16,8 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -32,6 +32,8 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
@@ -54,20 +56,22 @@ public class GlobalKTableBinder extends
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
private final KafkaStreamsRegistry kafkaStreamsRegistry;
// @checkstyle:on
public GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue, KafkaStreamsRegistry kafkaStreamsRegistry) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@Override
@@ -76,13 +80,55 @@ public class GlobalKTableBinder extends
String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget);
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeanPerBinding().get(bindingName);
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
return new DefaultBinding<>(name, group, inputTarget, null);
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(),
this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget),
this.kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
return new DefaultBinding<GlobalKTable<Object, Object>>(bindingName, group, inputTarget, streamsBuilderFactoryBean) {
@Override
public boolean isInput() {
return true;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
GlobalKTableBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
GlobalKTableBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
GlobalKTableBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}
@Override
@@ -118,4 +164,9 @@ public class GlobalKTableBinder extends
.getExtendedPropertiesEntryClass();
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
}


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -18,16 +18,20 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for GlobalKTable binder.
@@ -36,41 +40,55 @@ import org.springframework.context.annotation.Configuration;
* @since 2.1.0
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
MultiBinderPropertiesConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
KafkaStreamsJaasConfiguration.class})
public class GlobalKTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered as first
// and as independent from the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties, ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties, adminClientConfigCustomizer.getIfUnique());
}
@Bean
public GlobalKTableBinder GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
return new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsRegistry kafkaStreamsRegistry) {
GlobalKTableBinder globalKTableBinder = new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue, kafkaStreamsRegistry);
globalKTableBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return globalKTableBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered as first
// and as independent from the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
beanFactory.registerSingleton(
KafkaStreamsRegistry.class.getSimpleName(),
outerContext.getBean(KafkaStreamsRegistry.class));
};
}
}


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -23,6 +23,7 @@ import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
@@ -39,16 +40,31 @@ public class GlobalKTableBoundElementFactory
extends AbstractBindingTargetFactory<GlobalKTable> {
private final BindingServiceProperties bindingServiceProperties;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
super(GlobalKTable.class);
this.bindingServiceProperties = bindingServiceProperties;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
public GlobalKTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
@@ -60,7 +76,9 @@ public class GlobalKTableBoundElementFactory
GlobalKTable.class);
proxyFactory.addAdvice(wrapper);
return (GlobalKTable) proxyFactory.getProxy();
final GlobalKTable proxy = (GlobalKTable) proxyFactory.getProxy();
this.kafkaStreamsBindingInformationCatalogue.addBindingNamePerTarget(proxy, name);
return proxy;
}
@Override
@@ -85,8 +103,9 @@ public class GlobalKTableBoundElementFactory
public void wrap(GlobalKTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,17 +16,32 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.StreamsMetadata;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
@@ -37,10 +52,14 @@ import org.springframework.util.StringUtils;
*
* @author Soby Chacko
* @author Renwei Han
* @author Serhii Siryi
* @author Nico Pommerening
* @since 2.1.0
*/
public class InteractiveQueryService {
private static final Log LOG = LogFactory.getLog(InteractiveQueryService.class);
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
@@ -64,18 +83,76 @@ public class InteractiveQueryService {
* @return queryable store.
*/
public <T> T getQueryableStore(String storeName, QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
final RetryTemplate retryTemplate = getRetryTemplate();
KafkaStreams contextSpecificKafkaStreams = getThreadContextSpecificKafkaStreams();
return retryTemplate.execute(context -> {
T store = null;
Throwable throwable = null;
if (contextSpecificKafkaStreams != null) {
try {
store = contextSpecificKafkaStreams.store(
StoreQueryParameters.fromNameAndType(
storeName, storeType));
}
catch (InvalidStateStoreException e) {
// pass through..
throwable = e;
}
}
catch (InvalidStateStoreException ignored) {
// pass through
if (store != null) {
return store;
}
}
return null;
else if (contextSpecificKafkaStreams != null) {
LOG.warn("Store " + storeName
+ " could not be found in Streams context, falling back to all known Streams instances");
}
final Set<KafkaStreams> kafkaStreams = kafkaStreamsRegistry.getKafkaStreams();
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
while (iterator.hasNext()) {
try {
store = iterator.next()
.store(StoreQueryParameters.fromNameAndType(
storeName, storeType));
}
catch (InvalidStateStoreException e) {
// pass through..
throwable = e;
}
}
if (store != null) {
return store;
}
throw new IllegalStateException(
"Error when retrieving state store: " + storeName,
throwable);
});
}
/**
* Retrieves the current {@link KafkaStreams} context if the executing Thread was created by a Streams App (its name contains a matching application id).
*
* @return KafkaStreams instance associated with Thread
*/
private KafkaStreams getThreadContextSpecificKafkaStreams() {
return this.kafkaStreamsRegistry.getKafkaStreams().stream()
.filter(this::filterByThreadName).findAny().orElse(null);
}
/**
* Checks if the supplied {@link KafkaStreams} instance belongs to the calling Thread by matching the Thread's name with the Streams Application Id.
*
* @param streams {@link KafkaStreams} instance to filter
* @return true if Streams Instance is associated with Thread
*/
private boolean filterByThreadName(KafkaStreams streams) {
String applicationId = kafkaStreamsRegistry.streamBuilderFactoryBean(
streams).getStreamsConfiguration()
.getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
// TODO: is there some better way to find out if a Stream App created the Thread?
return Thread.currentThread().getName().contains(applicationId);
}
/**
@@ -106,22 +183,110 @@ public class InteractiveQueryService {
* through all the consumer instances under the same application id and retrieves the
* proper host.
*
* Note that the end user applications must provide `applicaiton.server` as a
* Note that the end user applications must provide `application.server` as a
* configuration property for all the application instances when calling this method.
* If this is not available, then null may be returned.
* @param <K> generic type for key
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return the {@link HostInfo} where the key for the provided store is hosted
* currently
* @return the {@link HostInfo} where the key for the provided store is hosted currently
*/
public <K> HostInfo getHostInfo(String store, K key, Serializer<K> serializer) {
StreamsMetadata streamsMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
final RetryTemplate retryTemplate = getRetryTemplate();
return retryTemplate.execute(context -> {
Throwable throwable = null;
try {
final KeyQueryMetadata keyQueryMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
.stream()
.map((k) -> Optional.ofNullable(k.queryMetadataForKey(store, key, serializer)))
.filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
if (keyQueryMetadata != null) {
return keyQueryMetadata.activeHost();
}
}
catch (Exception e) {
throwable = e;
}
throw new IllegalStateException(
"Error when retrieving state store.", throwable != null ? throwable : new Throwable("Kafka Streams is not ready."));
});
}
private RetryTemplate getRetryTemplate() {
RetryTemplate retryTemplate = new RetryTemplate();
KafkaStreamsBinderConfigurationProperties.StateStoreRetry stateStoreRetry = this.binderConfigurationProperties.getStateStoreRetry();
RetryPolicy retryPolicy = new SimpleRetryPolicy(stateStoreRetry.getMaxAttempts());
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(stateStoreRetry.getBackoffPeriod());
retryTemplate.setBackOffPolicy(backOffPolicy);
retryTemplate.setRetryPolicy(retryPolicy);
return retryTemplate;
}
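// The retry behaviour above is driven entirely by the binder's
// KafkaStreamsBinderConfigurationProperties.StateStoreRetry settings
// (maxAttempts and backoffPeriod).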
/**
* Retrieves and returns the {@link KeyQueryMetadata} associated with the given combination of
* key and state store. If none found, it will return null.
*
* @param <K> generic type for key
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return the {@link KeyQueryMetadata} if available, null otherwise.
*/
public <K> KeyQueryMetadata getKeyQueryMetadata(String store, K key, Serializer<K> serializer) {
return this.kafkaStreamsRegistry.getKafkaStreams()
.stream()
.map((k) -> Optional.ofNullable(k.metadataForKey(store, key, serializer)))
.map((k) -> Optional.ofNullable(k.queryMetadataForKey(store, key, serializer)))
.filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
return streamsMetadata != null ? streamsMetadata.hostInfo() : null;
}
/**
* Retrieves and returns the {@link KafkaStreams} object that is associated with the given combination of
* key and state store. If none found, it will return null.
*
* @param <K> generic type for key
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return {@link KafkaStreams} object associated with this combination of store and key
*/
public <K> KafkaStreams getKafkaStreams(String store, K key, Serializer<K> serializer) {
final AtomicReference<KafkaStreams> kafkaStreamsAtomicReference = new AtomicReference<>();
this.kafkaStreamsRegistry.getKafkaStreams()
.forEach(k -> {
final KeyQueryMetadata keyQueryMetadata = k.queryMetadataForKey(store, key, serializer);
if (keyQueryMetadata != null) {
kafkaStreamsAtomicReference.set(k);
}
});
return kafkaStreamsAtomicReference.get();
}
/**
* Gets the list of {@link HostInfo} on which the provided store is hosted;
* the list can also include the current host's info.
* Kafka Streams will look through all the consumer instances under the same application id
* and retrieve the host info for each.
*
* Note that the end-user applications must provide `application.server` as a configuration property
* for all the application instances when calling this method. If this is not available, then an empty list will be returned.
*
* @param store store name
* @return the list of {@link HostInfo} on which the provided store is hosted
*/
public List<HostInfo> getAllHostsInfo(String store) {
return kafkaStreamsRegistry.getKafkaStreams()
.stream()
.flatMap(k -> k.allMetadataForStore(store).stream())
.filter(Objects::nonNull)
.map(StreamsMetadata::hostInfo)
.collect(Collectors.toList());
}
}
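A hedged usage sketch for the service above; the constructor wiring, the store name "word-counts", and the key/value types are assumptions for illustration only:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class InteractiveQueryUsageSketch {

    private final InteractiveQueryService interactiveQueryService;

    public InteractiveQueryUsageSketch(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    public Long countFor(String word) {
        // Retries according to the binder's state store retry settings before
        // giving up with an IllegalStateException.
        ReadOnlyKeyValueStore<String, Long> store = this.interactiveQueryService
                .getQueryableStore("word-counts", QueryableStoreTypes.keyValueStore());
        return store.get(word);
    }

    public HostInfo hostFor(String word) {
        // Requires application.server to be configured on every instance.
        return this.interactiveQueryService.getHostInfo("word-counts", word,
                Serdes.String().serializer());
    }
}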


@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,14 +16,19 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import java.util.Properties;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.springframework.aop.framework.Advised;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
@@ -37,6 +42,8 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
@@ -73,76 +80,190 @@ class KStreamBinder extends
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
private final KafkaStreamsRegistry kafkaStreamsRegistry;
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver, KafkaStreamsRegistry kafkaStreamsRegistry) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@Override
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget,
// @checkstyle:off
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
this.kafkaStreamsBindingInformationCatalogue
.registerConsumerProperties(inputTarget, properties.getExtension());
KStream<Object, Object> delegate = ((KStreamBoundElementFactory.KStreamWrapperHandler)
((Advised) inputTarget).getAdvisors()[0].getAdvice()).getDelegate();
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(delegate, properties.getExtension());
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget);
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeanPerBinding().get(bindingName);
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(),
this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget),
this.kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
return new DefaultBinding<>(name, group, inputTarget, null);
return new DefaultBinding<KStream<Object, Object>>(bindingName, group,
inputTarget, streamsBuilderFactoryBean) {
@Override
public boolean isInput() {
return true;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
KStreamBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
KStreamBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
KStreamBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindProducer(String name,
KStream<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
this.kafkaTopicProvisioner.provisionProducerDestination(name,
extendedProducerProperties);
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties =
(ExtendedProducerProperties) properties;
this.kafkaTopicProvisioner.provisionProducerDestination(name, extendedProducerProperties);
Serde<?> keySerde = this.keyValueSerdeResolver
.getOuboundKeySerde(properties.getExtension());
Serde<?> valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties,
properties.getExtension());
.getOuboundKeySerde(properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable(outboundBindTarget));
LOG.info("Key Serde used for (outbound) " + name + ": " + keySerde.getClass().getName());
Serde<?> valueSerde;
if (properties.isUseNativeEncoding()) {
valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties,
properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable(outboundBindTarget));
}
else {
valueSerde = Serdes.ByteArray();
}
LOG.info("Value Serde used for (outbound) " + name + ": " + valueSerde.getClass().getName());
to(properties.isUseNativeEncoding(), name, outboundBindTarget,
(Serde<Object>) keySerde, (Serde<Object>) valueSerde);
return new DefaultBinding<>(name, null, outboundBindTarget, null);
(Serde<Object>) keySerde, (Serde<Object>) valueSerde, properties.getExtension());
final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(outboundBindTarget);
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeanPerBinding().get(bindingName);
// We need the application id to pass to DefaultBinding so that it won't be interpreted as an anonymous group.
// If we can't find application.id (which is unlikely), we just default to bindingName.
// This will only be used for lifecycle management through actuator endpoints.
final Properties streamsConfiguration = streamsBuilderFactoryBean.getStreamsConfiguration();
final String applicationId = streamsConfiguration != null ? (String) streamsConfiguration.get("application.id") : bindingName;
return new DefaultBinding<KStream<Object, Object>>(bindingName,
applicationId, outboundBindTarget, streamsBuilderFactoryBean) {
@Override
public boolean isInput() {
return false;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
KStreamBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
KStreamBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
KStreamBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}
@SuppressWarnings("unchecked")
private void to(boolean isNativeEncoding, String name,
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
Serde<Object> valueSerde) {
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
Serde<Object> valueSerde, KafkaStreamsProducerProperties properties) {
final Produced<Object, Object> produced = Produced.with(keySerde, valueSerde);
if (StringUtils.hasText(properties.getProducedAs())) {
produced.withName(properties.getProducedAs());
}
StreamPartitioner streamPartitioner = null;
if (!StringUtils.isEmpty(properties.getStreamPartitionerBeanName())) {
streamPartitioner = getApplicationContext().getBean(properties.getStreamPartitionerBeanName(),
StreamPartitioner.class);
}
if (streamPartitioner != null) {
produced.withStreamPartitioner(streamPartitioner);
}
if (!isNativeEncoding) {
LOG.info("Native encoding is disabled for " + name
+ ". Outbound message conversion done by Spring Cloud Stream.");
outboundBindTarget.filter((k, v) -> v == null)
.to(name, produced);
this.kafkaStreamsMessageConversionDelegate
.serializeOnOutbound(outboundBindTarget)
.to(name, Produced.with(keySerde, valueSerde));
.to(name, produced);
}
else {
LOG.info("Native encoding is enabled for " + name
+ ". Outbound serialization done at the broker.");
outboundBindTarget.to(name, Produced.with(keySerde, valueSerde));
outboundBindTarget.to(name, produced);
}
}
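Both producer hooks above are driven by extended producer properties. A minimal sketch of supplying a custom partitioner bean, assuming the binder's documented spring.cloud.stream.kafka.streams.bindings.<binding>.producer.* property prefix and an illustrative binding name:

import org.apache.kafka.streams.processor.StreamPartitioner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PartitionerConfig {

    // Referenced by name via (binding name illustrative):
    // spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName=orderPartitioner
    // spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.producedAs=order-sink
    @Bean
    public StreamPartitioner<String, Object> orderPartitioner() {
        // Toy routing: stable partition per key, partition 0 for null keys.
        return (topic, key, value, numPartitions) ->
                key == null ? 0 : Math.abs(key.hashCode()) % numPartitions;
    }
}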

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,13 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
@@ -40,15 +39,17 @@ import org.springframework.context.annotation.Import;
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class })
MultiBinderPropertiesConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
KafkaStreamsJaasConfiguration.class})
public class KStreamBinderConfiguration {
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
KafkaProperties kafkaProperties, ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
return new KafkaTopicProvisioner(kafkaStreamsBinderConfigurationProperties,
kafkaProperties);
kafkaProperties, adminClientConfigCustomizer.getIfUnique());
}
@Bean
@@ -58,12 +59,10 @@ public class KStreamBinderConfiguration {
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties, KafkaStreamsRegistry kafkaStreamsRegistry) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties,
kafkaTopicProvisioner, KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver,
kafkaStreamsDlqDispatchers);
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver, kafkaStreamsRegistry);
kStreamBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return kStreamBinder;
@@ -79,10 +78,6 @@ public class KStreamBinderConfiguration {
// and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsMessageConversionDelegate.class.getSimpleName(),
outerContext.getBean(KafkaStreamsMessageConversionDelegate.class));
@@ -94,6 +89,9 @@ public class KStreamBinderConfiguration {
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
beanFactory.registerSingleton(
KafkaStreamsRegistry.class.getSimpleName(),
outerContext.getBean(KafkaStreamsRegistry.class));
};
}
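The provisioner above now accepts an optional AdminClientConfigCustomizer, resolved via ObjectProvider#getIfUnique(). A minimal sketch of contributing one — this assumes the interface's single method receives the mutable admin client property map:

import java.util.Map;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AdminClientCustomizerConfig {

    @Bean
    public AdminClientConfigCustomizer adminClientConfigCustomizer() {
        // Tighten the timeout used during topic provisioning (value illustrative).
        return (Map<String, Object> adminClientProperties) ->
                adminClientProperties.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 10_000);
    }
}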

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -22,6 +22,7 @@ import org.apache.kafka.streams.kstream.KStream;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
@@ -42,18 +43,30 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
KStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
super(KStream.class);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
}
@Override
public KStream createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
return createProxyForKStream(name);
@@ -62,6 +75,18 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
@Override
@SuppressWarnings("unchecked")
public KStream createOutput(final String name) {
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ProducerProperties producerProperties = bindingProperties.getProducer();
if (producerProperties == null) {
producerProperties = this.bindingServiceProperties.getProducerProperties(name);
producerProperties.setUseNativeEncoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isEncodingSettingProvided()) {
producerProperties.setUseNativeEncoding(true);
}
}
return createProxyForKStream(name);
}
@@ -78,6 +103,7 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
.getBindingProperties(name);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(proxy,
bindingProperties);
this.kafkaStreamsBindingInformationCatalogue.addBindingNamePerTarget(proxy, name);
return proxy;
}
@@ -90,15 +116,16 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
private static class KStreamWrapperHandler
static class KStreamWrapperHandler
implements KStreamWrapper, MethodInterceptor {
private KStream<Object, Object> delegate;
public void wrap(KStream<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override
@@ -121,6 +148,9 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
}
KStream<Object, Object> getDelegate() {
return delegate;
}
}
}
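The factory above defaults useNativeDecoding/useNativeEncoding to true unless the application sets them explicitly (tracked by EncodingDecodingBindAdviceHandler). A minimal functional-style processor to make the defaults concrete; the process-in-0 binding name in the comment follows the usual convention and is an assumption here:

import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UppercaseProcessor {

    // By default both sides use native Kafka Streams Serdes. Setting, e.g.,
    // spring.cloud.stream.bindings.process-in-0.consumer.useNativeDecoding=false
    // hands inbound conversion back to Spring Cloud Stream instead.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input.mapValues(value -> value.toUpperCase());
    }
}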

View File

@@ -1,67 +0,0 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binding.StreamListenerParameterAdapter;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
/**
* {@link StreamListenerParameterAdapter} for KStream.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerParameterAdapter
implements StreamListenerParameterAdapter<KStream<?, ?>, KStream<?, ?>> {
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue;
KStreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.KafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
}
@Override
public boolean supports(Class bindingTargetType, MethodParameter methodParameter) {
return KStream.class.isAssignableFrom(bindingTargetType)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
}
@Override
@SuppressWarnings("unchecked")
public KStream adapt(KStream<?, ?> bindingTarget, MethodParameter parameter) {
ResolvableType resolvableType = ResolvableType.forMethodParameter(parameter);
final Class<?> valueClass = (resolvableType.getGeneric(1).getRawClass() != null)
? (resolvableType.getGeneric(1).getRawClass()) : Object.class;
if (this.KafkaStreamsBindingInformationCatalogue
.isUseNativeDecoding(bindingTarget)) {
return bindingTarget;
}
else {
return this.kafkaStreamsMessageConversionDelegate
.deserializeOnInbound(valueClass, bindingTarget);
}
}
}

View File

@@ -1,58 +0,0 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.io.Closeable;
import java.io.IOException;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
/**
* {@link StreamListenerResultAdapter} for KStream.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerResultAdapter implements
StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
@Override
public boolean supports(Class<?> resultType, Class<?> boundElement) {
return KStream.class.isAssignableFrom(resultType)
&& KStream.class.isAssignableFrom(boundElement);
}
@Override
@SuppressWarnings("unchecked")
public Closeable adapt(KStream streamListenerResult,
KStreamBoundElementFactory.KStreamWrapper boundElement) {
boundElement.wrap(streamListenerResult);
return new NoOpCloseable();
}
private static final class NoOpCloseable implements Closeable {
@Override
public void close() throws IOException {
}
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,8 +16,8 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -32,6 +32,8 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
@@ -44,30 +46,30 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
*/
class KTableBinder extends
// @checkstyle:off
AbstractBinder<KTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements
ExtendedPropertiesBinder<KTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
// @checkstyle:on
private final KafkaStreamsRegistry kafkaStreamsRegistry;
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue, KafkaStreamsRegistry kafkaStreamsRegistry) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@Override
@@ -78,21 +80,62 @@ class KTableBinder extends
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget);
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeanPerBinding().get(bindingName);
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
return new DefaultBinding<>(name, group, inputTarget, null);
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(),
this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget),
this.kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
return new DefaultBinding<KTable<Object, Object>>(bindingName, group, inputTarget, streamsBuilderFactoryBean) {
@Override
public boolean isInput() {
return true;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
KTableBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
KTableBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
KTableBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}
@Override
protected Binding<KTable<Object, Object>> doBindProducer(String name,
KTable<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
throw new UnsupportedOperationException(
"No producer level binding is allowed for KTable");
}
@@ -122,4 +165,8 @@ class KTableBinder extends
.getExtendedPropertiesEntryClass();
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
}
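Note the group fallback above: when no group is given, the binder now uses the per-binding applicationId extension property rather than the binder-wide one. A minimal sketch, with an assumed binding name and application id:

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class KTableAppIdDemo {

    public static void main(String[] args) {
        new SpringApplicationBuilder(KTableAppIdDemo.class)
                // The per-binding application id also serves as the binding's group.
                .properties("spring.cloud.stream.kafka.streams.bindings.table-in-0.consumer.applicationId=inventory-table-processor")
                .run(args);
    }
}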

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -18,16 +18,20 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for KTable binder.
@@ -36,42 +40,54 @@ import org.springframework.context.annotation.Configuration;
*/
@SuppressWarnings("ALL")
@Configuration
@Import({ KafkaAutoConfiguration.class,
MultiBinderPropertiesConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
KafkaStreamsJaasConfiguration.class})
public class KTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first
// and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties, ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties, adminClientConfigCustomizer.getIfUnique());
}
@Bean
public KTableBinder kTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KTableBinder kStreamBinder = new KTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
return kStreamBinder;
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsRegistry kafkaStreamsRegistry) {
KTableBinder kTableBinder = new KTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue, kafkaStreamsRegistry);
kTableBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
return kTableBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first
// and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
beanFactory.registerSingleton(
KafkaStreamsRegistry.class.getSimpleName(),
outerContext.getBean(KafkaStreamsRegistry.class));
};
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -23,6 +23,7 @@ import org.apache.kafka.streams.kstream.KTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
@@ -37,16 +38,31 @@ import org.springframework.util.Assert;
class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
private final BindingServiceProperties bindingServiceProperties;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
super(KTable.class);
this.bindingServiceProperties = bindingServiceProperties;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
public KTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
@@ -55,7 +71,9 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
KTableBoundElementFactory.KTableWrapper.class, KTable.class);
proxyFactory.addAdvice(wrapper);
return (KTable) proxyFactory.getProxy();
final KTable proxy = (KTable) proxyFactory.getProxy();
this.kafkaStreamsBindingInformationCatalogue.addBindingNamePerTarget(proxy, name);
return proxy;
}
@Override
@@ -81,8 +99,9 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
public void wrap(KTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override

View File

@@ -1,48 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Application support configuration for Kafka Streams binder.
*
* @deprecated Features provided on this class can be directly configured in the application itself using Kafka Streams.
* @author Soby Chacko
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
@Deprecated
public class KafkaStreamsApplicationSupportAutoConfiguration {
@Bean
@ConditionalOnProperty("spring.cloud.stream.kafka.streams.timeWindow.length")
public TimeWindows configuredTimeWindow(
KafkaStreamsApplicationSupportProperties processorProperties) {
return processorProperties.getTimeWindow().getAdvanceBy() > 0
? TimeWindows.of(processorProperties.getTimeWindow().getLength())
.advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
: TimeWindows.of(processorProperties.getTimeWindow().getLength());
}
}
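Per the deprecation note above, the same window can be declared directly with the Kafka Streams API. A minimal sketch; ofSizeWithNoGrace assumes kafka-streams 3.0+ (older clients would use the TimeWindows.of variant shown above):

import java.time.Duration;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WindowConfig {

    // A 30s hopping window advancing every 5s, replacing the
    // spring.cloud.stream.kafka.streams.timeWindow.* properties.
    @Bean
    public TimeWindows hoppingWindow() {
        return TimeWindows.ofSizeWithNoGrace(Duration.ofSeconds(30))
                .advanceBy(Duration.ofSeconds(5));
    }
}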

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,52 +16,182 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.time.Duration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TaskMetadata;
import org.apache.kafka.streams.ThreadMetadata;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Health indicator for Kafka Streams.
*
* @author Arnaud Jardiné
* @author Soby Chacko
*/
class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator {
public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator implements DisposableBean {
/**
* Static initialization for detecting whether the application is using Kafka client 2.5 or a lower version.
*/
private static ClassLoader CLASS_LOADER = KafkaStreamsBinderHealthIndicator.class.getClassLoader();
private static boolean isKafkaStreams25 = true;
private static Method methodForIsRunning;
static {
try {
Class<?> KAFKA_STREAMS_STATE_CLASS = CLASS_LOADER.loadClass("org.apache.kafka.streams.KafkaStreams$State");
Method[] declaredMethods = KAFKA_STREAMS_STATE_CLASS.getDeclaredMethods();
for (Method m : declaredMethods) {
if (m.getName().equals("isRunning")) {
isKafkaStreams25 = false;
methodForIsRunning = m;
}
}
}
catch (ClassNotFoundException e) {
throw new IllegalStateException("KafkaStreams$State class not found", e);
}
}
private final KafkaStreamsRegistry kafkaStreamsRegistry;
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry) {
private final KafkaStreamsBinderConfigurationProperties configurationProperties;
private final Map<String, Object> adminClientProperties;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private AdminClient adminClient;
private final Lock lock = new ReentrantLock();
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
super("Kafka-streams health check failed");
this.configurationProperties = kafkaStreamsBinderConfigurationProperties;
this.adminClientProperties = kafkaProperties.buildAdminProperties();
KafkaTopicProvisioner.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
kafkaStreamsBinderConfigurationProperties);
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
protected void doHealthCheck(Health.Builder builder) throws Exception {
boolean up = true;
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
up &= kStream.state().isRunning();
builder.withDetails(buildDetails(kStream));
try {
this.lock.lock();
if (this.adminClient == null) {
this.adminClient = AdminClient.create(this.adminClientProperties);
}
final ListTopicsResult listTopicsResult = this.adminClient.listTopics();
listTopicsResult.listings().get(this.configurationProperties.getHealthTimeout(), TimeUnit.SECONDS);
if (this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans().isEmpty()) {
builder.withDetail("No Kafka Streams bindings have been established", "Kafka Streams binder did not detect any processors");
builder.status(Status.UNKNOWN);
}
else {
boolean up = true;
final Set<KafkaStreams> kafkaStreams = kafkaStreamsRegistry.getKafkaStreams();
Set<KafkaStreams> allKafkaStreams = new HashSet<>(kafkaStreams);
if (this.configurationProperties.isIncludeStoppedProcessorsForHealthCheck()) {
allKafkaStreams.addAll(kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().values());
}
for (KafkaStreams kStream : allKafkaStreams) {
if (isKafkaStreams25) {
up &= kStream.state().isRunningOrRebalancing();
}
else {
// if Kafka client version is lower than 2.5, then call the method reflectively.
final boolean isRunningInvokedResult = (boolean) methodForIsRunning.invoke(kStream.state());
up &= isRunningInvokedResult;
}
builder.withDetails(buildDetails(kStream));
}
builder.status(up ? Status.UP : Status.DOWN);
}
}
catch (Exception e) {
builder.withDetail("No topic information available", "Kafka broker is not reachable");
builder.status(Status.DOWN);
builder.withException(e);
}
finally {
this.lock.unlock();
}
builder.status(up ? Status.UP : Status.DOWN);
}
private static Map<String, Object> buildDetails(KafkaStreams kStreams) {
private Map<String, Object> buildDetails(KafkaStreams kafkaStreams) throws Exception {
final Map<String, Object> details = new HashMap<>();
if (kStreams.state().isRunning()) {
for (ThreadMetadata metadata : kStreams.localThreadsMetadata()) {
details.put("threadName", metadata.threadName());
details.put("threadState", metadata.threadState());
details.put("activeTasks", taskDetails(metadata.activeTasks()));
details.put("standbyTasks", taskDetails(metadata.standbyTasks()));
final Map<String, Object> perAppIdDetails = new HashMap<>();
boolean isRunningResult;
if (isKafkaStreams25) {
isRunningResult = kafkaStreams.state().isRunningOrRebalancing();
}
else {
// if Kafka client version is lower than 2.5, then call the method reflectively.
isRunningResult = (boolean) methodForIsRunning.invoke(kafkaStreams.state());
}
if (isRunningResult) {
final Set<ThreadMetadata> threadMetadata = kafkaStreams.metadataForLocalThreads();
for (ThreadMetadata metadata : threadMetadata) {
perAppdIdDetails.put("threadName", metadata.threadName());
perAppdIdDetails.put("threadState", metadata.threadState());
perAppdIdDetails.put("adminClientId", metadata.adminClientId());
perAppdIdDetails.put("consumerClientId", metadata.consumerClientId());
perAppdIdDetails.put("restoreConsumerClientId", metadata.restoreConsumerClientId());
perAppdIdDetails.put("producerClientIds", metadata.producerClientIds());
perAppdIdDetails.put("activeTasks", taskDetails(metadata.activeTasks()));
perAppdIdDetails.put("standbyTasks", taskDetails(metadata.standbyTasks()));
}
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
details.put(applicationId, perAppIdDetails);
}
else {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
String applicationId = null;
if (streamsBuilderFactoryBean != null) {
applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
}
else {
final Map<String, KafkaStreams> stoppedKafkaStreamsPerBinding = kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams();
for (String appId : stoppedKafkaStreamsPerBinding.keySet()) {
if (stoppedKafkaStreamsPerBinding.get(appId).equals(kafkaStreams)) {
applicationId = appId;
}
}
}
details.put(applicationId, String.format("The processor with application.id %s is down. Current state: %s", applicationId, kafkaStreams.state()));
}
return details;
}
@@ -70,12 +200,29 @@ class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator {
final Map<String, Object> details = new HashMap<>();
for (TaskMetadata metadata : taskMetadata) {
details.put("taskId", metadata.taskId());
details.put("partitions",
metadata.topicPartitions().stream().map(
p -> "partition=" + p.partition() + ", topic=" + p.topic())
.collect(Collectors.toList()));
if (details.containsKey("partitions")) {
@SuppressWarnings("unchecked")
List<String> partitionsInfo = (List<String>) details.get("partitions");
partitionsInfo.addAll(addPartitionsInfo(metadata));
}
else {
details.put("partitions",
addPartitionsInfo(metadata));
}
}
return details;
}
private static List<String> addPartitionsInfo(TaskMetadata metadata) {
return metadata.topicPartitions().stream().map(
p -> "partition=" + p.partition() + ", topic=" + p.topic())
.collect(Collectors.toList());
}
@Override
public void destroy() throws Exception {
if (adminClient != null) {
adminClient.close(Duration.ofSeconds(0));
}
}
}
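The check above first probes the broker via AdminClient#listTopics (bounded by healthTimeout, in seconds) and can optionally include processors stopped through the bindings endpoint. A minimal sketch of tuning both knobs; the property names are derived from the configuration getters above and are assumptions:

import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class HealthTuningDemo {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(HealthTuningDemo.class);
        app.setDefaultProperties(Map.of(
                // seconds to wait for listTopics before reporting DOWN
                "spring.cloud.stream.kafka.streams.binder.healthTimeout", "60",
                // also report on processors stopped via the bindings actuator endpoint
                "spring.cloud.stream.kafka.streams.binder.includeStoppedProcessorsForHealthCheck", "true"));
        app.run(args);
    }
}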

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,9 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.actuate.autoconfigure.health.ConditionalOnEnabledHealthIndicator;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -30,13 +33,18 @@ import org.springframework.context.annotation.Configuration;
@Configuration
@ConditionalOnClass(name = "org.springframework.boot.actuate.health.HealthIndicator")
@ConditionalOnEnabledHealthIndicator("binders")
class KafkaStreamsBinderHealthIndicatorConfiguration {
public class KafkaStreamsBinderHealthIndicatorConfiguration {
@Bean
@ConditionalOnBean(KafkaStreamsRegistry.class)
KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry);
public KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
ObjectProvider<KafkaStreamsRegistry> kafkaStreamsRegistry,
@Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties, KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
if (kafkaStreamsRegistry.getIfUnique() != null) {
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry.getIfUnique(), kafkaStreamsBinderConfigurationProperties,
kafkaProperties, kafkaStreamsBindingInformationCatalogue);
}
return null;
}
}
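Because this configuration is guarded by @ConditionalOnEnabledHealthIndicator("binders"), the whole indicator can be switched off with the standard Boot toggle. A minimal sketch:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class HealthToggleDemo {

    public static void main(String[] args) {
        // Standard Boot convention behind @ConditionalOnEnabledHealthIndicator("binders").
        System.setProperty("management.health.binders.enabled", "false");
        SpringApplication.run(HealthToggleDemo.class, args);
    }
}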

View File

@@ -0,0 +1,240 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.function.ToDoubleFunction;
import io.micrometer.core.instrument.FunctionCounter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Kafka Streams binder metrics implementation that exports the metrics available
* through {@link KafkaStreams#metrics()} into a Micrometer {@link io.micrometer.core.instrument.MeterRegistry}.
*
* Boot 2.2 users need to rely on this class for Kafka Streams metrics, since Micrometer only added
* direct Kafka Streams support in version 1.4.0, which Boot 2.3 includes.
* For users on Boot 2.3, this class is not activated (see the configuration for the various
* conditionals used).
*
* For the most part, this class is a copy of the Micrometer Kafka Streams support that was added in version 1.4.0.
* We will keep this class, as long as we support Boot 2.2.x.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class KafkaStreamsBinderMetrics {
static final String DEFAULT_VALUE = "unknown";
static final String CLIENT_ID_TAG_NAME = "client-id";
static final String METRIC_GROUP_APP_INFO = "app-info";
static final String VERSION_METRIC_NAME = "version";
static final String START_TIME_METRIC_NAME = "start-time-ms";
static final String KAFKA_VERSION_TAG_NAME = "kafka-version";
static final String METRIC_NAME_PREFIX = "kafka.";
static final String METRIC_GROUP_METRICS_COUNT = "kafka-metrics-count";
private String kafkaVersion = DEFAULT_VALUE;
private String clientId = DEFAULT_VALUE;
private final MeterRegistry meterRegistry;
private MeterBinder meterBinder;
private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
private volatile Set<MetricName> currentMeters = new HashSet<>();
public KafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
}
public void bindTo(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans) {
if (this.meterBinder == null) {
this.meterBinder = registry -> {
if (streamsBuilderFactoryBeans != null) {
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
if (streamsBuilderFactoryBean.isRunning()) {
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
final Map<MetricName, ? extends Metric> metrics = kafkaStreams.metrics();
prepareToBindMetrics(registry, metrics);
checkAndBindMetrics(registry, metrics);
}
}
}
};
}
this.meterBinder.bindTo(this.meterRegistry);
}
public void addMetrics(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans) {
synchronized (KafkaStreamsBinderMetrics.this) {
this.bindTo(streamsBuilderFactoryBeans);
}
}
void prepareToBindMetrics(MeterRegistry registry, Map<MetricName, ? extends Metric> metrics) {
Metric startTime = null;
for (Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
MetricName name = entry.getKey();
if (clientId.equals(DEFAULT_VALUE) && name.tags().get(CLIENT_ID_TAG_NAME) != null) {
clientId = name.tags().get(CLIENT_ID_TAG_NAME);
}
if (METRIC_GROUP_APP_INFO.equals(name.group())) {
if (VERSION_METRIC_NAME.equals(name.name())) {
kafkaVersion = (String) entry.getValue().metricValue();
}
else if (START_TIME_METRIC_NAME.equals(name.name())) {
startTime = entry.getValue();
}
}
}
if (startTime != null) {
bindMeter(registry, startTime, meterName(startTime), meterTags(startTime));
}
}
private void bindMeter(MeterRegistry registry, Metric metric, String name, Iterable<Tag> tags) {
if (name.endsWith("total") || name.endsWith("count")) {
registerCounter(registry, metric, name, tags);
}
else {
registerGauge(registry, metric, name, tags);
}
}
private void registerCounter(MeterRegistry registry, Metric metric, String name, Iterable<Tag> tags) {
FunctionCounter.builder(name, metric, toMetricValue())
.tags(tags)
.description(metric.metricName().description())
.register(registry);
}
private ToDoubleFunction<Metric> toMetricValue() {
return metric -> ((Number) metric.metricValue()).doubleValue();
}
private void registerGauge(MeterRegistry registry, Metric metric, String name, Iterable<Tag> tags) {
Gauge.builder(name, metric, toMetricValue())
.tags(tags)
.description(metric.metricName().description())
.register(registry);
}
private List<Tag> meterTags(Metric metric) {
return meterTags(metric, false);
}
private String meterName(Metric metric) {
String name = METRIC_NAME_PREFIX + metric.metricName().group() + "." + metric.metricName().name();
return name.replaceAll("-metrics", "").replaceAll("-", ".");
}
private List<Tag> meterTags(Metric metric, boolean includeCommonTags) {
List<Tag> tags = new ArrayList<>();
metric.metricName().tags().forEach((key, value) -> tags.add(Tag.of(key, value)));
tags.add(Tag.of(KAFKA_VERSION_TAG_NAME, kafkaVersion));
return tags;
}
private boolean differentClient(List<Tag> tags) {
for (Tag tag : tags) {
if (tag.getKey().equals(CLIENT_ID_TAG_NAME)) {
if (!clientId.equals(tag.getValue())) {
return true;
}
}
}
return false;
}
void checkAndBindMetrics(MeterRegistry registry, Map<MetricName, ? extends Metric> metrics) {
if (!currentMeters.equals(metrics.keySet())) {
currentMeters = new HashSet<>(metrics.keySet());
metrics.forEach((name, metric) -> {
//Filter out non-numeric values
if (!(metric.metricValue() instanceof Number)) {
return;
}
//Filter out metrics from groups that include metadata
if (METRIC_GROUP_APP_INFO.equals(name.group())) {
return;
}
if (METRIC_GROUP_METRICS_COUNT.equals(name.group())) {
return;
}
String meterName = meterName(metric);
List<Tag> meterTagsWithCommonTags = meterTags(metric, true);
//Kafka reports some metrics with a lower number of tags (e.g. with/without topic or partition tag);
//remove meters with fewer tags
boolean hasLessTags = false;
for (Meter other : registry.find(meterName).meters()) {
List<Tag> tags = other.getId().getTags();
// Only consider meters from the same client before filtering
if (differentClient(tags)) {
break;
}
if (tags.size() < meterTagsWithCommonTags.size()) {
registry.remove(other);
}
// Check if already exists
else if (tags.size() == meterTagsWithCommonTags.size()) {
if (tags.equals(meterTagsWithCommonTags)) {
return;
}
else {
break;
}
}
else {
hasLessTags = true;
}
}
if (hasLessTags) {
return;
}
bindMeter(registry, metric, meterName, meterTags(metric));
});
}
}
}
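For reference, meterName above maps e.g. group stream-thread-metrics / name process-total to kafka.stream.thread.process.total. A minimal sketch of wiring the exporter by hand, which the binder normally does itself; factoryBeans stands in for the factory beans the binder tracks:

import java.util.Set;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderMetrics;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;

public class MetricsWiringDemo {

    public static void bind(Set<StreamsBuilderFactoryBean> factoryBeans) {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        KafkaStreamsBinderMetrics metrics = new KafkaStreamsBinderMetrics(registry);
        // Snapshots KafkaStreams#metrics() of every running factory bean into Micrometer.
        metrics.addMetrics(factoryBeans);
        registry.getMeters().forEach(meter -> System.out.println(meter.getId()));
    }
}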

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,46 +16,63 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Collection;
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingClass;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.boot.context.properties.bind.PropertySourcesPlaceholdersResolver;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.cloud.stream.binder.BinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.function.FunctionDetectorCondition;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.serde.CompositeNonNativeSerde;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BinderProperties;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.integration.context.IntegrationContextUtils;
import org.springframework.integration.support.utils.IntegrationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.lang.Nullable;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.util.ObjectUtils;
import org.springframework.util.ReflectionUtils;
import org.springframework.util.StringUtils;
/**
@@ -65,6 +82,7 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
* @author Gary Russell
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsExtendedBindingProperties.class)
@ConditionalOnBean(BindingService.class)
@AutoConfigureAfter(BindingServiceConfiguration.class)
@@ -76,11 +94,14 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
private static final String GLOBALKTABLE_BINDER_TYPE = "globalktable";
private static final String CONSUMER_PROPERTIES_PREFIX = "consumer.";
private static final String PRODUCER_PROPERTIES_PREFIX = "producer.";
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(
KafkaProperties kafkaProperties, ConfigurableEnvironment environment,
BindingServiceProperties properties) {
BindingServiceProperties properties, ConfigurableApplicationContext context) throws Exception {
final Map<String, BinderConfiguration> binderConfigurations = getBinderConfigurations(
properties);
for (Map.Entry<String, BinderConfiguration> entry : binderConfigurations
@@ -93,7 +114,19 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
Map<String, Object> binderProperties = new HashMap<>();
this.flatten(null, binderConfiguration.getProperties(), binderProperties);
environment.getPropertySources().addFirst(
new MapPropertySource("kafkaStreamsBinderEnv", binderProperties));
new MapPropertySource(entry.getKey() + "-kafkaStreamsBinderEnv", binderProperties));
Binder binder = new Binder(ConfigurationPropertySources.get(environment),
new PropertySourcesPlaceholdersResolver(environment),
IntegrationUtils.getConversionService(context.getBeanFactory()), null);
final Constructor<KafkaStreamsBinderConfigurationProperties> kafkaStreamsBinderConfigurationPropertiesConstructor =
ReflectionUtils.accessibleConstructor(KafkaStreamsBinderConfigurationProperties.class, KafkaProperties.class);
final KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties =
BeanUtils.instantiateClass(kafkaStreamsBinderConfigurationPropertiesConstructor, kafkaProperties);
final BindResult<KafkaStreamsBinderConfigurationProperties> bind = binder.bind("spring.cloud.stream.kafka.streams.binder", Bindable.ofInstance(kafkaStreamsBinderConfigurationProperties));
context.getBeanFactory().registerSingleton(
entry.getKey() + "-KafkaStreamsBinderConfigurationProperties",
bind.get());
}
}
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
@@ -134,7 +167,7 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
public KafkaStreamsConfiguration kafkaStreamsConfiguration(
KafkaStreamsBinderConfigurationProperties properties,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties properties,
Environment environment) {
KafkaProperties kafkaProperties = properties.getKafkaProperties();
Map<String, Object> streamsProperties = kafkaProperties.buildStreamsProperties();
@@ -150,15 +183,32 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean("streamConfigGlobalProperties")
public Map<String, Object> streamConfigGlobalProperties(
KafkaStreamsBinderConfigurationProperties configProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties configProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration, ConfigurableEnvironment environment,
SendToDlqAndContinue sendToDlqAndContinue) {
Properties properties = kafkaStreamsConfiguration.asProperties();
// Override the Spring Boot bootstrap server setting, if it was left at its default,
// with the value configured in the binder
String kafkaConnectionString = configProperties.getKafkaConnectionString();
if (kafkaConnectionString != null && kafkaConnectionString.equals("localhost:9092")) {
//Make sure the application has indeed set the property.
String kafkaStreamsBinderBroker = environment.getProperty("spring.cloud.stream.kafka.streams.binder.brokers");
if (StringUtils.isEmpty(kafkaStreamsBinderBroker)) {
//Kafka Streams binder specific property for brokers is not set by the application.
//See if there is one configured at the kafka binder level.
String kafkaBinderBroker = environment.getProperty("spring.cloud.stream.kafka.binder.brokers");
if (!StringUtils.isEmpty(kafkaBinderBroker)) {
kafkaConnectionString = kafkaBinderBroker;
configProperties.setBrokers(kafkaConnectionString);
}
}
}
if (ObjectUtils.isEmpty(properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG))) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
kafkaConnectionString);
}
else {
Object bootstrapServerConfig = properties
@@ -169,7 +219,14 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
kafkaConnectionString);
}
}
else if (bootstrapServerConfig instanceof List) {
List bootStrapCollection = (List) bootstrapServerConfig;
if (bootStrapCollection.size() == 1 && bootStrapCollection.get(0).equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
}
}
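The net effect of the checks above is a simple precedence for the effective brokers: an explicitly configured (non-default) bootstrap.servers wins, then the streams binder brokers property, then the plain Kafka binder brokers property; only a value still at the localhost:9092 default is ever replaced. A condensed sketch of that precedence, using the property names from the code above (an illustration, not the binder's API):

import org.springframework.core.env.Environment;
import org.springframework.util.StringUtils;

// Illustration of the broker-resolution precedence implemented above.
final class BrokerPrecedence {

    static String resolve(Environment environment, String current) {
        if (!"localhost:9092".equals(current)) {
            return current; // the application configured brokers explicitly
        }
        String streamsBrokers = environment.getProperty("spring.cloud.stream.kafka.streams.binder.brokers");
        if (StringUtils.hasText(streamsBrokers)) {
            return streamsBrokers;
        }
        String kafkaBrokers = environment.getProperty("spring.cloud.stream.kafka.binder.brokers");
        if (StringUtils.hasText(kafkaBrokers)) {
            return kafkaBrokers;
        }
        return current; // fall back to the localhost default
    }
}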
@@ -186,109 +243,87 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
Serdes.ByteArraySerde.class.getName());
if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndContinue) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class.getName());
LogAndContinueExceptionHandler.class);
}
else if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndFail) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class.getName());
LogAndFailExceptionHandler.class);
}
else if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
SendToDlqAndContinue.class.getName());
RecoveringDeserializationExceptionHandler.class);
properties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
}
if (!ObjectUtils.isEmpty(configProperties.getConfiguration())) {
properties.putAll(configProperties.getConfiguration());
}
Map<String, Object> mergedConsumerConfig = new HashMap<>(configProperties.mergedConsumerConfiguration());
//Add the consumer. prefix where missing (to differentiate these from other property categories such as streams or producer properties)
addPrefix(properties, mergedConsumerConfig, CONSUMER_PROPERTIES_PREFIX);
Map<String, Object> mergedProducerConfig = new HashMap<>(configProperties.mergedProducerConfiguration());
//Add the producer. prefix where missing (to differentiate these from other property categories such as streams or consumer properties)
addPrefix(properties, mergedProducerConfig, PRODUCER_PROPERTIES_PREFIX);
if (!properties.containsKey(StreamsConfig.REPLICATION_FACTOR_CONFIG)) {
properties.put(StreamsConfig.REPLICATION_FACTOR_CONFIG,
(int) configProperties.getReplicationFactor());
}
return properties.entrySet().stream().collect(
Collectors.toMap((e) -> String.valueOf(e.getKey()), Map.Entry::getValue));
}
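For reference, the three handler branches above are driven by a single binder property. A hedged usage sketch follows; HandlerSelectionApp is a hypothetical application class, and the property name is assumed to follow the "spring.cloud.stream.kafka.streams.binder" prefix bound earlier in this configuration.

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

// Hypothetical application selecting one of the handlers mapped above:
// logAndContinue -> LogAndContinueExceptionHandler
// logAndFail     -> LogAndFailExceptionHandler
// sendToDlq      -> RecoveringDeserializationExceptionHandler + SendToDlqAndContinue
@SpringBootApplication
public class HandlerSelectionApp {

    public static void main(String[] args) {
        new SpringApplicationBuilder(HandlerSelectionApp.class)
                .properties("spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=logAndContinue")
                .run(args);
    }
}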
@Bean
public KStreamStreamListenerResultAdapter kstreamStreamListenerResultAdapter() {
return new KStreamStreamListenerResultAdapter();
}
@Bean
public KStreamStreamListenerParameterAdapter kstreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamStreamListenerParameterAdapter(
kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue);
}
@Bean
public KafkaStreamsStreamListenerSetupMethodOrchestrator kafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
ObjectProvider<CleanupConfig> cleanupConfig) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(
bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters,
cleanupConfig.getIfUnique());
}
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
ObjectProvider<CleanupConfig> cleanupConfig,
FunctionCatalog functionCatalog, BindableProxyFactory bindableProxyFactory) {
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
cleanupConfig.getIfUnique(), functionCatalog, bindableProxyFactory);
private void addPrefix(Properties properties, Map<String, Object> mergedConsProdConfig, String prefix) {
Map<String, Object> mergedConfigs = new HashMap<>();
for (String key : mergedConsProdConfig.keySet()) {
mergedConfigs.put(key.startsWith(prefix) ? key : prefix + key, mergedConsProdConfig.get(key));
}
if (!ObjectUtils.isEmpty(mergedConfigs)) {
properties.putAll(mergedConfigs);
}
}
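The addPrefix helper above namespaces merged consumer and producer settings so they do not collide with streams-level keys. A standalone illustration of the transformation, assuming "consumer." as the value of CONSUMER_PROPERTIES_PREFIX (the constant is declared outside this diff):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Standalone illustration of the addPrefix behaviour above. "consumer." is the
// assumed value of CONSUMER_PROPERTIES_PREFIX, declared outside this diff.
public class AddPrefixDemo {

    public static void main(String[] args) {
        Properties properties = new Properties();
        Map<String, Object> merged = new HashMap<>();
        merged.put("max.poll.records", 10);         // unprefixed -> gains the prefix
        merged.put("consumer.fetch.min.bytes", 1);  // already prefixed -> kept as-is

        for (Map.Entry<String, Object> entry : merged.entrySet()) {
            String key = entry.getKey();
            properties.put(key.startsWith("consumer.") ? key : "consumer." + key, entry.getValue());
        }
        // Both entries now live under the consumer. namespace:
        // consumer.max.poll.records=10, consumer.fetch.min.bytes=1
        System.out.println(properties);
    }
}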
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(
CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverterFactory, sendToDlqAndContinue,
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverter, sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue, binderConfigurationProperties);
}
@Bean
public CompositeNonNativeSerde compositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
return new CompositeNonNativeSerde(compositeMessageConverterFactory);
}
@Bean
public KStreamBoundElementFactory kStreamBoundElementFactory(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
return new KStreamBoundElementFactory(bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue);
KafkaStreamsBindingInformationCatalogue, encodingDecodingBindAdviceHandler);
}
@Bean
public KTableBoundElementFactory kTableBoundElementFactory(
BindingServiceProperties bindingServiceProperties) {
return new KTableBoundElementFactory(bindingServiceProperties);
BindingServiceProperties bindingServiceProperties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KTableBoundElementFactory(bindingServiceProperties, encodingDecodingBindAdviceHandler, KafkaStreamsBindingInformationCatalogue);
}
@Bean
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(
BindingServiceProperties properties) {
return new GlobalKTableBoundElementFactory(properties);
BindingServiceProperties properties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new GlobalKTableBoundElementFactory(properties, encodingDecodingBindAdviceHandler, KafkaStreamsBindingInformationCatalogue);
}
@Bean
@@ -306,21 +341,15 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@ConditionalOnMissingBean
public KeyValueSerdeResolver keyValueSerdeResolver(
@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties properties) {
@Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties properties) {
return new KeyValueSerdeResolver(
(Map<String, Object>) streamConfigGlobalProperties, properties);
}
@Bean
public QueryableStoreRegistry queryableStoreTypeRegistry(
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new QueryableStoreRegistry(kafkaStreamsRegistry);
}
@Bean
public InteractiveQueryService interactiveQueryServices(
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties properties) {
@Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties properties) {
return new InteractiveQueryService(kafkaStreamsRegistry, properties);
}
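A sketch of how an application might use the InteractiveQueryService bean defined above to query a state store by name; the store name "prod-id-count-store" is an assumed example, not part of this diff.

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

// Sketch: querying a state store through the bean registered above.
// "prod-id-count-store" is an assumed store name, not part of this diff.
public class CountStoreClient {

    private final InteractiveQueryService interactiveQueryService;

    public CountStoreClient(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    public Long countFor(String key) {
        ReadOnlyKeyValueStore<String, Long> store = interactiveQueryService
                .getQueryableStore("prod-id-count-store", QueryableStoreTypes.keyValueStore());
        return store.get(key);
    }
}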
@@ -332,13 +361,84 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue catalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry);
KafkaStreamsRegistry kafkaStreamsRegistry,
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
@Nullable KafkaStreamsMicrometerListener listener, KafkaProperties kafkaProperties) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener, kafkaProperties);
}
@Bean("kafkaStreamsDlqDispatchers")
public Map<String, KafkaStreamsDlqDispatch> dlqDispatchers() {
return new HashMap<>();
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
ObjectProvider<CleanupConfig> cleanupConfig,
StreamFunctionProperties streamFunctionProperties,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
ObjectProvider<StreamsBuilderFactoryBeanConfigurer> customizerProvider, ConfigurableEnvironment environment) {
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
cleanupConfig.getIfUnique(), streamFunctionProperties, kafkaStreamsBinderConfigurationProperties,
customizerProvider.getIfUnique(), environment);
}
@Bean
public EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler() {
return new EncodingDecodingBindAdviceHandler();
}
@Configuration
@ConditionalOnMissingBean(value = KafkaStreamsBinderMetrics.class, name = "outerContext")
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
protected class KafkaStreamsBinderMetricsConfiguration {
@Bean
@ConditionalOnBean(MeterRegistry.class)
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
@ConditionalOnMissingClass("org.springframework.kafka.core.MicrometerConsumerListener")
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
return new KafkaStreamsBinderMetrics(meterRegistry);
}
@ConditionalOnClass(name = "org.springframework.kafka.core.MicrometerConsumerListener")
@ConditionalOnBean(MeterRegistry.class)
protected class KafkaMicrometer {
@Bean
@ConditionalOnMissingBean(name = "binderStreamsListener")
public KafkaStreamsMicrometerListener binderStreamsListener(MeterRegistry meterRegistry) {
return new KafkaStreamsMicrometerListener(meterRegistry);
}
}
}
@Configuration
@ConditionalOnBean(name = "outerContext")
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
protected class KafkaStreamsBinderMetricsConfigurationWithMultiBinder {
@Bean
@ConditionalOnMissingClass("org.springframework.kafka.core.MicrometerConsumerListener")
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(ConfigurableApplicationContext context) {
MeterRegistry meterRegistry = context.getBean("outerContext", ApplicationContext.class)
.getBean(MeterRegistry.class);
return new KafkaStreamsBinderMetrics(meterRegistry);
}
@ConditionalOnClass(name = "org.springframework.kafka.core.MicrometerConsumerListener")
@ConditionalOnBean(MeterRegistry.class)
protected class KafkaMicrometer {
@Bean
@ConditionalOnMissingBean(name = "binderStreamsListener")
public KafkaStreamsMicrometerListener binderStreamsListener(MeterRegistry meterRegistry) {
return new KafkaStreamsMicrometerListener(meterRegistry);
}
}
}
}
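Both metrics configurations above only activate when a MeterRegistry bean is present. A minimal sketch of satisfying that condition; SimpleMeterRegistry is used purely for illustration, since Spring Boot's actuator normally auto-configures a registry.

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: any MeterRegistry bean satisfies the @ConditionalOnBean guards in
// the metrics configurations above. SimpleMeterRegistry is illustrative only.
@Configuration
public class MeterRegistryConfig {

    @Bean
    public MeterRegistry meterRegistry() {
        return new SimpleMeterRegistry();
    }
}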

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,36 +16,82 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.ApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Common methods used by various Kafka Streams types across the binders.
*
* @author Soby Chacko
* @author Gary Russell
*/
final class KafkaStreamsBinderUtils {
private static final Log LOGGER = LogFactory.getLog(KafkaStreamsBinderUtils.class);
private KafkaStreamsBinderUtils() {
}
static void prepareConsumerBinding(String name, String group,
ApplicationContext context, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
ApplicationContext context, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
RetryTemplate retryTemplate,
ConfigurableListableBeanFactory beanFactory, String bindingName,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties =
(ExtendedConsumerProperties) properties;
if (binderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
// check for deserialization handler at the consumer binding, as that takes precedence.
final DeserializationExceptionHandler deserializationExceptionHandler =
properties.getExtension().getDeserializationExceptionHandler();
if (deserializationExceptionHandler == DeserializationExceptionHandler.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
@@ -56,29 +102,125 @@ final class KafkaStreamsBinderUtils {
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils
Map<String, DlqPartitionFunction> partitionFunctions =
context.getBeansOfType(DlqPartitionFunction.class, false, false);
boolean oneFunctionPresent = partitionFunctions.size() == 1;
Integer dlqPartitions = extendedConsumerProperties.getExtension().getDlqPartitions();
DlqPartitionFunction partitionFunction = oneFunctionPresent
? partitionFunctions.values().iterator().next()
: DlqPartitionFunction.determineFallbackFunction(dlqPartitions, LOGGER);
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
extendedConsumerProperties.getExtension().getDlqProducerProperties()),
binderConfigurationProperties);
kafkaStreamsBindingInformationCatalogue.addDlqProducerFactory(streamsBuilderFactoryBean, producerFactory);
KafkaOperations<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFactory);
Map<String, DlqDestinationResolver> dlqDestinationResolvers =
context.getBeansOfType(DlqDestinationResolver.class, false, false);
BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver =
dlqDestinationResolvers.isEmpty() ? (cr, e) -> new TopicPartition(extendedConsumerProperties.getExtension().getDlqName(),
partitionFunction.apply(group, cr, e)) :
(cr, e) -> new TopicPartition(dlqDestinationResolvers.values().iterator().next().apply(cr, e),
partitionFunction.apply(group, cr, e));
DeadLetterPublishingRecoverer kafkaStreamsBinderDlqRecoverer = !dlqDestinationResolvers.isEmpty() || !StringUtils
.isEmpty(extendedConsumerProperties.getExtension().getDlqName())
? new KafkaStreamsDlqDispatch(
extendedConsumerProperties.getExtension()
.getDlqName(),
binderConfigurationProperties,
extendedConsumerProperties.getExtension())
: null;
? new DeadLetterPublishingRecoverer(kafkaTemplate, destinationResolver)
: null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(
extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName,
binderConfigurationProperties,
extendedConsumerProperties.getExtension());
extendedConsumerProperties.getExtension().getDlqName()) && dlqDestinationResolvers.isEmpty()) {
destinationResolver = (cr, e) -> new TopicPartition("error." + inputTopic + "." + group,
partitionFunction.apply(group, cr, e));
kafkaStreamsBinderDlqRecoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
destinationResolver);
}
SendToDlqAndContinue sendToDlqAndContinue = context
.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic,
kafkaStreamsDlqDispatch);
kafkaStreamsBinderDlqRecoverer);
}
}
kafkaStreamsDlqDispatchers.put(inputTopic, kafkaStreamsDlqDispatch);
if (!StringUtils.hasText(properties.getRetryTemplateName())) {
@SuppressWarnings("unchecked")
BeanDefinition retryTemplateBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<RetryTemplate>) retryTemplate.getClass(),
() -> retryTemplate)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(bindingName + "-RetryTemplate", retryTemplateBeanDefinition);
}
}
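Both DlqPartitionFunction and DlqDestinationResolver are discovered above via getBeansOfType, so an application customizes DLQ routing simply by declaring beans of those types. A hedged sketch follows; the lambda shapes are inferred from their use in this method, and the ".dlq" naming is an arbitrary example.

import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hedged sketch: both hook types are looked up via getBeansOfType(..) above.
// Lambda shapes follow their use in this method: (group, record, throwable)
// for the partition function, (record, exception) for the resolver.
@Configuration
public class DlqCustomizationConfig {

    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        // Send every dead-lettered record to partition 0 of the DLQ topic.
        return (group, record, throwable) -> 0;
    }

    @Bean
    public DlqDestinationResolver dlqDestinationResolver() {
        // Derive the DLQ topic name from the record's original topic.
        return (record, exception) -> record.topic() + ".dlq";
    }
}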
private static DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
Map<String, String> configs = producerProperties.getExtension().getConfiguration();
Assert.state(!configs.containsKey(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG),
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG + " cannot be overridden at the binding level; "
+ "use multiple binders instead");
if (!ObjectUtils.isEmpty(configs)) {
props.putAll(configs);
}
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
static boolean supportsKStream(MethodParameter methodParameter, Class<?> targetBeanClass) {
return KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
}
static void closeDlqProducerFactories(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
final List<ProducerFactory<byte[], byte[]>> dlqProducerFactories =
kafkaStreamsBindingInformationCatalogue.getDlqProducerFactory(streamsBuilderFactoryBean);
if (!CollectionUtils.isEmpty(dlqProducerFactories)) {
for (ProducerFactory<byte[], byte[]> producerFactory : dlqProducerFactories) {
try {
((DisposableBean) producerFactory).destroy();
}
catch (Exception exception) {
throw new IllegalStateException(exception);
}
}
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,35 +16,56 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.util.CollectionUtils;
/**
* A catalogue that provides binding information for Kafka Streams target types such as
* KStream. It also keeps a catalogue for the underlying {@link StreamsBuilderFactoryBean}
* and {@link StreamsConfig} associated with various
* {@link org.springframework.cloud.stream.annotation.StreamListener} methods in the
* Kafka Streams functions in the
* {@link org.springframework.context.ApplicationContext}.
*
* @author Soby Chacko
*/
class KafkaStreamsBindingInformationCatalogue {
public class KafkaStreamsBindingInformationCatalogue {
private final Map<KStream<?, ?>, BindingProperties> bindingProperties = new ConcurrentHashMap<>();
private final Map<KStream<?, ?>, KafkaStreamsConsumerProperties> consumerProperties = new ConcurrentHashMap<>();
private final Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = new HashSet<>();
private final Map<String, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanPerBinding = new HashMap<>();
private final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> consumerPropertiesPerSbfb = new HashMap<>();
private final Map<Object, ResolvableType> outboundKStreamResolvables = new HashMap<>();
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
private final Map<Object, String> bindingNamesPerTarget = new HashMap<>();
private final Map<String, KafkaStreams> previousKafkaStreamsPerApplicationId = new HashMap<>();
private final Map<StreamsBuilderFactoryBean, List<ProducerFactory<byte[], byte[]>>> dlqProducerFactories = new HashMap<>();
/**
* For a given bounded {@link KStream}, retrieve its corresponding destination on the
@@ -96,7 +117,9 @@ class KafkaStreamsBindingInformationCatalogue {
*/
void registerBindingProperties(KStream<?, ?> bindingTarget,
BindingProperties bindingProperties) {
this.bindingProperties.put(bindingTarget, bindingProperties);
if (bindingProperties != null) {
this.bindingProperties.put(bindingTarget, bindingProperties);
}
}
/**
@@ -106,20 +129,122 @@ class KafkaStreamsBindingInformationCatalogue {
*/
void registerConsumerProperties(KStream<?, ?> bindingTarget,
KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
this.consumerProperties.put(bindingTarget, kafkaStreamsConsumerProperties);
}
/**
* Adds a mapping for KStream -> {@link StreamsBuilderFactoryBean}.
* @param streamsBuilderFactoryBean provides the {@link StreamsBuilderFactoryBean}
* mapped to the KStream
*/
void addStreamBuilderFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
this.streamsBuilderFactoryBeans.add(streamsBuilderFactoryBean);
if (kafkaStreamsConsumerProperties != null) {
this.consumerProperties.put(bindingTarget, kafkaStreamsConsumerProperties);
}
}
Set<StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeans() {
return this.streamsBuilderFactoryBeans;
return new HashSet<>(this.streamsBuilderFactoryBeanPerBinding.values());
}
void addStreamBuilderFactoryPerBinding(String binding, StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
this.streamsBuilderFactoryBeanPerBinding.put(binding, streamsBuilderFactoryBean);
}
void addConsumerPropertiesPerSbfb(StreamsBuilderFactoryBean streamsBuilderFactoryBean, ConsumerProperties consumerProperties) {
this.consumerPropertiesPerSbfb.computeIfAbsent(streamsBuilderFactoryBean, k -> new ArrayList<>());
this.consumerPropertiesPerSbfb.get(streamsBuilderFactoryBean).add(consumerProperties);
}
public Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> getConsumerPropertiesPerSbfb() {
return this.consumerPropertiesPerSbfb;
}
Map<String, StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeanPerBinding() {
return this.streamsBuilderFactoryBeanPerBinding;
}
void addOutboundKStreamResolvable(Object key, ResolvableType outboundResolvable) {
this.outboundKStreamResolvables.put(key, outboundResolvable);
}
ResolvableType getOutboundKStreamResolvable(Object key) {
return outboundKStreamResolvables.get(key);
}
/**
* Adds a mapping from a KStream target to its corresponding KeySerde.
* This is used for sending to DLQ when deserialization fails. See {@link KafkaStreamsMessageConversionDelegate}
* for details.
*
* @param kStreamTarget target KStream
* @param keySerde Serde used for the key
*/
void addKeySerde(KStream<?, ?> kStreamTarget, Serde<?> keySerde) {
this.keySerdeInfo.put(kStreamTarget, keySerde);
}
Serde<?> getKeySerde(KStream<?, ?> kStreamTarget) {
return this.keySerdeInfo.get(kStreamTarget);
}
Map<KStream<?, ?>, BindingProperties> getBindingProperties() {
return bindingProperties;
}
Map<KStream<?, ?>, KafkaStreamsConsumerProperties> getConsumerProperties() {
return consumerProperties;
}
void addBindingNamePerTarget(Object target, String bindingName) {
this.bindingNamesPerTarget.put(target, bindingName);
}
String bindingNamePerTarget(Object target) {
return this.bindingNamesPerTarget.get(target);
}
public List<ProducerFactory<byte[], byte[]>> getDlqProducerFactories() {
return this.dlqProducerFactories.values()
.stream()
.flatMap(List::stream)
.collect(Collectors.toList());
}
public List<ProducerFactory<byte[], byte[]>> getDlqProducerFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
return this.dlqProducerFactories.get(streamsBuilderFactoryBean);
}
public void addDlqProducerFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean,
ProducerFactory<byte[], byte[]> producerFactory) {
List<ProducerFactory<byte[], byte[]>> producerFactories = this.dlqProducerFactories.get(streamsBuilderFactoryBean);
if (CollectionUtils.isEmpty(producerFactories)) {
producerFactories = new ArrayList<>();
this.dlqProducerFactories.put(streamsBuilderFactoryBean, producerFactories);
}
producerFactories.add(producerFactory);
}
/**
* Caches the previous KafkaStreams for the application.id when a binding is stopped through the actuator.
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
*
* @param applicationId application.id
* @param kafkaStreams {@link KafkaStreams} object
*/
public void addPreviousKafkaStreamsForApplicationId(String applicationId, KafkaStreams kafkaStreams) {
this.previousKafkaStreamsPerApplicationId.put(applicationId, kafkaStreams);
}
/**
* Remove the previously cached KafkaStreams object.
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
*
* @param applicationId application.id
*/
public void removePreviousKafkaStreamsForApplicationId(String applicationId) {
this.previousKafkaStreamsPerApplicationId.remove(applicationId);
}
/**
* Get all stopped KafkaStreams objects through actuator binding stop.
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
*
* @return stopped KafkaStreams objects map
*/
public Map<String, KafkaStreams> getStoppedKafkaStreams() {
return this.previousKafkaStreamsPerApplicationId;
}
}
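To make the new cache concrete, here is a rough sketch of the stop/restart bookkeeping it enables (see issue #1165). The catalogue method names are real; the orchestration around them is hypothetical, not the binder's actual actuator code.

import java.util.Map;

import org.apache.kafka.streams.KafkaStreams;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBindingInformationCatalogue;

// Rough sketch of the stop/restart bookkeeping enabled by the cache above.
final class BindingStopSketch {

    private final KafkaStreamsBindingInformationCatalogue catalogue;

    BindingStopSketch(KafkaStreamsBindingInformationCatalogue catalogue) {
        this.catalogue = catalogue;
    }

    void onStop(String applicationId, KafkaStreams kafkaStreams) {
        kafkaStreams.close();
        // Park the closed instance so a later restart can find and clean it up.
        this.catalogue.addPreviousKafkaStreamsForApplicationId(applicationId, kafkaStreams);
    }

    void onRestart(String applicationId) {
        Map<String, KafkaStreams> stopped = this.catalogue.getStoppedKafkaStreams();
        if (stopped.containsKey(applicationId)) {
            // Drop the stale entry once the binding is running again.
            this.catalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
        }
    }
}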

View File

@@ -1,152 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.ObjectUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
/**
* Send records in error to a DLQ.
*
* @author Soby Chacko
* @author Rafal Zukowski
* @author Gary Russell
*/
class KafkaStreamsDlqDispatch {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaTemplate<byte[], byte[]> kafkaTemplate;
private final String dlqName;
KafkaStreamsDlqDispatch(String dlqName,
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaConsumerProperties kafkaConsumerProperties) {
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
kafkaConsumerProperties.getDlqProducerProperties()),
kafkaBinderConfigurationProperties);
this.kafkaTemplate = new KafkaTemplate<>(producerFactory);
this.dlqName = dlqName;
}
@SuppressWarnings("unchecked")
public void sendToDlq(byte[] key, byte[] value, int partition) {
ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(this.dlqName,
partition, key, value, null);
StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key))).append("'")
.append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(value))).append("'")
.append(" received from ").append(partittion);
ListenableFuture<SendResult<byte[], byte[]>> sentDlq = null;
try {
sentDlq = this.kafkaTemplate.send(producerRecord);
sentDlq.addCallback(
new ListenableFutureCallback<SendResult<byte[], byte[]>>() {
@Override
public void onFailure(Throwable ex) {
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}
@Override
public void onSuccess(SendResult<byte[], byte[]> result) {
if (KafkaStreamsDlqDispatch.this.logger.isDebugEnabled()) {
KafkaStreamsDlqDispatch.this.logger
.debug("Sent to DLQ " + sb.toString());
}
}
});
}
catch (Exception ex) {
if (sentDlq == null) {
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}
}
}
private DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
private String toDisplayString(String original) {
if (original.length() <= 50) {
return original;
}
return original.substring(0, 50) + "...";
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,69 +16,61 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Arrays;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.cloud.function.core.FluxedConsumer;
import org.springframework.cloud.function.core.FluxedFunction;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderProcessor implements BeanFactoryAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionProcessor.class);
private static final String OUTBOUND = "outbound";
private final BindingServiceProperties bindingServiceProperties;
private final Map<String, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
@@ -86,14 +78,12 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final CleanupConfig cleanupConfig;
private final FunctionCatalog functionCatalog;
private final BindableProxyFactory bindableProxyFactory;
private ConfigurableApplicationContext applicationContext;
private Set<String> origInputs = new TreeSet<>();
private Set<String> origOutputs = new TreeSet<>();
private BeanFactory beanFactory;
private StreamFunctionProperties streamFunctionProperties;
private KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties;
StreamsBuilderFactoryBeanConfigurer customizer;
ConfigurableEnvironment environment;
public KafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@@ -101,133 +91,397 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
CleanupConfig cleanupConfig,
FunctionCatalog functionCatalog,
BindableProxyFactory bindableProxyFactory) {
StreamFunctionProperties streamFunctionProperties,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanConfigurer customizer, ConfigurableEnvironment environment) {
super(bindingServiceProperties, kafkaStreamsBindingInformationCatalogue, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.cleanupConfig = cleanupConfig;
this.functionCatalog = functionCatalog;
this.bindableProxyFactory = bindableProxyFactory;
this.origInputs.addAll(this.bindableProxyFactory.getInputs());
this.origOutputs.addAll(this.bindableProxyFactory.getOutputs());
this.streamFunctionProperties = streamFunctionProperties;
this.kafkaStreamsBinderConfigurationProperties = kafkaStreamsBinderConfigurationProperties;
this.customizer = customizer;
this.environment = environment;
}
private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType) {
int inputCount = 1;
ResolvableType resolvableTypeGeneric = resolvableType.getGeneric(1);
while (resolvableTypeGeneric != null && resolvableTypeGeneric.getRawClass() != null && (resolvableTypeGeneric.getRawClass().equals(Function.class) ||
resolvableTypeGeneric.getRawClass().equals(Consumer.class))) {
inputCount++;
resolvableTypeGeneric = resolvableTypeGeneric.getGeneric(1);
}
final Set<String> inputs = new TreeSet<>(origInputs);
private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType,
KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory,
Method method, String functionName) {
Map<String, ResolvableType> resolvableTypeMap = new LinkedHashMap<>();
final Iterator<String> iterator = inputs.iterator();
if (method != null) { // Component functional bean.
final ResolvableType firstMethodParameter = ResolvableType.forMethodParameter(method, 0);
ResolvableType currentOutputGeneric = ResolvableType.forMethodReturnType(method);
final String next = iterator.next();
resolvableTypeMap.put(next, resolvableType.getGeneric(0));
origInputs.remove(next);
final Set<String> inputs = new LinkedHashSet<>(kafkaStreamsBindableProxyFactory.getInputs());
final Iterator<String> iterator = inputs.iterator();
populateResolvableTypeMap(firstMethodParameter, resolvableTypeMap, iterator, method, functionName);
for (int i = 1; i < inputCount; i++) {
if (iterator.hasNext()) {
ResolvableType generic = resolvableType.getGeneric(1);
if (generic.getRawClass() != null &&
(generic.getRawClass().equals(Function.class) ||
generic.getRawClass().equals(Consumer.class))) {
final String next1 = iterator.next();
resolvableTypeMap.put(next1, generic.getGeneric(0));
origInputs.remove(next1);
}
final Class<?> outputRawclass = currentOutputGeneric.getRawClass();
traverseReturnTypeForComponentBeans(resolvableTypeMap, currentOutputGeneric, inputs, iterator, outputRawclass);
}
else if (resolvableType != null && resolvableType.getRawClass() != null) {
int inputCount = 1;
ResolvableType currentOutputGeneric;
if (resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class)) {
inputCount = 2;
currentOutputGeneric = resolvableType.getGeneric(2);
}
else {
currentOutputGeneric = resolvableType.getGeneric(1);
}
while (currentOutputGeneric.getRawClass() != null && functionOrConsumerFound(currentOutputGeneric)) {
inputCount++;
currentOutputGeneric = currentOutputGeneric.getGeneric(1);
}
final Set<String> inputs = new LinkedHashSet<>(kafkaStreamsBindableProxyFactory.getInputs());
final Iterator<String> iterator = inputs.iterator();
populateResolvableTypeMap(resolvableType, resolvableTypeMap, iterator);
ResolvableType iterableResType = resolvableType;
int i = resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class) ? 2 : 1;
ResolvableType outboundResolvableType;
if (i == inputCount) {
outboundResolvableType = iterableResType.getGeneric(i);
}
else {
while (i < inputCount && iterator.hasNext()) {
iterableResType = iterableResType.getGeneric(1);
if (iterableResType.getRawClass() != null &&
functionOrConsumerFound(iterableResType)) {
populateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
}
i++;
}
outboundResolvableType = iterableResType.getGeneric(1);
}
resolvableTypeMap.put(OUTBOUND, outboundResolvableType);
}
return resolvableTypeMap;
}
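A concrete example of the curried shape that the input-counting loop above walks: each nested Function generic claims one input binding, and the innermost generic becomes the outbound type. The bean below is illustrative only; binding names come from application configuration.

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Illustrative curried bean: two nested Function generics -> two input
// bindings; the innermost KStream generic is treated as the outbound type.
@Configuration
public class CurriedFunctionExample {

    @Bean
    public Function<KStream<String, String>,
            Function<KStream<String, String>, KStream<String, String>>> merge() {
        return left -> right -> left.merge(right);
    }
}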
private void traverseReturnTypeForComponentBeans(Map<String, ResolvableType> resolvableTypeMap, ResolvableType currentOutputGeneric,
Set<String> inputs, Iterator<String> iterator, Class<?> outputRawclass) {
if (outputRawclass != null && !outputRawclass.equals(Void.TYPE)) {
ResolvableType iterableResType = currentOutputGeneric;
int i = 1;
// Traverse through the return signature.
while (i < inputs.size() && iterator.hasNext()) {
if (iterableResType.getRawClass() != null &&
functionOrConsumerFound(iterableResType)) {
populateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
}
iterableResType = iterableResType.getGeneric(1);
i++;
}
if (iterableResType.getRawClass() != null && KStream.class.isAssignableFrom(iterableResType.getRawClass())) {
resolvableTypeMap.put(OUTBOUND, iterableResType);
}
}
}
private boolean functionOrConsumerFound(ResolvableType iterableResType) {
return iterableResType.getRawClass().equals(Function.class) ||
iterableResType.getRawClass().equals(Consumer.class);
}
private void populateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap,
Iterator<String> iterator) {
final String next = iterator.next();
resolvableTypeMap.put(next, resolvableType.getGeneric(0));
if (resolvableType.getRawClass() != null &&
(resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class))
&& iterator.hasNext()) {
resolvableTypeMap.put(iterator.next(), resolvableType.getGeneric(1));
}
}
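For the BiFunction/BiConsumer branch above: the first two generics each claim one input binding, and the third becomes the outbound type. An illustrative bean whose signature this processor would decompose (topic and binding names come from application configuration, not from this code):

import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Illustrative BiFunction bean: generic 0 and generic 1 map to the two input
// bindings resolved above; generic 2 is the outbound KStream.
@Configuration
public class BiFunctionExample {

    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, String>> enrich() {
        return (stream, table) -> stream.join(table, (count, label) -> label + ":" + count);
    }
}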
private void populateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap,
Iterator<String> iterator, Method method, String functionName) {
final String next = iterator.next();
resolvableTypeMap.put(next, resolvableType);
if (method != null) {
final Object bean = beanFactory.getBean(functionName);
if (BiFunction.class.isAssignableFrom(bean.getClass()) || BiConsumer.class.isAssignableFrom(bean.getClass())) {
resolvableTypeMap.put(iterator.next(), ResolvableType.forMethodParameter(method, 1));
}
}
}
private ResolvableType checkOutboundForComposedFunctions(
ResolvableType outputResolvableType) {
ResolvableType currentOutputGeneric;
if (outputResolvableType.getRawClass() != null && outputResolvableType.getRawClass().isAssignableFrom(BiFunction.class)) {
currentOutputGeneric = outputResolvableType.getGeneric(2);
}
else {
currentOutputGeneric = outputResolvableType.getGeneric(1);
}
while (currentOutputGeneric.getRawClass() != null && functionOrConsumerFound(currentOutputGeneric)) {
currentOutputGeneric = currentOutputGeneric.getGeneric(1);
}
return currentOutputGeneric;
}
/**
* This method must be kept stateless. In the case of multiple function beans in an application,
* isolated {@link KafkaStreamsBindableProxyFactory} instances are passed in separately for those functions. If the
* state is shared between invocations, that will create potential race conditions. Hence, invocations of this method
* should not be dependent on state modified by a previous invocation.
*
* @param resolvableType type of the binding
* @param functionName bean name of the function
* @param kafkaStreamsBindableProxyFactory bindable proxy factory for the Kafka Streams type
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
public void orchestrateFunctionInvoking(ResolvableType resolvableType, String functionName) {
final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType);
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(stringResolvableTypeMap, functionName);
public void setupFunctionInvokerForKafkaStreams(ResolvableType resolvableType, String functionName,
KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory, Method method,
ResolvableType outputResolvableType,
String... composedFunctionNames) {
final Map<String, ResolvableType> resolvableTypes = buildTypeMap(resolvableType,
kafkaStreamsBindableProxyFactory, method, functionName);
ResolvableType outboundResolvableType;
if (outputResolvableType != null) {
outboundResolvableType = checkOutboundForComposedFunctions(outputResolvableType);
resolvableTypes.remove(OUTBOUND);
}
else {
outboundResolvableType = resolvableTypes.remove(OUTBOUND);
}
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(resolvableTypes, functionName);
try {
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(Consumer.class)) {
FluxedConsumer fluxedConsumer = functionCatalog.lookup(FluxedConsumer.class, functionName);
Assert.isTrue(fluxedConsumer != null,
"No corresponding consumer beans found in the catalog");
Object target = fluxedConsumer.getTarget();
Consumer<Object> consumer = Consumer.class.isAssignableFrom(target.getClass()) ? (Consumer) target : null;
if (consumer != null) {
consumer.accept(adaptedInboundArguments[0]);
Consumer<Object> consumer = (Consumer) this.beanFactory.getBean(functionName);
consumer.accept(adaptedInboundArguments[0]);
}
else if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiConsumer.class)) {
BiConsumer<Object, Object> biConsumer = (BiConsumer) this.beanFactory.getBean(functionName);
biConsumer.accept(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else if (method != null) { // Handling component functional beans
final Object bean = beanFactory.getBean(functionName);
if (Consumer.class.isAssignableFrom(bean.getClass())) {
((Consumer) bean).accept(adaptedInboundArguments[0]);
}
else if (BiConsumer.class.isAssignableFrom(bean.getClass())) {
((BiConsumer) bean).accept(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else if (Function.class.isAssignableFrom(bean.getClass()) || BiFunction.class.isAssignableFrom(bean.getClass())) {
Object result;
if (BiFunction.class.isAssignableFrom(bean.getClass())) {
result = ((BiFunction) bean).apply(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else {
result = ((Function) bean).apply(adaptedInboundArguments[0]);
}
result = handleCurriedFunctions(adaptedInboundArguments, result);
if (result != null) {
final Set<String> outputs = new TreeSet<>(kafkaStreamsBindableProxyFactory.getOutputs());
final Iterator<String> outboundDefinitionIterator = outputs.iterator();
if (result.getClass().isArray()) {
final String initialInput = resolvableTypes.keySet().iterator().next();
final StreamsBuilderFactoryBean streamsBuilderFactoryBean =
this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeanPerBinding().get(initialInput);
handleKStreamArrayOutbound(resolvableType, functionName, kafkaStreamsBindableProxyFactory,
outboundResolvableType, (Object[]) result, streamsBuilderFactoryBean);
}
else {
if (KTable.class.isAssignableFrom(result.getClass())) {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
outboundResolvableType : resolvableType.getGeneric(1), ((KTable) result).toStream(), outboundDefinitionIterator);
}
else {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType, (KStream) result, outboundDefinitionIterator);
}
}
}
}
}
else {
Object result = null;
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiFunction.class)) {
if (composedFunctionNames != null && composedFunctionNames.length > 0) {
result = handleComposedFunctions(adaptedInboundArguments, result, composedFunctionNames);
}
else {
BiFunction<Object, Object, Object> biFunction = (BiFunction) beanFactory.getBean(functionName);
result = biFunction.apply(adaptedInboundArguments[0], adaptedInboundArguments[1]);
result = handleCurriedFunctions(adaptedInboundArguments, result);
}
}
else {
if (composedFunctionNames != null && composedFunctionNames.length > 0) {
result = handleComposedFunctions(adaptedInboundArguments, result, composedFunctionNames);
}
else {
Function<Object, Object> function = (Function) beanFactory.getBean(functionName);
result = function.apply(adaptedInboundArguments[0]);
result = handleCurriedFunctions(adaptedInboundArguments, result);
}
}
if (result != null) {
final Set<String> outputs = new TreeSet<>(kafkaStreamsBindableProxyFactory.getOutputs());
final Iterator<String> outboundDefinitionIterator = outputs.iterator();
if (result.getClass().isArray()) {
final String initialInput = resolvableTypes.keySet().iterator().next();
final StreamsBuilderFactoryBean streamsBuilderFactoryBean =
this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeanPerBinding().get(initialInput);
handleKStreamArrayOutbound(resolvableType, functionName, kafkaStreamsBindableProxyFactory,
outboundResolvableType, (Object[]) result, streamsBuilderFactoryBean);
}
else {
if (KTable.class.isAssignableFrom(result.getClass())) {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
outboundResolvableType : resolvableType.getGeneric(1), ((KTable) result).toStream(), outboundDefinitionIterator);
}
else {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
outboundResolvableType : resolvableType.getGeneric(1), (KStream) result, outboundDefinitionIterator);
}
}
}
}
}
catch (Exception ex) {
throw new BeanInitializationException("Cannot setup StreamListener for foobar", ex);
throw new BeanInitializationException("Cannot setup function invoker for this Kafka Streams function.", ex);
}
}
@SuppressWarnings({"unchecked"})
private Object handleComposedFunctions(Object[] adaptedInboundArguments, Object result, String... composedFunctionNames) {
Object bean = beanFactory.getBean(composedFunctionNames[0]);
if (BiFunction.class.isAssignableFrom(bean.getClass())) {
result = ((BiFunction<Object, Object, Object>) bean).apply(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else if (Function.class.isAssignableFrom(bean.getClass())) {
result = ((Function<Object, Object>) bean).apply(adaptedInboundArguments[0]);
}
// If the return is a curried function, apply it
result = handleCurriedFunctions(adaptedInboundArguments, result);
// Apply composed functions
return applyComposedFunctions(result, composedFunctionNames);
}
@SuppressWarnings({"unchecked"})
private Object applyComposedFunctions(Object result, String[] composedFunctionNames) {
for (int i = 1; i < composedFunctionNames.length; i++) {
final Object bean = beanFactory.getBean(composedFunctionNames[i]);
if (Consumer.class.isAssignableFrom(bean.getClass())) {
((Consumer<Object>) bean).accept(result);
result = null;
}
else if (Function.class.isAssignableFrom(bean.getClass())) {
result = ((Function<Object, Object>) bean).apply(result);
}
else {
throw new IllegalStateException("You can only compose functions of type either java.util.function.Function or java.util.function.Consumer.");
}
}
return result;
}
@SuppressWarnings({"unchecked"})
private Object handleCurriedFunctions(Object[] adaptedInboundArguments, Object result) {
int i = 1;
while (result instanceof Function || result instanceof Consumer) {
if (result instanceof Function) {
result = ((Function<Object, Object>) result).apply(adaptedInboundArguments[i]);
}
else {
((Consumer<Object>) result).accept(adaptedInboundArguments[i]);
result = null;
}
i++;
}
return result;
}
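// Editorial sketch (assumption, not part of the original source): the loop above unwinds
// curried function beans, i.e. functions that return functions. With a KStream and a KTable
// input, such a bean might be declared as:
//
//   @Bean
//   public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, String>>> process() {
//       return stream -> table -> stream.join(table, (count, name) -> name + ":" + count);
//   }
//
// Each apply(...) consumes the next adapted inbound argument until a non-function result
// (or a terminal Consumer) is reached.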
private void handleSingleKStreamOutbound(Map<String, ResolvableType> resolvableTypes, ResolvableType outboundResolvableType,
KStream<Object, Object> result, Iterator<String> outboundDefinitionIterator) {
if (outboundDefinitionIterator.hasNext()) {
String outbound = outboundDefinitionIterator.next();
Object targetBean = handleSingleKStreamOutbound(result, outbound);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean,
outboundResolvableType);
final String next = resolvableTypes.keySet().iterator().next();
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeanPerBinding().get(next);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(outbound, streamsBuilderFactoryBean);
}
}
private Object handleSingleKStreamOutbound(KStream<Object, Object> result, String next) {
Object targetBean = this.applicationContext.getBean(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap(result);
return targetBean;
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private void handleKStreamArrayOutbound(ResolvableType resolvableType, String functionName,
KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory,
ResolvableType outboundResolvableType, Object[] result,
StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
// Binding target as the output bindings were deferred in the KafkaStreamsBindableProxyFactory
// due to the fact that it didn't know the returned array size. At this point in the execution,
// we know exactly the number of outbound components (from the array length), so do the binding.
final int length = result.length;
List<String> outputBindings = getOutputBindings(functionName, length);
Iterator<String> iterator = outputBindings.iterator();
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
for (Object o : result) {
String next = iterator.next();
kafkaStreamsBindableProxyFactory.addOutputBinding(next, KStream.class);
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> kafkaStreamsBindableProxyFactory.getOutputHolders().get(next).getBoundTarget());
registry.registerBeanDefinition(next, rootBeanDefinition1);
Object targetBean = this.applicationContext.getBean(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) o);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(
targetBean, outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(next, streamsBuilderFactoryBean);
}
}
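// Editorial sketch (hypothetical function, not from this diff): the deferred-binding path above
// serves array-returning (branching) functions such as:
//
//   @Bean
//   public Function<KStream<String, String>, KStream<String, String>[]> branch() {
//       return input -> input.branch(
//               (key, value) -> value.startsWith("a"),
//               (key, value) -> true);
//   }
//
// Each element of the returned array is wrapped and registered against the next output binding
// by the loop above.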
private List<String> getOutputBindings(String functionName, int outputs) {
List<String> outputBindings = this.streamFunctionProperties.getOutputBindings(functionName);
List<String> outputBindingNames = new ArrayList<>();
if (!CollectionUtils.isEmpty(outputBindings)) {
outputBindingNames.addAll(outputBindings);
return outputBindingNames;
}
else {
for (int i = 0; i < outputs; i++) {
outputBindingNames.add(String.format("%s-%s-%d", functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX, i));
}
}
return outputBindingNames;
}
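// Editorial note (assumption): with no explicit outputBindings configured, the generated names
// follow the "<functionName>-<DEFAULT_OUTPUT_SUFFIX>-<index>" pattern built above, e.g.
// "process-out-0", "process-out-1" for a two-output function named "process".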
@SuppressWarnings({"unchecked"})
private Object[] adaptAndRetrieveInboundArguments(Map<String, ResolvableType> stringResolvableTypeMap,
String functionName) {
@@ -237,60 +491,52 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
Class<?> parameterType = stringResolvableTypeMap.get(input).getRawClass();
if (input != null) {
Assert.isInstanceOf(String.class, input, "Annotation value must be a String");
Object targetBean = applicationContext.getBean(input);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(input);
enableNativeDecodingForKTableAlways(parameterType, bindingProperties);
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(functionName)) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext,
input, kafkaStreamsBinderConfigurationProperties, customizer, this.environment, bindingProperties);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderFactoryBean);
}
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean =
this.methodStreamsBuilderFactoryBeanMap.get(functionName);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties =
this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(input);
extendedConsumerProperties.setApplicationId(applicationId);
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties, stringResolvableTypeMap.get(input));
LOG.info("Key Serde used for " + input + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(input).isUseNativeDecoding() ?
getValueSerde(input, extendedConsumerProperties, stringResolvableTypeMap.get(input)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + input + ": " + valueSerde.getClass().getName());
final Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(input, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getKStream(input, bindingProperties, extendedConsumerProperties,
streamsBuilder, keySerde, valueSerde, autoOffsetReset, i == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper =
(KStreamBoundElementFactory.KStreamWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
final Class<?> valueClass =
(stringResolvableTypeMap.get(input).getGeneric(1).getRawClass() != null)
? (stringResolvableTypeMap.get(input).getGeneric(1).getRawClass()) : Object.class;
if (this.kafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(
(KStream<?, ?>) kStreamWrapper)) {
arguments[i] = stream;
}
else {
@@ -302,31 +548,11 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
if (arguments[i] == null) {
arguments[i] = stream;
}
Assert.notNull(arguments[i], "problems..");
Assert.notNull(arguments[i], "Problems encountered while adapting the function argument.");
}
else {
handleKTableGlobalKTableInputs(arguments, i, input, parameterType, targetBean, streamsBuilderFactoryBean,
streamsBuilder, extendedConsumerProperties, keySerde, valueSerde, autoOffsetReset, i == 0);
}
i++;
}
@@ -335,172 +561,14 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
}
}
else {
//throw new IllegalStateException(StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;
}
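// Editorial sketch (hypothetical domain types, not from this diff): a two-argument function whose
// inbound arguments are adapted by the method above could be declared as:
//
//   @Bean
//   public BiFunction<KStream<String, Order>, KTable<String, Customer>, KStream<String, EnrichedOrder>> enrich() {
//       return (orders, customers) -> orders.join(customers, EnrichedOrder::new);
//   }
//
// Index 0 is adapted to a KStream and index 1 to a KTable, with key/value Serdes resolved per
// binding as shown above.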
private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAsGlobalKTable(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset) :
streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde,
String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAs(streamsBuilder, bindingDestination, materializedAs, keySerde, valueSerde,
autoOffsetReset) :
streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination,
String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(StreamsBuilder streamsBuilder,
String destination, String storeName,
Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(String storeName,
Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k)
.withValueSerde(v);
}
private KStream<?, ?> getkStream(String inboundName,
BindingProperties bindingProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Pass through.
}
String[] bindingTargets = StringUtils
.commaDelimitedListToStringArray(this.bindingServiceProperties.getBindingDestination(inboundName));
KStream<?, ?> stream =
streamsBuilder.stream(Arrays.asList(bindingTargets),
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
final boolean nativeDecoding = this.bindingServiceProperties.getConsumerProperties(inboundName)
.isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName + ". " +
"Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName + ". " +
"Inbound message conversion done by Spring Cloud Stream.");
}
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType, BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class) || parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
//No framework level message conversion provided for KTable/GlobalKTable, it's done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
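// Editorial note (assumption): KStream inputs, by contrast, honor the standard consumer property,
// e.g. spring.cloud.stream.bindings.<binding-name>.consumer.useNativeDecoding=false, in which
// case inbound conversion is performed by Spring Cloud Stream message converters instead
// (see getkStream above).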
@SuppressWarnings({"unchecked"})
private void buildStreamsBuilderAndRetrieveConfig(String functionName, ApplicationContext applicationContext,
String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext.getBean("streamConfigGlobalProperties",
Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
//override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName).getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext.getBean(
"kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration =
new KafkaStreamsConfiguration(streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS, kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration, this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition =
BeanDefinitionBuilder.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(), () -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("stream-builder-" +
functionName, streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean("&stream-builder-" +
functionName, StreamsBuilderFactoryBean.class);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderX);
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
}
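For orientation, the following is a minimal functional-style application of the kind this processor wires up; an illustrative sketch using the standard java.util.function programming model, not an excerpt from this diff:

import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseApplication {

public static void main(String[] args) {
SpringApplication.run(UppercaseApplication.class, args);
}

// Bound as "process-in-0" (inbound KStream) and "process-out-0" (returned KStream).
@Bean
public Function<KStream<String, String>, KStream<String, String>> process() {
return input -> input.mapValues(v -> v.toUpperCase());
}
}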


@@ -0,0 +1,57 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.io.IOException;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.cloud.stream.binder.kafka.properties.JaasLoginModuleConfiguration;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.security.jaas.KafkaJaasLoginModuleInitializer;
/**
* Jaas configuration bean for Kafka Streams binder types.
*
* @author Soby Chacko
* @since 3.1.4
*/
@Configuration
public class KafkaStreamsJaasConfiguration {
@Bean
@ConditionalOnMissingBean(KafkaJaasLoginModuleInitializer.class)
public KafkaJaasLoginModuleInitializer jaasInitializer(
KafkaBinderConfigurationProperties configurationProperties)
throws IOException {
KafkaJaasLoginModuleInitializer kafkaJaasLoginModuleInitializer = new KafkaJaasLoginModuleInitializer();
JaasLoginModuleConfiguration jaas = configurationProperties.getJaas();
if (jaas != null) {
kafkaJaasLoginModuleInitializer.setLoginModule(jaas.getLoginModule());
KafkaJaasLoginModuleInitializer.ControlFlag controlFlag = jaas
.getControlFlag();
if (controlFlag != null) {
kafkaJaasLoginModuleInitializer.setControlFlag(controlFlag);
}
kafkaJaasLoginModuleInitializer.setOptions(jaas.getOptions());
}
return kafkaJaasLoginModuleInitializer;
}
}
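For context, the jaasInitializer bean above reads its settings from the binder configuration properties. An illustrative sketch of how they might be supplied (property names assumed from JaasLoginModuleConfiguration, values are placeholders; the prefix may be spring.cloud.stream.kafka.binder or spring.cloud.stream.kafka.streams.binder depending on the binder in use):

spring.cloud.stream.kafka.streams.binder.jaas.loginModule=com.sun.security.auth.module.Krb5LoginModule
spring.cloud.stream.kafka.streams.binder.jaas.controlFlag=REQUIRED
spring.cloud.stream.kafka.streams.binder.jaas.options.useKeyTab=true
spring.cloud.stream.kafka.streams.binder.jaas.options.keyTab=/etc/security/kafka.keytab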


@@ -22,18 +22,21 @@ import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
@@ -55,7 +58,7 @@ public class KafkaStreamsMessageConversionDelegate {
private static final ThreadLocal<KeyValue<Object, Object>> keyValueThreadLocal = new ThreadLocal<>();
private final CompositeMessageConverter compositeMessageConverter;
private final SendToDlqAndContinue sendToDlqAndContinue;
@@ -63,12 +66,14 @@ public class KafkaStreamsMessageConversionDelegate {
private final KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties;
Exception[] failedWithDeserException = new Exception[1];
KafkaStreamsMessageConversionDelegate(
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
this.compositeMessageConverter = compositeMessageConverter;
this.sendToDlqAndContinue = sendToDlqAndContinue;
this.kstreamBindingInformationCatalogue = kstreamBindingInformationCatalogue;
this.kstreamBinderConfigurationProperties = kstreamBinderConfigurationProperties;
@@ -83,22 +88,23 @@ public class KafkaStreamsMessageConversionDelegate {
public KStream serializeOnOutbound(KStream<?, ?> outboundBindTarget) {
String contentType = this.kstreamBindingInformationCatalogue
.getContentType(outboundBindTarget);
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget
.filter((k, v) -> v != null)
.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});
kStreamWithEnrichedHeaders.process(() -> new Processor() {
@@ -146,8 +152,7 @@ public class KafkaStreamsMessageConversionDelegate {
@SuppressWarnings({ "unchecked", "rawtypes" })
public KStream deserializeOnInbound(Class<?> valueClass,
KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
resolvePerRecordContentType(bindingTarget, perRecordContentTypeHolder);
@@ -200,6 +205,7 @@ public class KafkaStreamsMessageConversionDelegate {
"Deserialization has failed. This will be skipped from further processing.",
e);
// pass through
failedWithDeserException[0] = e;
}
return isValidRecord;
},
@@ -207,7 +213,7 @@ public class KafkaStreamsMessageConversionDelegate {
// in the first filter above.
(k, v) -> true);
// process errors from the second filter in the branch above.
processErrorFromDeserialization(bindingTarget, branch[1], failedWithDeserException);
// first branch above is the branch where the messages are converted, let it go
// through further processing.
@@ -264,7 +270,7 @@ public class KafkaStreamsMessageConversionDelegate {
@SuppressWarnings({ "unchecked", "rawtypes" })
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget,
KStream<?, ?> branch, Exception[] exception) {
branch.process(() -> new Processor() {
ProcessorContext context;
@@ -279,18 +285,25 @@ public class KafkaStreamsMessageConversionDelegate {
if (o2 != null) {
if (KafkaStreamsMessageConversionDelegate.this.kstreamBindingInformationCatalogue
.isDlqEnabled(bindingTarget)) {
if (o2 instanceof Message) {
Message message = (Message) o2;
// We need to convert the key to a byte[] before sending to DLQ.
Serde keySerde = kstreamBindingInformationCatalogue.getKeySerde(bindingTarget);
Serializer keySerializer = keySerde.serializer();
byte[] keyBytes = keySerializer.serialize(null, o);
ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
keyBytes, message.getPayload());
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(consumerRecord, exception[0]);
}
else {
ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
o, o2);
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(consumerRecord, exception[0]);
}
}
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties


@@ -16,31 +16,78 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* An internal registry for holding {@link KafkaStreams} objects maintained through
* {@link StreamsBuilderFactoryManager}.
*
* @author Soby Chacko
*/
public class KafkaStreamsRegistry {
private final Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new ConcurrentHashMap<>();
private final Set<KafkaStreams> kafkaStreams = ConcurrentHashMap.newKeySet();
Set<KafkaStreams> getKafkaStreams() {
Set<KafkaStreams> currentlyRunningKafkaStreams = new HashSet<>();
for (KafkaStreams ks : this.kafkaStreams) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = streamsBuilderFactoryBeanMap.get(ks);
if (streamsBuilderFactoryBean.isRunning()) {
currentlyRunningKafkaStreams.add(ks);
}
}
return currentlyRunningKafkaStreams;
}
/**
* Register the {@link KafkaStreams} object created in the application.
* @param streamsBuilderFactoryBean {@link StreamsBuilderFactoryBean}
*/
void registerKafkaStreams(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
this.kafkaStreams.add(kafkaStreams);
this.streamsBuilderFactoryBeanMap.put(kafkaStreams, streamsBuilderFactoryBean);
}
void unregisterKafkaStreams(KafkaStreams kafkaStreams) {
this.kafkaStreams.remove(kafkaStreams);
this.streamsBuilderFactoryBeanMap.remove(kafkaStreams);
}
/**
*
* @param kafkaStreams {@link KafkaStreams} object
* @return Corresponding {@link StreamsBuilderFactoryBean}.
*/
StreamsBuilderFactoryBean streamBuilderFactoryBean(KafkaStreams kafkaStreams) {
return this.streamsBuilderFactoryBeanMap.get(kafkaStreams);
}
public StreamsBuilderFactoryBean streamsBuilderFactoryBean(String applicationId) {
final Optional<StreamsBuilderFactoryBean> first = this.streamsBuilderFactoryBeanMap.values()
.stream()
.filter(streamsBuilderFactoryBean -> streamsBuilderFactoryBean.isRunning() && streamsBuilderFactoryBean
.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG)
.equals(applicationId))
.findFirst();
return first.orElse(null);
}
public List<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans() {
return new ArrayList<>(this.streamsBuilderFactoryBeanMap.values());
}
}
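For orientation, a hedged usage sketch of the now-public registry API (the injected registry and the application id are hypothetical):

// 'registry' is an injected KafkaStreamsRegistry; "my-app-id" is a placeholder application.id.
StreamsBuilderFactoryBean sbfb = registry.streamsBuilderFactoryBean("my-app-id");
if (sbfb != null && sbfb.isRunning()) {
KafkaStreams streams = sbfb.getKafkaStreams();
// e.g. inspect streams.state() or query a state store on the running topology
}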


@@ -1,687 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.binding.StreamListenerParameterAdapter;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.binding.StreamListenerSetupMethodOrchestrator;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.ReflectionUtils;
import org.springframework.util.StringUtils;
/**
* Kafka Streams specific implementation for {@link StreamListenerSetupMethodOrchestrator}
* that overrides the default mechanisms for invoking StreamListener adapters.
* <p>
* The orchestration primarily focuses on the following areas:
* <p>
* 1. Allow multiple KStream output bindings (KStream branching) by allowing more than one
* output values on {@link SendTo} 2. Allow multiple inbound bindings for multiple KStream
and/or KTable/GlobalKTable types. 3. Each StreamListener method that it orchestrates
* gets its own {@link StreamsBuilderFactoryBean} and {@link StreamsConfig}
*
* @author Soby Chacko
* @author Lei Chen
* @author Gary Russell
*/
class KafkaStreamsStreamListenerSetupMethodOrchestrator
implements StreamListenerSetupMethodOrchestrator, ApplicationContextAware {
private static final Log LOG = LogFactory
.getLog(KafkaStreamsStreamListenerSetupMethodOrchestrator.class);
private final StreamListenerParameterAdapter streamListenerParameterAdapter;
private final Collection<StreamListenerResultAdapter> streamListenerResultAdapters;
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
private final Map<Method, List<String>> registeredStoresPerMethod = new HashMap<>();
private final CleanupConfig cleanupConfig;
private ConfigurableApplicationContext applicationContext;
KafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties extendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue bindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> listenerResultAdapters,
CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = extendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = bindingInformationCatalogue;
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = listenerResultAdapters;
this.cleanupConfig = cleanupConfig;
}
@Override
public boolean supports(Method method) {
return methodParameterSupports(method) && (methodReturnTypeSuppports(method)
|| Void.TYPE.equals(method.getReturnType()));
}
private boolean methodReturnTypeSuppports(Method method) {
Class<?> returnType = method.getReturnType();
if (returnType.equals(KStream.class) || (returnType.isArray()
&& returnType.getComponentType().equals(KStream.class))) {
return true;
}
return false;
}
private boolean methodParameterSupports(Method method) {
boolean supports = false;
for (int i = 0; i < method.getParameterCount(); i++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method, i);
Class<?> parameterType = methodParameter.getParameterType();
if (parameterType.equals(KStream.class) || parameterType.equals(KTable.class)
|| parameterType.equals(GlobalKTable.class)) {
supports = true;
}
}
return supports;
}
@Override
@SuppressWarnings({"rawtypes", "unchecked"})
public void orchestrateStreamListenerSetupMethod(StreamListener streamListener,
Method method, Object bean) {
String[] methodAnnotatedOutboundNames = getOutboundBindingTargetNames(method);
validateStreamListenerMethod(streamListener, method,
methodAnnotatedOutboundNames);
String methodAnnotatedInboundName = streamListener.value();
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(method,
methodAnnotatedInboundName, this.applicationContext,
this.streamListenerParameterAdapter);
try {
ReflectionUtils.makeAccessible(method);
if (Void.TYPE.equals(method.getReturnType())) {
method.invoke(bean, adaptedInboundArguments);
}
else {
Object result = method.invoke(bean, adaptedInboundArguments);
if (result.getClass().isArray()) {
Assert.isTrue(
methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
if (result.getClass().isArray()) {
Object[] outboundKStreams = (Object[]) result;
int i = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[i++]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(
outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream,
targetBean);
break;
}
}
}
}
else {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[0]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(result.getClass(),
targetBean.getClass())) {
streamListenerResultAdapter.adapt(result, targetBean);
break;
}
}
}
}
}
catch (Exception ex) {
throw new BeanInitializationException(
"Cannot setup StreamListener for " + method, ex);
}
}
@Override
@SuppressWarnings({"unchecked"})
public Object[] adaptAndRetrieveInboundArguments(Method method, String inboundName,
ApplicationContext applicationContext,
StreamListenerParameterAdapter... adapters) {
Object[] arguments = new Object[method.getParameterTypes().length];
for (int parameterIndex = 0; parameterIndex < arguments.length; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Class<?> parameterType = methodParameter.getParameterType();
Object targetReferenceValue = null;
if (methodParameter.hasParameterAnnotation(Input.class)) {
targetReferenceValue = AnnotationUtils
.getValue(methodParameter.getParameterAnnotation(Input.class));
Input methodAnnotation = methodParameter
.getParameterAnnotation(Input.class);
inboundName = methodAnnotation.value();
}
else if (arguments.length == 1 && StringUtils.hasText(inboundName)) {
targetReferenceValue = inboundName;
}
if (targetReferenceValue != null) {
Assert.isInstanceOf(String.class, targetReferenceValue,
"Annotation value must be a String");
Object targetBean = applicationContext
.getBean((String) targetReferenceValue);
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
enableNativeDecodingForKTableAlways(parameterType, bindingProperties);
// Retrieve the StreamsConfig created for this method if available.
// Otherwise, create the StreamsBuilderFactory and get the underlying
// config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
buildStreamsBuilderAndRetrieveConfig(method, applicationContext,
inboundName);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap
.get(method);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
// get state store spec
KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);
Serde<?> keySerde = this.keyValueSerdeResolver
.getInboundKeySerde(extendedConsumerProperties);
Serde<?> valueSerde = this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), extendedConsumerProperties);
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties
.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
LOG.warn("Detected resetOffsets configured on binding "
+ inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(inboundName, spec,
bindingProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {
arguments[parameterIndex] = streamListenerParameterAdapter
.adapt(kStreamWrapper, methodParameter);
break;
}
}
if (arguments[parameterIndex] == null
&& parameterType.isAssignableFrom(stream.getClass())) {
arguments[parameterIndex] = stream;
}
Assert.notNull(arguments[parameterIndex],
"Cannot convert argument " + parameterIndex + " of "
+ method + "from " + stream.getClass() + " to "
+ parameterType);
}
else if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties
.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties
.getBindingDestination(inboundName);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde,
valueSerde, materializedAs, bindingDestination,
autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper = (KTableBoundElementFactory.KTableWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties
.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties
.getBindingDestination(inboundName);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder,
keySerde, valueSerde, materializedAs, bindingDestination,
autoOffsetReset);
// @checkstyle:off
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper = (GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
// @checkstyle:on
// wrap the proxy created during the initial target type binding
// with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
}
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
}
else {
throw new IllegalStateException(
StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;
}
private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder,
String destination, String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
private StoreBuilder buildStateStore(KafkaStreamsStateStoreProperties spec) {
try {
Serde<?> keySerde = this.keyValueSerdeResolver
.getStateStoreKeySerde(spec.getKeySerdeString());
Serde<?> valueSerde = this.keyValueSerdeResolver
.getStateStoreValueSerde(spec.getValueSerdeString());
StoreBuilder builder;
switch (spec.getType()) {
case KEYVALUE:
builder = Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore(spec.getName()), keySerde,
valueSerde);
break;
case WINDOW:
builder = Stores
.windowStoreBuilder(
Stores.persistentWindowStore(spec.getName(),
spec.getRetention(), 3, spec.getLength(), false),
keySerde, valueSerde);
break;
case SESSION:
builder = Stores.sessionStoreBuilder(Stores.persistentSessionStore(
spec.getName(), spec.getRetention()), keySerde, valueSerde);
break;
default:
throw new UnsupportedOperationException(
"state store type (" + spec.getType() + ") is not supported!");
}
if (spec.isCacheEnabled()) {
builder = builder.withCachingEnabled();
}
if (spec.isLoggingDisabled()) {
builder = builder.withLoggingDisabled();
}
return builder;
}
catch (Exception ex) {
LOG.error("failed to build state store exception : " + ex);
throw ex;
}
}
private KStream<?, ?> getkStream(String inboundName,
KafkaStreamsStateStoreProperties storeSpec,
BindingProperties bindingProperties, StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde,
Topology.AutoOffsetReset autoOffsetReset) {
if (storeSpec != null) {
StoreBuilder storeBuilder = buildStateStore(storeSpec);
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
}
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
this.bindingServiceProperties.getBindingDestination(inboundName));
KStream<?, ?> stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
final boolean nativeDecoding = this.bindingServiceProperties
.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName
+ ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType,
BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class)
|| parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
// No framework level message conversion provided for KTable/GlobalKTable, it's
// done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
@SuppressWarnings({"unchecked"})
private void buildStreamsBuilderAndRetrieveConfig(Method method,
ApplicationContext applicationContext, String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties
.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
applicationId);
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext
.getBean("kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(
streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS,
kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(),
() -> streamsBuilder)
.getRawBeanDefinition();
final String beanNamePostFix = method.getDeclaringClass().getSimpleName() + "-" + method.getName();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + beanNamePostFix, streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean(
"&stream-builder-" + beanNamePostFix, StreamsBuilderFactoryBean.class);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderX);
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
private void validateStreamListenerMethod(StreamListener streamListener,
Method method, String[] methodAnnotatedOutboundNames) {
String methodAnnotatedInboundName = streamListener.value();
if (methodAnnotatedOutboundNames != null) {
for (String s : methodAnnotatedOutboundNames) {
if (StringUtils.hasText(s)) {
Assert.isTrue(isDeclarativeOutput(method, s),
"Method must be declarative");
}
}
}
if (StringUtils.hasText(methodAnnotatedInboundName)) {
int methodArgumentsLength = method.getParameterTypes().length;
for (int parameterIndex = 0; parameterIndex < methodArgumentsLength; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Assert.isTrue(
isDeclarativeInput(methodAnnotatedInboundName, methodParameter),
"Method must be declarative");
}
}
}
@SuppressWarnings("unchecked")
private boolean isDeclarativeOutput(Method m, String targetBeanName) {
boolean declarative;
Class<?> returnType = m.getReturnType();
if (returnType.isArray()) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
declarative = this.streamListenerResultAdapters.stream()
.anyMatch((slpa) -> slpa.supports(returnType.getComponentType(),
targetBeanClass));
return declarative;
}
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
declarative = this.streamListenerResultAdapters.stream()
.anyMatch((slpa) -> slpa.supports(returnType, targetBeanClass));
return declarative;
}
@SuppressWarnings("unchecked")
private boolean isDeclarativeInput(String targetBeanName,
MethodParameter methodParameter) {
if (!methodParameter.getParameterType().isAssignableFrom(Object.class)
&& this.applicationContext.containsBean(targetBeanName)) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
if (targetBeanClass != null) {
boolean supports = KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
if (!supports) {
supports = KTable.class.isAssignableFrom(targetBeanClass)
&& KTable.class.isAssignableFrom(methodParameter.getParameterType());
if (!supports) {
supports = GlobalKTable.class.isAssignableFrom(targetBeanClass)
&& GlobalKTable.class.isAssignableFrom(methodParameter.getParameterType());
}
}
return supports;
}
}
return false;
}
private static String[] getOutboundBindingTargetNames(Method method) {
SendTo sendTo = AnnotationUtils.findAnnotation(method, SendTo.class);
if (sendTo != null) {
Assert.isTrue(!ObjectUtils.isEmpty(sendTo.value()),
StreamListenerErrorMessages.ATLEAST_ONE_OUTPUT);
Assert.isTrue(sendTo.value().length >= 1,
"At least one outbound destination need to be provided.");
return sendTo.value();
}
return null;
}
@SuppressWarnings({"unchecked"})
private KafkaStreamsStateStoreProperties buildStateStoreSpec(Method method) {
if (!this.registeredStoresPerMethod.containsKey(method)) {
KafkaStreamsStateStore spec = AnnotationUtils.findAnnotation(method,
KafkaStreamsStateStore.class);
if (spec != null) {
Assert.isTrue(!ObjectUtils.isEmpty(spec.name()), "name cannot be empty");
Assert.isTrue(spec.name().length() >= 1, "name cannot be empty.");
this.registeredStoresPerMethod.put(method, new ArrayList<>());
this.registeredStoresPerMethod.get(method).add(spec.name());
KafkaStreamsStateStoreProperties props = new KafkaStreamsStateStoreProperties();
props.setName(spec.name());
props.setType(spec.type());
props.setLength(spec.lengthMs());
props.setKeySerdeString(spec.keySerde());
props.setRetention(spec.retentionMs());
props.setValueSerdeString(spec.valueSerde());
props.setCacheEnabled(spec.cache());
props.setLoggingDisabled(!spec.logging());
return props;
}
}
return null;
}
}
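
A quick note on the configuration merge above: binding-level applicationId and concurrency override the global stream configuration before the StreamsBuilderFactoryBean is built. A minimal sketch of that rule, covering only the two properties handled above (the class and method names below are illustrative, not part of the binder):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.streams.StreamsConfig;

public class BindingOverridesSketch {

	// Mirrors buildStreamsBuilderAndRetrieveConfig: a binding-level application id
	// and a concurrency greater than 1 win over the global stream configuration.
	static Map<String, Object> applyBindingOverrides(Map<String, Object> global,
			String applicationId, int concurrency) {
		Map<String, Object> merged = new HashMap<>(global);
		if (applicationId != null && !applicationId.isEmpty()) {
			merged.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
		}
		if (concurrency > 1) {
			merged.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, concurrency);
		}
		return merged;
	}
}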

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,17 +16,35 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.util.ClassUtils;
import org.springframework.util.StringUtils;
/**
@@ -52,13 +70,18 @@ import org.springframework.util.StringUtils;
*
* @author Soby Chacko
* @author Lei Chen
* @author Eduard Domínguez
*/
public class KeyValueSerdeResolver {
public class KeyValueSerdeResolver implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(KeyValueSerdeResolver.class);
private final Map<String, Object> streamConfigGlobalProperties;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private ConfigurableApplicationContext context;
KeyValueSerdeResolver(Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.streamConfigGlobalProperties = streamConfigGlobalProperties;
@@ -75,7 +98,14 @@ public class KeyValueSerdeResolver {
KafkaStreamsConsumerProperties extendedConsumerProperties) {
String keySerdeString = extendedConsumerProperties.getKeySerde();
return getKeySerde(keySerdeString);
return getKeySerde(keySerdeString, extendedConsumerProperties.getConfiguration());
}
public Serde<?> getInboundKeySerde(
KafkaStreamsConsumerProperties extendedConsumerProperties, ResolvableType resolvableType) {
String keySerdeString = extendedConsumerProperties.getKeySerde();
return getKeySerde(keySerdeString, resolvableType, extendedConsumerProperties.getConfiguration());
}
/**
@@ -92,12 +122,31 @@ public class KeyValueSerdeResolver {
String valueSerdeString = extendedConsumerProperties.getValueSerde();
try {
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
valueSerde = getValueSerde(valueSerdeString);
valueSerde = getValueSerde(valueSerdeString, extendedConsumerProperties.getConfiguration());
}
else {
valueSerde = Serdes.ByteArray();
}
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
public Serde<?> getInboundValueSerde(ConsumerProperties consumerProperties,
KafkaStreamsConsumerProperties extendedConsumerProperties,
ResolvableType resolvableType) {
Serde<?> valueSerde;
String valueSerdeString = extendedConsumerProperties.getValueSerde();
try {
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
valueSerde = getValueSerde(valueSerdeString, resolvableType, extendedConsumerProperties.getConfiguration());
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -111,9 +160,14 @@ public class KeyValueSerdeResolver {
* @return configured {@link Serde} for the outbound key.
*/
public Serde<?> getOuboundKeySerde(KafkaStreamsProducerProperties properties) {
return getKeySerde(properties.getKeySerde());
return getKeySerde(properties.getKeySerde(), properties.getConfiguration());
}
public Serde<?> getOuboundKeySerde(KafkaStreamsProducerProperties properties, ResolvableType resolvableType) {
return getKeySerde(properties.getKeySerde(), resolvableType, properties.getConfiguration());
}
/**
* Provide the {@link Serde} for outbound value.
* @param producerProperties {@link ProducerProperties} on binding
@@ -127,12 +181,29 @@ public class KeyValueSerdeResolver {
try {
if (producerProperties.isUseNativeEncoding()) {
valueSerde = getValueSerde(
kafkaStreamsProducerProperties.getValueSerde());
kafkaStreamsProducerProperties.getValueSerde(), kafkaStreamsProducerProperties.getConfiguration());
}
else {
valueSerde = Serdes.ByteArray();
}
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
public Serde<?> getOutboundValueSerde(ProducerProperties producerProperties,
KafkaStreamsProducerProperties kafkaStreamsProducerProperties, ResolvableType resolvableType) {
Serde<?> valueSerde;
try {
if (producerProperties.isUseNativeEncoding()) {
valueSerde = getValueSerde(
kafkaStreamsProducerProperties.getValueSerde(), resolvableType, kafkaStreamsProducerProperties.getConfiguration());
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -146,7 +217,7 @@ public class KeyValueSerdeResolver {
* @return {@link Serde} for the state store key.
*/
public Serde<?> getStateStoreKeySerde(String keySerdeString) {
return getKeySerde(keySerdeString);
return getKeySerde(keySerdeString, (Map<String, ?>) null);
}
/**
@@ -156,29 +227,23 @@ public class KeyValueSerdeResolver {
*/
public Serde<?> getStateStoreValueSerde(String valueSerdeString) {
try {
return getValueSerde(valueSerdeString);
return getValueSerde(valueSerdeString, (Map<String, ?>) null);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
}
private Serde<?> getKeySerde(String keySerdeString) {
private Serde<?> getKeySerde(String keySerdeString, Map<String, ?> extendedConfiguration) {
Serde<?> keySerde;
try {
if (StringUtils.hasText(keySerdeString)) {
keySerde = Utils.newInstance(keySerdeString, Serde.class);
}
else {
keySerde = this.binderConfigurationProperties.getConfiguration()
.containsKey("default.key.serde")
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get("default.key.serde"),
Serde.class)
: Serdes.ByteArray();
keySerde = getFallbackSerde("default.key.serde");
}
keySerde.configure(this.streamConfigGlobalProperties, true);
keySerde.configure(combineStreamConfigProperties(extendedConfiguration), true);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -186,21 +251,195 @@ public class KeyValueSerdeResolver {
return keySerde;
}
private Serde<?> getValueSerde(String valueSerdeString)
private Serde<?> getKeySerde(String keySerdeString, ResolvableType resolvableType, Map<String, ?> extendedConfiguration) {
Serde<?> keySerde = null;
try {
if (StringUtils.hasText(keySerdeString)) {
keySerde = Utils.newInstance(keySerdeString, Serde.class);
}
else {
if (resolvableType != null &&
(isResolvableKafkaStreamsType(resolvableType) || isResolvableKStreamArrayType(resolvableType))) {
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(0) : resolvableType.getGeneric(0);
Serde<?> fallbackSerde = getFallbackSerde("default.key.serde");
keySerde = getSerde(generic, fallbackSerde);
}
if (keySerde == null) {
keySerde = Serdes.ByteArray();
}
}
keySerde.configure(combineStreamConfigProperties(extendedConfiguration), true);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return keySerde;
}
private boolean isResolvableKStreamArrayType(ResolvableType resolvableType) {
return resolvableType.isArray() &&
KStream.class.isAssignableFrom(resolvableType.getComponentType().getRawClass());
}
private boolean isResolvableKafkaStreamsType(ResolvableType resolvableType) {
return resolvableType.getRawClass() != null && (KStream.class.isAssignableFrom(resolvableType.getRawClass()) || KTable.class.isAssignableFrom(resolvableType.getRawClass()) ||
GlobalKTable.class.isAssignableFrom(resolvableType.getRawClass()));
}
private Serde<?> getSerde(ResolvableType generic, Serde<?> fallbackSerde) {
Serde<?> serde = null;
Map<String, Serde> beansOfType = context.getBeansOfType(Serde.class);
Serde<?>[] serdeBeans = new Serde<?>[1];
final Class<?> genericRawClazz = generic.getRawClass();
beansOfType.forEach((k, v) -> {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(k))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method[] methods = classObj.getMethods();
Optional<Method> serdeBeanMethod = Arrays.stream(methods).filter(m -> m.getName().equals(k)).findFirst();
if (serdeBeanMethod.isPresent()) {
Method method = serdeBeanMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
ResolvableType serdeBeanGeneric = resolvableType.getGeneric(0);
Class<?> serdeGenericRawClazz = serdeBeanGeneric.getRawClass();
if (serdeGenericRawClazz != null && genericRawClazz != null) {
if (serdeGenericRawClazz.isAssignableFrom(genericRawClazz)) {
serdeBeans[0] = v;
}
}
}
}
catch (Exception e) {
// Pass through...
}
});
if (serdeBeans[0] != null) {
return serdeBeans[0];
}
if (genericRawClazz != null) {
if (Integer.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Integer();
}
else if (Long.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Long();
}
else if (Short.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Short();
}
else if (Double.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Double();
}
else if (Float.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Float();
}
else if (byte[].class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.ByteArray();
}
else if (String.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.String();
}
else if (UUID.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.UUID();
}
else if (!isSerdeFromStandardDefaults(fallbackSerde)) {
// The user purposely set a default serde that is not one of the above.
serde = fallbackSerde;
}
else {
// If the type is Object, then skip assigning the JsonSerde and let the fallback mechanism take precedence.
if (!genericRawClazz.isAssignableFrom(Object.class)) {
serde = new JsonSerde(genericRawClazz);
}
}
}
return serde;
}
private boolean isSerdeFromStandardDefaults(Serde<?> serde) {
if (serde != null) {
if (Number.class.isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.ByteArray().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.String().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.UUID().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
}
return false;
}
private Serde<?> getValueSerde(String valueSerdeString, Map<String, ?> extendedConfiguration)
throws ClassNotFoundException {
Serde<?> valueSerde;
if (StringUtils.hasText(valueSerdeString)) {
valueSerde = Utils.newInstance(valueSerdeString, Serde.class);
}
else {
valueSerde = this.binderConfigurationProperties.getConfiguration()
.containsKey("default.value.serde")
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get("default.value.serde"),
Serde.class)
: Serdes.ByteArray();
valueSerde = getFallbackSerde("default.value.serde");
}
valueSerde.configure(combineStreamConfigProperties(extendedConfiguration), false);
return valueSerde;
}
private Serde<?> getFallbackSerde(String s) throws ClassNotFoundException {
return this.binderConfigurationProperties.getConfiguration()
.containsKey(s)
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get(s),
Serde.class)
: Serdes.ByteArray();
}
@SuppressWarnings("unchecked")
private Serde<?> getValueSerde(String valueSerdeString, ResolvableType resolvableType, Map<String, ?> extendedConfiguration)
throws ClassNotFoundException {
Serde<?> valueSerde = null;
if (StringUtils.hasText(valueSerdeString)) {
valueSerde = Utils.newInstance(valueSerdeString, Serde.class);
}
else {
if (resolvableType != null && ((isResolvalbeKafkaStreamsType(resolvableType)) ||
(isResolvableKStreamArrayType(resolvableType)))) {
Serde<?> fallbackSerde = getFallbackSerde("default.value.serde");
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(1) : resolvableType.getGeneric(1);
valueSerde = getSerde(generic, fallbackSerde);
}
if (valueSerde == null) {
valueSerde = Serdes.ByteArray();
}
}
valueSerde.configure(combineStreamConfigProperties(extendedConfiguration), false);
return valueSerde;
}
@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
context = (ConfigurableApplicationContext) applicationContext;
}
private Map<String, ?> combineStreamConfigProperties(Map<String, ?> extendedConfiguration) {
if (extendedConfiguration != null && !extendedConfiguration.isEmpty()) {
Map<String, Object> streamConfiguration = new HashMap(this.streamConfigGlobalProperties);
streamConfiguration.putAll(extendedConfiguration);
return streamConfiguration;
}
else {
return this.streamConfigGlobalProperties;
}
}
}
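
The resolver above infers a Serde from the binding's generic type when none is configured: matching Serde beans in the application context win first, standard JDK types then map to the built-in Serdes, and any other type except Object falls back to JsonSerde. A standalone sketch of just the type-to-Serde mapping, assuming the bean lookup and default-serde checks have already come up empty (the helper name is illustrative):

import java.util.UUID;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

import org.springframework.kafka.support.serializer.JsonSerde;

public final class SerdeInferenceSketch {

	// Maps a resolved generic type to a Serde, as getSerde(...) does above.
	// Returns null for Object so the caller can apply its own fallback.
	@SuppressWarnings({"unchecked", "rawtypes"})
	static Serde<?> inferSerde(Class<?> type) {
		if (Integer.class.isAssignableFrom(type)) {
			return Serdes.Integer();
		}
		if (Long.class.isAssignableFrom(type)) {
			return Serdes.Long();
		}
		if (Double.class.isAssignableFrom(type)) {
			return Serdes.Double();
		}
		if (String.class.isAssignableFrom(type)) {
			return Serdes.String();
		}
		if (UUID.class.isAssignableFrom(type)) {
			return Serdes.UUID();
		}
		if (byte[].class.isAssignableFrom(type)) {
			return Serdes.ByteArray();
		}
		return type == Object.class ? null : new JsonSerde(type);
	}
}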

View File

@@ -0,0 +1,40 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 3.0.2
*/
@Configuration
public class MultiBinderPropertiesConfiguration {
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
@ConditionalOnBean(name = "outerContext")
public KafkaBinderConfigurationProperties binderConfigurationProperties(KafkaProperties kafkaProperties) {
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
}

View File

@@ -1,66 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreType;
/**
* Registry that contains the {@link QueryableStoreType}s created by user
* applications.
*
* @author Soby Chacko
* @author Renwei Han
* @since 2.0.0
* @deprecated in favor of {@link InteractiveQueryService}
*/
public class QueryableStoreRegistry {
private final KafkaStreamsRegistry kafkaStreamsRegistry;
public QueryableStoreRegistry(KafkaStreamsRegistry kafkaStreamsRegistry) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
/**
* Retrieve and return a queryable store by name created in the application.
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
* @deprecated in favor of
* {@link InteractiveQueryService#getQueryableStore(String, QueryableStoreType)}
*/
public <T> T getQueryableStoreType(String storeName,
QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
}
}
catch (InvalidStateStoreException ignored) {
// pass through
}
}
return null;
}
}
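
With QueryableStoreRegistry removed in favor of InteractiveQueryService, a typical replacement looks like the sketch below; the store name and key/value types are assumptions for illustration:

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

public class CountsQuery {

	private final InteractiveQueryService interactiveQueryService;

	public CountsQuery(InteractiveQueryService interactiveQueryService) {
		this.interactiveQueryService = interactiveQueryService;
	}

	public Long countFor(String key) {
		// getQueryableStore replaces QueryableStoreRegistry#getQueryableStoreType.
		ReadOnlyKeyValueStore<String, Long> store = this.interactiveQueryService
				.getQueryableStore("counts", QueryableStoreTypes.<String, Long>keyValueStore());
		return store.get(key);
	}
}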

View File

@@ -16,109 +16,48 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.internals.ProcessorContextImpl;
import org.apache.kafka.streams.processor.internals.StreamTask;
import org.springframework.util.ReflectionUtils;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
/**
* Custom implementation for {@link DeserializationExceptionHandler} that sends the
* records in error to a DLQ topic, then continue stream processing on new records.
* Custom implementation for {@link ConsumerRecordRecoverer} that keeps a collection of
* recoverer objects per input topic. These topics may correspond to individual input
* bindings or to multiplexed topics within a single binding.
*
* @author Soby Chacko
* @since 2.0.0
*/
public class SendToDlqAndContinue implements DeserializationExceptionHandler {
/**
* Key used for DLQ dispatchers.
*/
public static final String KAFKA_STREAMS_DLQ_DISPATCHERS = "spring.cloud.stream.kafka.streams.dlq.dispatchers";
public class SendToDlqAndContinue implements ConsumerRecordRecoverer {
/**
* DLQ dispatcher per topic in the application context. The key here is not the actual
* DLQ topic but the incoming topic that caused the error.
*/
private Map<String, KafkaStreamsDlqDispatch> dlqDispatchers = new HashMap<>();
private Map<String, DeadLetterPublishingRecoverer> dlqDispatchers = new HashMap<>();
/**
* For a given topic, send the key/value record to DLQ topic.
* @param topic incoming topic that caused the error
* @param key to send
* @param value to send
* @param partition for the topic where this record should be sent
*
* @param consumerRecord consumer record
* @param exception exception
*/
public void sendToDlq(String topic, byte[] key, byte[] value, int partition) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers.get(topic);
kafkaStreamsDlqDispatch.sendToDlq(key, value, partition);
}
@Override
@SuppressWarnings("unchecked")
public DeserializationHandlerResponse handle(ProcessorContext context,
ConsumerRecord<byte[], byte[]> record, Exception exception) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers
.get(record.topic());
kafkaStreamsDlqDispatch.sendToDlq(record.key(), record.value(),
record.partition());
context.commit();
// The following conditional block should be reconsidered when we have a solution
// for this SO problem:
// https://stackoverflow.com/questions/48470899/kafka-streams-deserialization-handler
// Currently, when a deserialization error happens, no commit occurs, so the
// following code uses reflection to get access to the underlying KafkaConsumer.
// This works with Kafka 1.0.0, but since it accesses private fields by name via
// reflection, there is no guarantee it will work in future versions of Kafka;
// it is a temporary fix.
if (context instanceof ProcessorContextImpl) {
ProcessorContextImpl processorContextImpl = (ProcessorContextImpl) context;
Field task = ReflectionUtils.findField(ProcessorContextImpl.class, "task");
ReflectionUtils.makeAccessible(task);
Object taskField = ReflectionUtils.getField(task, processorContextImpl);
if (taskField.getClass().isAssignableFrom(StreamTask.class)) {
StreamTask streamTask = (StreamTask) taskField;
Field consumer = ReflectionUtils.findField(StreamTask.class, "consumer");
ReflectionUtils.makeAccessible(consumer);
Object kafkaConsumerField = ReflectionUtils.getField(consumer,
streamTask);
if (kafkaConsumerField.getClass().isAssignableFrom(KafkaConsumer.class)) {
KafkaConsumer kafkaConsumer = (KafkaConsumer) kafkaConsumerField;
final Map<TopicPartition, OffsetAndMetadata> consumedOffsetsAndMetadata = new HashMap<>();
TopicPartition tp = new TopicPartition(record.topic(),
record.partition());
OffsetAndMetadata oam = new OffsetAndMetadata(record.offset() + 1);
consumedOffsetsAndMetadata.put(tp, oam);
kafkaConsumer.commitSync(consumedOffsetsAndMetadata);
}
}
}
return DeserializationHandlerResponse.CONTINUE;
}
@Override
@SuppressWarnings("unchecked")
public void configure(Map<String, ?> configs) {
this.dlqDispatchers = (Map<String, KafkaStreamsDlqDispatch>) configs
.get(KAFKA_STREAMS_DLQ_DISPATCHERS);
public void sendToDlq(ConsumerRecord<?, ?> consumerRecord, Exception exception) {
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch = this.dlqDispatchers.get(consumerRecord.topic());
kafkaStreamsDlqDispatch.accept(consumerRecord, exception);
}
void addKStreamDlqDispatch(String topic,
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch) {
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch) {
this.dlqDispatchers.put(topic, kafkaStreamsDlqDispatch);
}
@Override
public void accept(ConsumerRecord<?, ?> consumerRecord, Exception e) {
this.dlqDispatchers.get(consumerRecord.topic()).accept(consumerRecord, e);
}
}

View File

@@ -0,0 +1,46 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
/**
*
* {@link DeserializationExceptionHandler} that silently skips
* deserialization exceptions and continues processing.
*
* @author Soby Chacko
* @since 3.1.2
*/
public class SkipAndContinueExceptionHandler implements DeserializationExceptionHandler {
@Override
public DeserializationExceptionHandler.DeserializationHandlerResponse handle(final ProcessorContext context,
final ConsumerRecord<byte[], byte[]> record,
final Exception exception) {
return DeserializationExceptionHandler.DeserializationHandlerResponse.CONTINUE;
}
@Override
public void configure(final Map<String, ?> configs) {
// ignore
}
}
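
One way to wire this handler is through the standard Kafka Streams default.deserialization.exception.handler setting; a minimal sketch against plain StreamsConfig (with the binder, the handler would normally be selected through the binder's deserialization-exception-handler configuration instead, so treat this as an illustration rather than the only route):

import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

import org.springframework.cloud.stream.binder.kafka.streams.SkipAndContinueExceptionHandler;

public class SkipAndContinueConfigSketch {

	static Properties streamsProperties() {
		Properties props = new Properties();
		// Register the handler as the default deserialization exception handler.
		props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
				SkipAndContinueExceptionHandler.class);
		return props;
	}
}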

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,43 +16,62 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;
/**
* Iterate through all {@link StreamsBuilderFactoryBean} in the application context and
* start them. As each one completes starting, register the associated KafkaStreams object
* into {@link QueryableStoreRegistry}.
* into {@link InteractiveQueryService}.
*
* This {@link SmartLifecycle} class ensures that the bean created from it is started very
* late through the bootstrap process by setting the phase value closer to
* Integer.MAX_VALUE. This is to guarantee that the {@link StreamsBuilderFactoryBean} on a
* {@link org.springframework.cloud.stream.annotation.StreamListener} method with multiple
* bindings is only started after all the binding phases have completed successfully.
* function with multiple bindings is only started after all the binding phases have completed successfully.
*
* @author Soby Chacko
*/
class StreamsBuilderFactoryManager implements SmartLifecycle {
public class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics;
private final KafkaStreamsMicrometerListener listener;
private volatile boolean running;
StreamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
private final KafkaProperties kafkaProperties;
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
KafkaStreamsMicrometerListener listener,
KafkaProperties kafkaProperties) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
this.listener = listener;
this.kafkaProperties = kafkaProperties;
}
@Override
public boolean isAutoStartup() {
return true;
return this.kafkaProperties == null || this.kafkaProperties.getStreams().isAutoStartup();
}
@Override
@@ -70,9 +89,26 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(
streamsBuilderFactoryBean.getKafkaStreams());
if (this.listener != null) {
streamsBuilderFactoryBean.addListener(this.listener);
}
// By default, we shut down the client if there is an uncaught exception in the application.
// Users can override this by customizing SBFB. See this issue for more details:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(exception ->
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
// Starting the stream.
final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> bindingServicePropertiesPerSbfb =
this.kafkaStreamsBindingInformationCatalogue.getConsumerPropertiesPerSbfb();
final List<ConsumerProperties> consumerProperties = bindingServicePropertiesPerSbfb.get(streamsBuilderFactoryBean);
final boolean autoStartupDisabledOnAtLeastOneConsumerBinding = consumerProperties.stream().anyMatch(consumerProperties1 -> !consumerProperties1.isAutoStartup());
if (!autoStartupDisabledOnAtLeastOneConsumerBinding) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
}
}
if (this.kafkaStreamsBinderMetrics != null) {
this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
}
this.running = true;
}
@@ -88,9 +124,14 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
int n = 0;
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.removeListener(this.listener);
streamsBuilderFactoryBean.stop();
}
for (ProducerFactory<byte[], byte[]> dlqProducerFactory : this.kafkaStreamsBindingInformationCatalogue.getDlqProducerFactories()) {
((DisposableBean) dlqProducerFactory).destroy();
}
}
catch (Exception ex) {
throw new IllegalStateException(ex);

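As the comment above notes, the binder shuts down the client on uncaught exceptions by default, and applications can override this by customizing the StreamsBuilderFactoryBean. A hedged sketch using Spring Kafka's StreamsBuilderFactoryBeanConfigurer (assumes spring-kafka 2.8 or later; REPLACE_THREAD is just one possible choice):

import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;

@Configuration
public class UncaughtExceptionHandlerConfig {

	@Bean
	public StreamsBuilderFactoryBeanConfigurer uncaughtHandlerConfigurer() {
		// Replace the failed stream thread instead of shutting down the client.
		return factoryBean -> factoryBean.setStreamsUncaughtExceptionHandler(
				exception -> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);
	}
}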
View File

@@ -1,90 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.annotations;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
/**
* Bindable interface for {@link KStream} input and output.
*
* This interface can be used as a bindable interface with
* {@link org.springframework.cloud.stream.annotation.EnableBinding} when both input and
* output types are a single KStream. In other scenarios where multiple types are required,
* other similar bindable interfaces can be created and used. For example, multiple
* KStreams may be required on the outbound (as with KStream branching), or multiple
* input types may be required, either as multiple KStreams or as a combination of
* KStreams and KTables. In those cases, new bindable interfaces compatible with the
* requirements must be created. Here are some examples.
*
* <pre class="code">
* interface KStreamBranchProcessor {
* &#064;Input("input")
* KStream&lt;?, ?&gt; input();
*
* &#064;Output("output-1")
* KStream&lt;?, ?&gt; output1();
*
* &#064;Output("output-2")
* KStream&lt;?, ?&gt; output2();
*
* &#064;Output("output-3")
* KStream&lt;?, ?&gt; output3();
*
* ......
*
* }
*</pre>
*
* <pre class="code">
* interface KStreamKtableProcessor {
* &#064;Input("input-1")
* KStream&lt;?, ?&gt; input1();
*
* &#064;Input("input-2")
* KTable&lt;?, ?&gt; input2();
*
* &#064;Output("output")
* KStream&lt;?, ?&gt; output();
*
* ......
*
* }
*</pre>
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
public interface KafkaStreamsProcessor {
/**
* Input binding.
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input")
KStream<?, ?> input();
/**
* Output binding.
* @return {@link Output} binding for {@link KStream} type.
*/
@Output("output")
KStream<?, ?> output();
}
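
With this bindable interface removed, the functional programming model expresses the same single-input/single-output topology as a plain Function bean; a minimal sketch (the bean name and types are illustrative):

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UppercaseProcessor {

	// Replaces @EnableBinding(KafkaStreamsProcessor.class): the single KStream
	// input and output are bound automatically from the function's generics.
	@Bean
	public Function<KStream<String, String>, KStream<String, String>> process() {
		return input -> input.mapValues(value -> value.toUpperCase());
	}
}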

View File

@@ -1,115 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.annotations;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;
/**
* Interface for Kafka Stream state store.
*
* This interface can be used to inject a state store specification into the KStream
* building process so that the desired store can be built by the StreamsBuilder and
* added to the topology for later use by processors. This is particularly useful when
* combining the streams DSL with the low-level Processor API. In those cases, if a
* writable state store is desired in processors, it needs to be created using this
* annotation. Here is an example.
*
* <pre class="code">
* &#064;StreamListener("input")
* &#064;KafkaStreamsStateStore(name="mystate", type= KafkaStreamsStateStoreProperties.StoreType.WINDOW,
* size=300000)
* public void process(KStream&lt;Object, Product&gt; input) {
* ......
* }
* </pre>
*
* With that, you should be able to read/write this state store in your
* processor/transformer code.
*
* <pre class="code">
* new Processor&lt;Object, Product&gt;() {
* WindowStore&lt;Object, String&gt; state;
* &#064;Override
* public void init(ProcessorContext processorContext) {
* state = (WindowStore)processorContext.getStateStore("mystate");
* ......
* }
* }
* </pre>
*
* @author Lei Chen
*/
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.ANNOTATION_TYPE })
@Retention(RetentionPolicy.RUNTIME)
public @interface KafkaStreamsStateStore {
/**
* Provides name of the state store.
* @return name of state store.
*/
String name() default "";
/**
* State store type.
* @return {@link KafkaStreamsStateStoreProperties.StoreType} of state store.
*/
KafkaStreamsStateStoreProperties.StoreType type() default KafkaStreamsStateStoreProperties.StoreType.KEYVALUE;
/**
* Serde used for key.
* @return key serde of state store.
*/
String keySerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Serde used for value.
* @return value serde of state store.
*/
String valueSerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Length in milliseconds of a windowed store's window.
* @return length in milliseconds of the window (for windowed stores).
*/
long lengthMs() default 0;
/**
* Retention period for windowed store windows.
* @return the maximum period of time in milliseconds to keep each window in this
* store (for windowed stores).
*/
long retentionMs() default 0;
/**
* Whether caching is enabled or not.
* @return whether caching should be enabled on the created store.
*/
boolean cache() default false;
/**
* Whether logging is enabled or not.
* @return whether logging should be enabled on the created store.
*/
boolean logging() default true;
}

View File

@@ -0,0 +1,73 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.endpoint;
import java.util.ArrayList;
import java.util.List;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.endpoint.annotation.Selector;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.util.StringUtils;
/**
* Actuator endpoint for topology description.
*
* @author Soby Chacko
* @since 3.0.4
*/
@Endpoint(id = "kafkastreamstopology")
public class KafkaStreamsTopologyEndpoint {
/**
* Topology not found message.
*/
public static final String NO_TOPOLOGY_FOUND_MSG = "No topology found for the given application ID";
private final KafkaStreamsRegistry kafkaStreamsRegistry;
public KafkaStreamsTopologyEndpoint(KafkaStreamsRegistry kafkaStreamsRegistry) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@ReadOperation
public List<String> kafkaStreamsTopologies() {
final List<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsRegistry.streamsBuilderFactoryBeans();
final StringBuilder topologyDescription = new StringBuilder();
final List<String> descs = new ArrayList<>();
streamsBuilderFactoryBeans.stream()
.forEach(streamsBuilderFactoryBean ->
descs.add(streamsBuilderFactoryBean.getTopology().describe().toString()));
return descs;
}
@ReadOperation
public String kafkaStreamsTopology(@Selector String applicationId) {
if (!StringUtils.isEmpty(applicationId)) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamsBuilderFactoryBean(applicationId);
if (streamsBuilderFactoryBean != null) {
return streamsBuilderFactoryBean.getTopology().describe().toString();
}
else {
return NO_TOPOLOGY_FOUND_MSG;
}
}
return NO_TOPOLOGY_FOUND_MSG;
}
}
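
Once this endpoint is exposed over the web (for example via the standard actuator property management.endpoints.web.exposure.include=kafkastreamstopology), the topology descriptions can be read over HTTP; a small client-side sketch with an assumed host and application id:

import org.springframework.web.client.RestTemplate;

public class TopologyEndpointClientSketch {

	public static void main(String[] args) {
		RestTemplate rest = new RestTemplate();
		// All topologies in the application context.
		String all = rest.getForObject(
				"http://localhost:8080/actuator/kafkastreamstopology", String.class);
		// Topology for a single Kafka Streams application id.
		String one = rest.getForObject(
				"http://localhost:8080/actuator/kafkastreamstopology/{applicationId}",
				String.class, "my-app-id");
		System.out.println(all);
		System.out.println(one);
	}
}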

View File

@@ -0,0 +1,43 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.endpoint;
import org.springframework.boot.actuate.autoconfigure.endpoint.EndpointAutoConfiguration;
import org.springframework.boot.actuate.autoconfigure.endpoint.condition.ConditionalOnAvailableEndpoint;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 3.0.4
*/
@Configuration
@ConditionalOnClass(name = {
"org.springframework.boot.actuate.endpoint.annotation.Endpoint" })
@AutoConfigureAfter({EndpointAutoConfiguration.class, KafkaStreamsBinderSupportAutoConfiguration.class})
public class KafkaStreamsTopologyEndpointAutoConfiguration {
@Bean
@ConditionalOnAvailableEndpoint
public KafkaStreamsTopologyEndpoint topologyEndpoint(KafkaStreamsRegistry kafkaStreamsRegistry) {
return new KafkaStreamsTopologyEndpoint(kafkaStreamsRegistry);
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -17,71 +17,124 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.factory.BeanFactoryUtils;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.util.ClassUtils;
import org.springframework.util.CollectionUtils;
/**
* Custom {@link org.springframework.context.annotation.Condition} that detects the presence
* of java.util.function.Function/Consumer beans. Used for Kafka Streams function support.
*
* @author Soby Chakco
* @author Soby Chacko
* @since 2.2.0
*/
public class FunctionDetectorCondition extends SpringBootCondition {
private static final Log LOG = LogFactory.getLog(FunctionDetectorCondition.class);
@SuppressWarnings({ "unchecked", "rawtypes" })
@Override
public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
if (context != null && context.getBeanFactory() != null) {
Map functionTypes = context.getBeanFactory().getBeansOfType(Function.class);
functionTypes.putAll(context.getBeanFactory().getBeansOfType(Consumer.class));
final Map<String, Object> kstreamFunctions = pruneFunctionBeansForKafkaStreams(functionTypes, context);
if (!kstreamFunctions.isEmpty()) {
return ConditionOutcome.match("Matched. Function/Consumer beans found");
String[] functionTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), Function.class, true, false);
String[] consumerTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), Consumer.class, true, false);
String[] biFunctionTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), BiFunction.class, true, false);
String[] biConsumerTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), BiConsumer.class, true, false);
List<String> functionComponents = new ArrayList<>();
functionComponents.addAll(Arrays.asList(functionTypes));
functionComponents.addAll(Arrays.asList(consumerTypes));
functionComponents.addAll(Arrays.asList(biFunctionTypes));
functionComponents.addAll(Arrays.asList(biConsumerTypes));
List<String> kafkaStreamsFunctions = pruneFunctionBeansForKafkaStreams(functionComponents, context);
if (!CollectionUtils.isEmpty(kafkaStreamsFunctions)) {
return ConditionOutcome.match("Matched. Function/BiFunction/Consumer beans found");
}
else {
return ConditionOutcome.noMatch("No match. No Function/Consumer beans found");
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
}
return ConditionOutcome.noMatch("No match. No Function/Consumer beans found");
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
private static <T> Map<String, T> pruneFunctionBeansForKafkaStreams(Map<String, T> originalFunctionBeans,
private static List<String> pruneFunctionBeansForKafkaStreams(List<String> functionComponents,
ConditionContext context) {
final Map<String, T> prunedMap = new HashMap<>();
final List<String> prunedList = new ArrayList<>();
for (String key : originalFunctionBeans.keySet()) {
for (String key : functionComponents) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedMap.put(key, originalFunctionBeans.get(key));
Method[] methods = classObj.getMethods();
Optional<Method> kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
// check if the bean name is overridden.
if (!kafkaStreamMethod.isPresent()) {
final BeanDefinition beanDefinition = context.getBeanFactory().getBeanDefinition(key);
final String factoryMethodName = beanDefinition.getFactoryMethodName();
kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(factoryMethodName)).findFirst();
}
if (kafkaStreamMethod.isPresent()) {
Method method = kafkaStreamMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedList.add(key);
}
}
else {
//check if it is a @Component bean.
Optional<Method> componentBeanMethod = Arrays.stream(methods).filter(
m -> (m.getName().equals("apply") || m.getName().equals("accept"))
&& isKafkaStreamsTypeFound(m)).findFirst();
if (componentBeanMethod.isPresent()) {
Method method = componentBeanMethod.get();
final ResolvableType resolvableType1 = ResolvableType.forMethodParameter(method, 0);
final Class<?> rawClass = resolvableType1.getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedList.add(key);
}
}
}
}
catch (NoSuchMethodException e) {
//ignore
catch (Exception e) {
LOG.error("Function not found: " + key, e);
}
}
return prunedMap;
return prunedList;
}
private static boolean isKafkaStreamsTypeFound(Method method) {
return KStream.class.isAssignableFrom(method.getParameters()[0].getType()) ||
KTable.class.isAssignableFrom(method.getParameters()[0].getType()) ||
GlobalKTable.class.isAssignableFrom(method.getParameters()[0].getType());
}
}
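
For the condition above to match, at least one Function/BiFunction/Consumer/BiConsumer bean must expose a Kafka Streams type as its first parameterized type; a bean like the following sketch (names and generic types are illustrative) would be detected:

import java.util.function.BiConsumer;

import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DetectedFunctionsSketch {

	// Matched: the first generic parameter is a KStream.
	@Bean
	public BiConsumer<KStream<String, Long>, GlobalKTable<String, String>> enrich() {
		return (stream, table) -> stream.foreach((key, value) -> { });
	}
}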

View File

@@ -0,0 +1,286 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binding.AbstractBindableProxyFactory;
import org.springframework.cloud.stream.binding.BoundTargetHolder;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
/**
* Kafka Streams specific target bindings proxy factory. See {@link AbstractBindableProxyFactory} for more details.
* <p>
* Targets bound by this factory:
* <p>
* {@link KStream}
* {@link KTable}
* {@link GlobalKTable}
* <p>
* This class looks at the Function bean's return signature as {@link ResolvableType} and introspects the individual types,
* binding them on the way.
* <p>
* All types on the {@link ResolvableType} are bound except for KStream[] array types on the outbound, which will be
* deferred for binding at a later stage. The reason is that, in this class, we have no way of knowing
* the actual size of the returned array. That has to wait until the function is invoked and we get a result.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory implements InitializingBean, BeanFactoryAware {
@Autowired
private StreamFunctionProperties streamFunctionProperties;
private ResolvableType[] types;
private Method method;
private final String functionName;
private BeanFactory beanFactory;
public KafkaStreamsBindableProxyFactory(ResolvableType[] types, String functionName, Method method) {
super(types[0].getType().getClass());
this.types = types;
this.functionName = functionName;
this.method = method;
}
@Override
public void afterPropertiesSet() {
populateBindingTargetFactories(beanFactory);
Assert.notEmpty(KafkaStreamsBindableProxyFactory.this.bindingTargetFactories,
"'bindingTargetFactories' cannot be empty");
int resolvableTypeDepthCounter = 0;
boolean isKafkaStreamsType = this.types[0].getRawClass().isAssignableFrom(KStream.class) ||
this.types[0].getRawClass().isAssignableFrom(KTable.class) ||
this.types[0].getRawClass().isAssignableFrom(GlobalKTable.class);
ResolvableType argument = isKafkaStreamsType ? this.types[0] : this.types[0].getGeneric(resolvableTypeDepthCounter++);
List<String> inputBindings = buildInputBindings();
Iterator<String> iterator = inputBindings.iterator();
String next = iterator.next();
bindInput(argument, next);
// Check if it is a component-style bean.
if (method != null) {
final Object bean = beanFactory.getBean(functionName);
if (BiFunction.class.isAssignableFrom(bean.getClass()) || BiConsumer.class.isAssignableFrom(bean.getClass())) {
argument = ResolvableType.forMethodParameter(method, 1);
next = iterator.next();
bindInput(argument, next);
}
}
// Normal functional bean
if (this.types[0].getRawClass() != null &&
(this.types[0].getRawClass().isAssignableFrom(BiFunction.class) ||
this.types[0].getRawClass().isAssignableFrom(BiConsumer.class))) {
argument = this.types[0].getGeneric(resolvableTypeDepthCounter++);
next = iterator.next();
bindInput(argument, next);
}
ResolvableType outboundArgument;
if (method != null) {
outboundArgument = ResolvableType.forMethodReturnType(method);
}
else {
outboundArgument = this.types[0].getGeneric(resolvableTypeDepthCounter);
}
while (isAnotherFunctionOrConsumerFound(outboundArgument)) {
//The function is a curried function. We should introspect the partial function chain hierarchy.
argument = outboundArgument.getGeneric(0);
String next1 = iterator.next();
bindInput(argument, next1);
outboundArgument = outboundArgument.getGeneric(1);
}
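// Illustration (not in the original source): a curried processor such as
// Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>>
// passes through the loop above once, binding the partial function's KTable input
// before the final KStream return type is treated as the outbound argument.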
final int lastTypeIndex = this.types.length - 1;
if (this.types.length > 1 && this.types[lastTypeIndex] != null && this.types[lastTypeIndex].getRawClass() != null) {
if (this.types[lastTypeIndex].getRawClass().isAssignableFrom(Function.class) ||
this.types[lastTypeIndex].getRawClass().isAssignableFrom(Consumer.class)) {
outboundArgument = this.types[lastTypeIndex].getGeneric(1);
}
}
if (outboundArgument != null && outboundArgument.getRawClass() != null && (!outboundArgument.isArray() &&
(outboundArgument.getRawClass().isAssignableFrom(KStream.class) ||
outboundArgument.getRawClass().isAssignableFrom(KTable.class)))) { //Allowing both KStream and KTable on the outbound.
// if the type is array, we need to do a late binding as we don't know the number of
// output bindings at this point in the flow.
List<String> outputBindings = streamFunctionProperties.getOutputBindings(this.functionName);
String outputBinding = null;
if (!CollectionUtils.isEmpty(outputBindings)) {
Iterator<String> outputBindingsIter = outputBindings.iterator();
if (outputBindingsIter.hasNext()) {
outputBinding = outputBindingsIter.next();
}
}
else {
outputBinding = String.format("%s-%s-0", this.functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX);
}
Assert.isTrue(outputBinding != null, "output binding could not be inferred.");
// We will only allow KStream targets on the outbound. If the user provides a KTable,
// we still use the KStreamBinder to send it through the outbound.
// In that case before sending, we do a cast from KTable to KStream.
// See KafkaStreamsFunctionsProcessor#setupFunctionInvokerForKafkaStreams for details.
KafkaStreamsBindableProxyFactory.this.outputHolders.put(outputBinding,
new BoundTargetHolder(getBindingTargetFactory(KStream.class)
.createOutput(outputBinding), true));
String outputBinding1 = outputBinding;
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> outputHolders.get(outputBinding1).getBoundTarget());
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
registry.registerBeanDefinition(outputBinding1, rootBeanDefinition1);
}
}
private boolean isAnotherFunctionOrConsumerFound(ResolvableType arg1) {
return arg1 != null && !arg1.isArray() && arg1.getRawClass() != null &&
(arg1.getRawClass().isAssignableFrom(Function.class) || arg1.getRawClass().isAssignableFrom(Consumer.class));
}
/**
* If the application provides the property spring.cloud.stream.function.inputBindings.functionName,
* that takes precedence. Otherwise, binding names are generated from the function name and the
* default input suffix: functionName-in-0, functionName-in-1, and so on for multiple inputs.
*
* @return an ordered collection of input bindings to use
*/
private List<String> buildInputBindings() {
List<String> inputs = new ArrayList<>();
List<String> inputBindings = streamFunctionProperties.getInputBindings(this.functionName);
if (!CollectionUtils.isEmpty(inputBindings)) {
inputs.addAll(inputBindings);
return inputs;
}
int numberOfInputs = this.types[0].getRawClass() != null &&
(this.types[0].getRawClass().isAssignableFrom(BiFunction.class) ||
this.types[0].getRawClass().isAssignableFrom(BiConsumer.class)) ? 2 : getNumberOfInputs();
// For @Component style beans.
if (method != null) {
final ResolvableType returnType = ResolvableType.forMethodReturnType(method);
Object bean = beanFactory.containsBean(functionName) ? beanFactory.getBean(functionName) : null;
if (bean != null && (BiFunction.class.isAssignableFrom(bean.getClass()) || BiConsumer.class.isAssignableFrom(bean.getClass()))) {
numberOfInputs = 2;
}
else if (returnType.getRawClass().isAssignableFrom(Function.class) || returnType.getRawClass().isAssignableFrom(Consumer.class)) {
numberOfInputs = 1;
ResolvableType arg1 = returnType;
while (isAnotherFunctionOrConsumerFound(arg1)) {
arg1 = arg1.getGeneric(1);
numberOfInputs++;
}
}
}
int i = 0;
while (i < numberOfInputs) {
inputs.add(String.format("%s-%s-%d", this.functionName, FunctionConstants.DEFAULT_INPUT_SUFFIX, i++));
}
return inputs;
}
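// Illustrative note (not in the original source): for a function named "process" with two
// inputs and no explicit binding-name overrides, the method above yields "process-in-0"
// and "process-in-1", assuming FunctionConstants.DEFAULT_INPUT_SUFFIX resolves to "in".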
private int getNumberOfInputs() {
int numberOfInputs = 1;
ResolvableType arg1 = this.types[0].getGeneric(1);
while (isAnotherFunctionOrConsumerFound(arg1)) {
arg1 = arg1.getGeneric(1);
numberOfInputs++;
}
return numberOfInputs;
}
private void bindInput(ResolvableType arg0, String inputName) {
if (arg0.getRawClass() != null) {
KafkaStreamsBindableProxyFactory.this.inputHolders.put(inputName,
new BoundTargetHolder(getBindingTargetFactory(arg0.getRawClass())
.createInput(inputName), true));
}
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition();
rootBeanDefinition.setInstanceSupplier(() -> inputHolders.get(inputName).getBoundTarget());
registry.registerBeanDefinition(inputName, rootBeanDefinition);
}
@Override
public Set<String> getInputs() {
Set<String> ins = new LinkedHashSet<>();
this.inputHolders.forEach((name, holder) -> ins.add(name));
return ins;
}
@Override
public Set<String> getOutputs() {
Set<String> outs = new LinkedHashSet<>();
this.outputHolders.forEach((name, holder) -> outs.add(name));
return outs;
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
public void addOutputBinding(String output, Class<?> clazz) {
KafkaStreamsBindableProxyFactory.this.outputHolders.put(output,
new BoundTargetHolder(getBindingTargetFactory(clazz)
.createOutput(output), true));
}
public String getFunctionName() {
return functionName;
}
public Map<String, BoundTargetHolder> getOutputHolders() {
return outputHolders;
}
}
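For orientation, here is a minimal sketch (not part of this changeset; the bean name process and the String/Long record types are illustrative assumptions) of the kind of function bean whose signature the factory above introspects:

import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class IllustrativeProcessorConfig {

    // Introspected as a BiFunction over Kafka Streams types: two input bindings
    // (process-in-0, process-in-1) and one output binding (process-out-0) are registered.
    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> process() {
        return (stream, table) -> stream.join(table, (count, label) -> count);
    }
}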

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -35,13 +35,17 @@ public class KafkaStreamsFunctionAutoConfiguration {
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessorInvoker kafkaStreamsFunctionProcessorInvoker(
KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories,
StreamFunctionProperties streamFunctionProperties) {
return new KafkaStreamsFunctionProcessorInvoker(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes(),
kafkaStreamsFunctionProcessor);
kafkaStreamsFunctionProcessor, kafkaStreamsBindableProxyFactories, kafkaStreamsFunctionBeanPostProcessor.getMethods(),
streamFunctionProperties);
}
@Bean
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor() {
return new KafkaStreamsFunctionBeanPostProcessor();
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
return new KafkaStreamsFunctionBeanPostProcessor(streamFunctionProperties);
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -17,43 +17,156 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.TreeMap;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.ClassUtils;
import org.springframework.util.StringUtils;
/**
*
* @author Soby Chacko
* @since 2.2.0
*
*/
class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionBeanPostProcessor.class);
private static final String[] EXCLUDE_FUNCTIONS = new String[]{"functionRouter", "sendToDlqAndContinue"};
private ConfigurableListableBeanFactory beanFactory;
private boolean onlySingleFunction;
private Map<String, ResolvableType> resolvableTypeMap = new TreeMap<>();
private Map<String, Method> methods = new TreeMap<>();
private final StreamFunctionProperties streamFunctionProperties;
private Map<String, ResolvableType> kafkaStreamsOnlyResolvableTypes = new HashMap<>();
private Map<String, Method> kafkaStreamsOnlyMethods = new HashMap<>();
public KafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
this.streamFunctionProperties = streamFunctionProperties;
}
public Map<String, ResolvableType> getResolvableTypes() {
return this.resolvableTypeMap;
}
public Map<String, Method> getMethods() {
return methods;
}
@Override
public void afterPropertiesSet() {
String[] functionNames = this.beanFactory.getBeanNamesForType(Function.class);
String[] biFunctionNames = this.beanFactory.getBeanNamesForType(BiFunction.class);
String[] consumerNames = this.beanFactory.getBeanNamesForType(Consumer.class);
String[] biConsumerNames = this.beanFactory.getBeanNamesForType(BiConsumer.class);
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)).forEach(this::extractResolvableTypes);
final Stream<String> concat = Stream.concat(
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)),
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)));
final List<String> collect = concat.collect(Collectors.toList());
collect.removeIf(s -> Arrays.stream(EXCLUDE_FUNCTIONS).anyMatch(t -> t.equals(s)));
collect.removeIf(Pattern.compile(".*_registration").asPredicate());
onlySingleFunction = collect.size() == 1;
collect.stream()
.forEach(this::extractResolvableTypes);
kafkaStreamsOnlyResolvableTypes.keySet().forEach(k -> addResolvableTypeInfo(k, kafkaStreamsOnlyResolvableTypes.get(k)));
kafkaStreamsOnlyMethods.keySet().forEach(k -> addResolvableTypeInfo(k, kafkaStreamsOnlyMethods.get(k)));
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
final String definition = streamFunctionProperties.getDefinition();
final String[] functionUnits = StringUtils.hasText(definition) ? definition.split(";") : new String[]{};
final Set<String> kafkaStreamsMethodNames = new HashSet<>(kafkaStreamsOnlyResolvableTypes.keySet());
kafkaStreamsMethodNames.addAll(this.resolvableTypeMap.keySet());
if (functionUnits.length == 0) {
for (String s : getResolvableTypes().keySet()) {
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(s)};
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKafkaStreamsProxyFactory(registry, s, resolvableTypes, rootBeanDefinition);
}
}
else {
for (String functionUnit : functionUnits) {
if (functionUnit.contains("|")) {
final String[] composedFunctions = functionUnit.split("\\|");
String derivedNameFromComposed = "";
ResolvableType[] resolvableTypes = new ResolvableType[composedFunctions.length];
int i = 0;
boolean nonKafkaStreamsFunctionsFound = false;
for (String split : composedFunctions) {
derivedNameFromComposed = derivedNameFromComposed.concat(split);
resolvableTypes[i++] = getResolvableTypes().get(split);
if (!kafkaStreamsMethodNames.contains(split)) {
nonKafkaStreamsFunctionsFound = true;
break;
}
}
if (!nonKafkaStreamsFunctionsFound) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKafkaStreamsProxyFactory(registry, derivedNameFromComposed, resolvableTypes, rootBeanDefinition);
}
}
else {
// Ensure that the function unit is a Kafka Streams function
if (kafkaStreamsMethodNames.contains(functionUnit)) {
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(functionUnit)};
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKafkaStreamsProxyFactory(registry, functionUnit, resolvableTypes, rootBeanDefinition);
}
}
}
}
}
private void registerKafkaStreamsProxyFactory(BeanDefinitionRegistry registry, String s, ResolvableType[] resolvableTypes, RootBeanDefinition rootBeanDefinition) {
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(resolvableTypes);
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(s);
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(getMethods().get(s));
registry.registerBeanDefinition("kafkaStreamsBindableProxyFactory-" + s, rootBeanDefinition);
}
private void extractResolvableTypes(String key) {
@@ -62,15 +175,97 @@ class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFac
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
Method[] methods = classObj.getMethods();
Optional<Method> functionalBeanMethods = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
if (!functionalBeanMethods.isPresent()) {
final BeanDefinition beanDefinition = this.beanFactory.getBeanDefinition(key);
final String factoryMethodName = beanDefinition.getFactoryMethodName();
functionalBeanMethods = Arrays.stream(methods).filter(m -> m.getName().equals(factoryMethodName)).findFirst();
}
if (functionalBeanMethods.isPresent()) {
Method method = functionalBeanMethods.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
if (onlySingleFunction) {
resolvableTypeMap.put(key, resolvableType);
}
else {
discoverOnlyKafkaStreamsResolvableTypes(key, resolvableType);
}
}
}
else {
Optional<Method> componentBeanMethods = Arrays.stream(methods)
.filter(m -> m.getName().equals("apply") && isKafkaStreamsTypeFound(m) ||
m.getName().equals("accept") && isKafkaStreamsTypeFound(m)).findFirst();
if (componentBeanMethods.isPresent()) {
Method method = componentBeanMethods.get();
final ResolvableType resolvableType = ResolvableType.forMethodParameter(method, 0);
final Class<?> rawClass = resolvableType.getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
if (onlySingleFunction) {
resolvableTypeMap.put(key, resolvableType);
this.methods.put(key, method);
}
else {
discoverOnlyKafkaStreamsResolvableTypesAndMethods(key, resolvableType, method);
}
}
}
}
}
catch (Exception e) {
LOG.error("Function activation issues while mapping the function: " + key, e);
}
}
private void addResolvableTypeInfo(String key, ResolvableType resolvableType) {
if (kafkaStreamsOnlyResolvableTypes.size() == 1) {
resolvableTypeMap.put(key, resolvableType);
}
catch (NoSuchMethodException e) {
//ignore
else {
final String definition = streamFunctionProperties.getDefinition();
if (definition == null) {
throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
}
else if (definition.contains(key)) {
resolvableTypeMap.put(key, resolvableType);
}
}
}
private void discoverOnlyKafkaStreamsResolvableTypes(String key, ResolvableType resolvableType) {
kafkaStreamsOnlyResolvableTypes.put(key, resolvableType);
}
private void discoverOnlyKafkaStreamsResolvableTypesAndMethods(String key, ResolvableType resolvableType, Method method) {
kafkaStreamsOnlyResolvableTypes.put(key, resolvableType);
kafkaStreamsOnlyMethods.put(key, method);
}
private void addResolvableTypeInfo(String key, Method method) {
if (kafkaStreamsOnlyMethods.size() == 1) {
this.methods.put(key, method);
}
else {
final String definition = streamFunctionProperties.getDefinition();
if (definition == null) {
throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
}
else if (definition.contains(key)) {
this.methods.put(key, method);
}
}
}
private boolean isKafkaStreamsTypeFound(Method method) {
return KStream.class.isAssignableFrom(method.getParameters()[0].getType()) ||
KTable.class.isAssignableFrom(method.getParameters()[0].getType()) ||
GlobalKTable.class.isAssignableFrom(method.getParameters()[0].getType());
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = (ConfigurableListableBeanFactory) beanFactory;
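A hedged sketch of how the definition parsing above plays out (the function names foo, bar, and process are hypothetical): units are split on ;, and a unit containing | is registered as a single composed Kafka Streams function under the concatenated name.

import org.springframework.boot.test.context.runner.ApplicationContextRunner;

class FunctionDefinitionParsingSketch {

    void example() {
        new ApplicationContextRunner()
                .withPropertyValues(
                        // "foo|bar" becomes one composed unit registered as
                        // "kafkaStreamsBindableProxyFactory-foobar"; "process" is a standalone unit.
                        "spring.cloud.stream.function.definition: foo|bar;process")
                .run(context -> { });
    }
}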

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,32 +16,74 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import javax.annotation.PostConstruct;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.StringUtils;
/**
*
* @author Soby Chacko
* @since 2.1.0
*/
class KafkaStreamsFunctionProcessorInvoker {
public class KafkaStreamsFunctionProcessorInvoker {
private final KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor;
private final Map<String, ResolvableType> resolvableTypeMap;
private final KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories;
private final Map<String, Method> methods;
private final StreamFunctionProperties streamFunctionProperties;
KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
public KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories,
Map<String, Method> methods, StreamFunctionProperties streamFunctionProperties) {
this.kafkaStreamsFunctionProcessor = kafkaStreamsFunctionProcessor;
this.resolvableTypeMap = resolvableTypeMap;
this.kafkaStreamsBindableProxyFactories = kafkaStreamsBindableProxyFactories;
this.methods = methods;
this.streamFunctionProperties = streamFunctionProperties;
}
@PostConstruct
void invoke() {
resolvableTypeMap.forEach((key, value) ->
this.kafkaStreamsFunctionProcessor.orchestrateFunctionInvoking(value, key));
final String definition = streamFunctionProperties.getDefinition();
final String[] functionUnits = StringUtils.hasText(definition) ? definition.split(";") : new String[]{};
if (functionUnits.length == 0) {
resolvableTypeMap.forEach((key, value) -> {
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(key)).findFirst();
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(value, key, proxyFactory.get(), methods.get(key), null);
});
}
for (String functionUnit : functionUnits) {
if (functionUnit.contains("|")) {
final String[] composedFunctions = functionUnit.split("\\|");
String[] derivedNameFromComposed = new String[]{""};
for (String split : composedFunctions) {
derivedNameFromComposed[0] = derivedNameFromComposed[0].concat(split);
}
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(derivedNameFromComposed[0])).findFirst();
proxyFactory.ifPresent(kafkaStreamsBindableProxyFactory ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(composedFunctions[0]),
derivedNameFromComposed[0], kafkaStreamsBindableProxyFactory, methods.get(derivedNameFromComposed[0]), resolvableTypeMap.get(composedFunctions[composedFunctions.length - 1]), composedFunctions));
}
else {
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(functionUnit)).findFirst();
proxyFactory.ifPresent(kafkaStreamsBindableProxyFactory ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(functionUnit), functionUnit,
kafkaStreamsBindableProxyFactory, methods.get(functionUnit), null));
}
}
}
}

View File

@@ -1,43 +0,0 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Type;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.function.context.WrapperDetector;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionWrapperDetector implements WrapperDetector {
@Override
public boolean isWrapper(Type type) {
if (type instanceof Class<?>) {
Class<?> cls = (Class<?>) type;
return KStream.class.isAssignableFrom(cls) ||
KTable.class.isAssignableFrom(cls) ||
GlobalKTable.class.isAssignableFrom(cls);
}
return false;
}
}

View File

@@ -1,72 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
/**
* {@link ConfigurationProperties} that can be used by end user Kafka Stream applications.
* This class provides convenient ways to access the commonly used kafka stream properties
* from the user application. For example, windowing operations are common use cases in
* stream processing and one can provide window specific properties at runtime and use
* those properties in the applications using this class.
*
* @deprecated The properties exposed by this class can be used directly on Kafka Streams API in the application.
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
@Deprecated
public class KafkaStreamsApplicationSupportProperties {
private TimeWindow timeWindow;
public TimeWindow getTimeWindow() {
return this.timeWindow;
}
public void setTimeWindow(TimeWindow timeWindow) {
this.timeWindow = timeWindow;
}
/**
* Properties required by time windows.
*/
public static class TimeWindow {
private int length;
private int advanceBy;
public int getLength() {
return this.length;
}
public void setLength(int length) {
this.length = length;
}
public int getAdvanceBy() {
return this.advanceBy;
}
public void setAdvanceBy(int advanceBy) {
this.advanceBy = advanceBy;
}
}
}

View File

@@ -16,8 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
/**
* Kafka Streams binder configuration properties.
@@ -34,7 +38,10 @@ public class KafkaStreamsBinderConfigurationProperties
/**
* Enumeration for various Serde errors.
*
* @deprecated in favor of {@link DeserializationExceptionHandler}.
*/
@Deprecated
public enum SerdeError {
/**
@@ -54,6 +61,37 @@ public class KafkaStreamsBinderConfigurationProperties
private String applicationId;
private StateStoreRetry stateStoreRetry = new StateStoreRetry();
private Map<String, Functions> functions = new HashMap<>();
private KafkaStreamsBinderConfigurationProperties.SerdeError serdeError;
/**
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use when
* there is a deserialization exception. This handler will be applied against all input bindings
* unless overridden at the consumer binding.
*/
private DeserializationExceptionHandler deserializationExceptionHandler;
private boolean includeStoppedProcessorsForHealthCheck;
public Map<String, Functions> getFunctions() {
return functions;
}
public void setFunctions(Map<String, Functions> functions) {
this.functions = functions;
}
public StateStoreRetry getStateStoreRetry() {
return stateStoreRetry;
}
public void setStateStoreRetry(StateStoreRetry stateStoreRetry) {
this.stateStoreRetry = stateStoreRetry;
}
public String getApplicationId() {
return this.applicationId;
}
@@ -62,21 +100,92 @@ public class KafkaStreamsBinderConfigurationProperties
this.applicationId = applicationId;
}
/**
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use when
* there is a Serde error.
* {@link KafkaStreamsBinderConfigurationProperties.SerdeError} values are used to
* provide the exception handler on consumer binding.
*/
private KafkaStreamsBinderConfigurationProperties.SerdeError serdeError;
@Deprecated
public KafkaStreamsBinderConfigurationProperties.SerdeError getSerdeError() {
return this.serdeError;
}
@Deprecated
public void setSerdeError(
KafkaStreamsBinderConfigurationProperties.SerdeError serdeError) {
this.serdeError = serdeError;
this.serdeError = serdeError;
if (serdeError == SerdeError.logAndContinue) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.logAndContinue;
}
else if (serdeError == SerdeError.logAndFail) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.logAndFail;
}
else if (serdeError == SerdeError.sendToDlq) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.sendToDlq;
}
}
public DeserializationExceptionHandler getDeserializationExceptionHandler() {
return deserializationExceptionHandler;
}
public void setDeserializationExceptionHandler(DeserializationExceptionHandler deserializationExceptionHandler) {
this.deserializationExceptionHandler = deserializationExceptionHandler;
}
public boolean isIncludeStoppedProcessorsForHealthCheck() {
return includeStoppedProcessorsForHealthCheck;
}
public void setIncludeStoppedProcessorsForHealthCheck(boolean includeStoppedProcessorsForHealthCheck) {
this.includeStoppedProcessorsForHealthCheck = includeStoppedProcessorsForHealthCheck;
}
public static class StateStoreRetry {
private int maxAttempts = 1;
private long backoffPeriod = 1000;
public int getMaxAttempts() {
return maxAttempts;
}
public void setMaxAttempts(int maxAttempts) {
this.maxAttempts = maxAttempts;
}
public long getBackoffPeriod() {
return backoffPeriod;
}
public void setBackoffPeriod(long backoffPeriod) {
this.backoffPeriod = backoffPeriod;
}
}
public static class Functions {
/**
* Function specific application id.
*/
private String applicationId;
/**
* Function specific configuration to use.
*/
private Map<String, String> configuration;
public String getApplicationId() {
return applicationId;
}
public void setApplicationId(String applicationId) {
this.applicationId = applicationId;
}
public Map<String, String> getConfiguration() {
return configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
}
}
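A hedged configuration sketch for the binder-level properties above, written in the ApplicationContextRunner style used by the tests in this changeset (the function name process and all values are illustrative):

import org.springframework.boot.test.context.runner.ApplicationContextRunner;

class BinderPropertiesSketch {

    void example() {
        new ApplicationContextRunner()
                .withPropertyValues(
                        // binder-wide handler, overridable per consumer binding
                        "spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler: logAndContinue",
                        // retry behaviour for state store host lookups
                        "spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts: 3",
                        "spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod: 2000",
                        // per-function application id from the Functions map
                        "spring.cloud.stream.kafka.streams.binder.functions.process.applicationId: process-app-id")
                .run(context -> { });
    }
}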

View File

@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
/**
* Extended properties for Kafka Streams consumer.
@@ -43,6 +44,33 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
*/
private String materializedAs;
/**
* Per input binding deserialization exception handler.
*/
private DeserializationExceptionHandler deserializationExceptionHandler;
/**
* {@link org.apache.kafka.streams.processor.TimestampExtractor} bean name to use for this consumer.
*/
private String timestampExtractorBeanName;
/**
* Comma separated list of supported event types for this binding.
*/
private String eventTypes;
/**
* Record level header key for event type.
* If the default value is overridden, the overridden key is expected on each record header when
* eventType-based routing is enabled on this binding (by setting eventTypes).
*/
private String eventTypeHeaderKey = "event_type";
/**
* Custom name for the source component from which the processor is consuming.
*/
private String consumedAs;
public String getApplicationId() {
return this.applicationId;
}
@@ -75,4 +103,43 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
this.materializedAs = materializedAs;
}
public String getTimestampExtractorBeanName() {
return timestampExtractorBeanName;
}
public void setTimestampExtractorBeanName(String timestampExtractorBeanName) {
this.timestampExtractorBeanName = timestampExtractorBeanName;
}
public DeserializationExceptionHandler getDeserializationExceptionHandler() {
return deserializationExceptionHandler;
}
public void setDeserializationExceptionHandler(DeserializationExceptionHandler deserializationExceptionHandler) {
this.deserializationExceptionHandler = deserializationExceptionHandler;
}
public String getEventTypes() {
return eventTypes;
}
public void setEventTypes(String eventTypes) {
this.eventTypes = eventTypes;
}
public String getEventTypeHeaderKey() {
return this.eventTypeHeaderKey;
}
public void setEventTypeHeaderKey(String eventTypeHeaderKey) {
this.eventTypeHeaderKey = eventTypeHeaderKey;
}
public String getConsumedAs() {
return consumedAs;
}
public void setConsumedAs(String consumedAs) {
this.consumedAs = consumedAs;
}
}
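A hedged per-binding sketch for the consumer properties above (the binding name process-in-0 and all values are illustrative; eventTypes mirrors the routing test later in this changeset):

import org.springframework.boot.test.context.runner.ApplicationContextRunner;

class ConsumerBindingPropertiesSketch {

    void example() {
        new ApplicationContextRunner()
                .withPropertyValues(
                        // only records whose event-type header matches foo or bar reach this binding
                        "spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypes: foo,bar",
                        // override the default "event_type" header key
                        "spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypeHeaderKey: my_event",
                        // per-binding deserialization exception handler
                        "spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.deserializationExceptionHandler: sendToDlq",
                        // custom name for the source component
                        "spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.consumedAs: orders-source")
                .run(context -> { });
    }
}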

View File

@@ -36,6 +36,16 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
*/
private String valueSerde;
/**
* Bean name of the {@link org.apache.kafka.streams.processor.StreamPartitioner} to be used by this Kafka Streams producer.
*/
private String streamPartitionerBeanName;
/**
* Custom name for the sink component to which the processor is producing.
*/
private String producedAs;
public String getKeySerde() {
return this.keySerde;
}
@@ -52,4 +62,19 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
this.valueSerde = valueSerde;
}
public String getStreamPartitionerBeanName() {
return this.streamPartitionerBeanName;
}
public void setStreamPartitionerBeanName(String streamPartitionerBeanName) {
this.streamPartitionerBeanName = streamPartitionerBeanName;
}
public String getProducedAs() {
return producedAs;
}
public void setProducedAs(String producedAs) {
this.producedAs = producedAs;
}
}
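A minimal sketch of wiring a StreamPartitioner by bean name via the producer property above (the bean name evenOddPartitioner and the binding name process-out-0 are illustrative assumptions):

import org.apache.kafka.streams.processor.StreamPartitioner;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class StreamPartitionerSketch {

    // Referenced via:
    // spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName: evenOddPartitioner
    // spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.producedAs: orders-sink
    @Bean
    StreamPartitioner<String, Long> evenOddPartitioner() {
        return (topic, key, value, numPartitions) -> (int) (value % 2);
    }
}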

View File

@@ -1,161 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.properties;
/**
* Properties for Kafka Streams state store.
*
* @author Lei Chen
*/
public class KafkaStreamsStateStoreProperties {
/**
* Enumeration for store type.
*/
public enum StoreType {
/**
* Key value store.
*/
KEYVALUE("keyvalue"),
/**
* Window store.
*/
WINDOW("window"),
/**
* Session store.
*/
SESSION("session");
private final String type;
StoreType(final String type) {
this.type = type;
}
@Override
public String toString() {
return this.type;
}
}
/**
* Name for this state store.
*/
private String name;
/**
* Type for this state store.
*/
private StoreType type;
/**
* Size/length of this state store in ms. Only applicable for window store.
*/
private long length;
/**
* Retention period for this state store in ms.
*/
private long retention;
/**
* Key serde class specified per state store.
*/
private String keySerdeString;
/**
* Value serde class specified per state store.
*/
private String valueSerdeString;
/**
* Whether caching is enabled on this state store.
*/
private boolean cacheEnabled;
/**
* Whether logging is enabled on this state store.
*/
private boolean loggingDisabled;
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public StoreType getType() {
return this.type;
}
public void setType(StoreType type) {
this.type = type;
}
public long getLength() {
return this.length;
}
public void setLength(long length) {
this.length = length;
}
public long getRetention() {
return this.retention;
}
public void setRetention(long retention) {
this.retention = retention;
}
public String getKeySerdeString() {
return this.keySerdeString;
}
public void setKeySerdeString(String keySerdeString) {
this.keySerdeString = keySerdeString;
}
public String getValueSerdeString() {
return this.valueSerdeString;
}
public void setValueSerdeString(String valueSerdeString) {
this.valueSerdeString = valueSerdeString;
}
public boolean isCacheEnabled() {
return this.cacheEnabled;
}
public void setCacheEnabled(boolean cacheEnabled) {
this.cacheEnabled = cacheEnabled;
}
public boolean isLoggingDisabled() {
return this.loggingDisabled;
}
public void setLoggingDisabled(boolean loggingDisabled) {
this.loggingDisabled = loggingDisabled;
}
}

View File

@@ -0,0 +1,245 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;
import java.util.PriorityQueue;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.kafka.support.serializer.JsonSerde;
/**
* A convenient {@link Serde} for {@link java.util.Collection} implementations.
*
* Whenever a Kafka Streams application needs to collect data into a container object such as a
* {@link java.util.Collection}, this Serde class can be used as a convenience for the
* serialization needs. For example, it comes in handy when the application performs
* aggregation or reduction operations and simply needs to hold an {@link Iterable} type.
*
* By default, this Serde will use {@link JsonSerde} for serializing the inner objects.
* This can be changed by providing an explicit Serde during creation of this object.
*
* Here is an example of a possible use case:
*
* <pre class="code">
* .aggregate(ArrayList::new,
* (k, v, aggregates) -&gt; {
* aggregates.add(v);
* return aggregates;
* },
* Materialized.&lt;String, Collection&lt;Foo&gt;, WindowStore&lt;Bytes, byte[]&gt;&gt;as(
* "foo-store")
* .withKeySerde(Serdes.String())
* .withValueSerde(new CollectionSerde&lt;&gt;(Foo.class, ArrayList.class)))
* </pre>
*
* Collection types supported by this Serde are {@link java.util.ArrayList}, {@link java.util.LinkedList},
* {@link java.util.PriorityQueue} and {@link java.util.HashSet}. The deserializer will throw an exception
* if any other Collection type is used.
*
* @param <E> type of the underlying object that the collection holds
* @author Soby Chacko
* @since 3.0.0
*/
public class CollectionSerde<E> implements Serde<Collection<E>> {
/**
* Serde used for serializing the inner object.
*/
private final Serde<Collection<E>> inner;
/**
* Type of the collection class. This has to be a class that is
* implementing the {@link java.util.Collection} interface.
*/
private final Class<?> collectionClass;
/**
* Constructor to use when the application wants to specify the type
* of the Serde used for the inner object.
*
* @param serde specify an explicit Serde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Serde<E> serde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
this.inner =
Serdes.serdeFrom(
new CollectionSerializer<>(serde.serializer()),
new CollectionDeserializer<>(serde.deserializer(), collectionsClass));
}
/**
* Constructor to delegate serialization operations for the inner objects
* to {@link JsonSerde}.
*
* @param targetTypeForJsonSerde target type used by the JsonSerde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Class<?> targetTypeForJsonSerde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
try (JsonSerde<E> jsonSerde = new JsonSerde(targetTypeForJsonSerde)) {
this.inner = Serdes.serdeFrom(
new CollectionSerializer<>(jsonSerde.serializer()),
new CollectionDeserializer<>(jsonSerde.deserializer(), collectionsClass));
}
}
@Override
public Serializer<Collection<E>> serializer() {
return inner.serializer();
}
@Override
public Deserializer<Collection<E>> deserializer() {
return inner.deserializer();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
inner.serializer().configure(configs, isKey);
inner.deserializer().configure(configs, isKey);
}
@Override
public void close() {
inner.serializer().close();
inner.deserializer().close();
}
private static class CollectionSerializer<E> implements Serializer<Collection<E>> {
private Serializer<E> inner;
CollectionSerializer(Serializer<E> inner) {
this.inner = inner;
}
CollectionSerializer() { }
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public byte[] serialize(String topic, Collection<E> collection) {
final int size = collection.size();
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
final DataOutputStream dos = new DataOutputStream(baos);
final Iterator<E> iterator = collection.iterator();
try {
dos.writeInt(size);
while (iterator.hasNext()) {
final byte[] bytes = inner.serialize(topic, iterator.next());
dos.writeInt(bytes.length);
dos.write(bytes);
}
}
catch (IOException e) {
throw new RuntimeException("Unable to serialize the provided collection", e);
}
return baos.toByteArray();
}
@Override
public void close() {
inner.close();
}
}
private static class CollectionDeserializer<E> implements Deserializer<Collection<E>> {
private final Deserializer<E> valueDeserializer;
private final Class<?> collectionClass;
CollectionDeserializer(final Deserializer<E> valueDeserializer, Class<?> collectionClass) {
this.valueDeserializer = valueDeserializer;
this.collectionClass = collectionClass;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public Collection<E> deserialize(String topic, byte[] bytes) {
if (bytes == null || bytes.length == 0) {
return null;
}
Collection<E> collection = getCollection();
final DataInputStream dataInputStream = new DataInputStream(new ByteArrayInputStream(bytes));
try {
final int records = dataInputStream.readInt();
for (int i = 0; i < records; i++) {
final byte[] valueBytes = new byte[dataInputStream.readInt()];
final int read = dataInputStream.read(valueBytes);
if (read != -1) {
collection.add(valueDeserializer.deserialize(topic, valueBytes));
}
}
}
catch (IOException e) {
throw new RuntimeException("Unable to deserialize collection", e);
}
return collection;
}
@Override
public void close() {
}
private Collection<E> getCollection() {
Collection<E> collection;
if (this.collectionClass.isAssignableFrom(ArrayList.class)) {
collection = new ArrayList<>();
}
else if (this.collectionClass.isAssignableFrom(HashSet.class)) {
collection = new HashSet<>();
}
else if (this.collectionClass.isAssignableFrom(LinkedList.class)) {
collection = new LinkedList<>();
}
else if (this.collectionClass.isAssignableFrom(PriorityQueue.class)) {
collection = new PriorityQueue<>();
}
else {
throw new IllegalArgumentException("Unsupported collection type - " + this.collectionClass);
}
return collection;
}
}
}
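A minimal round-trip sketch for CollectionSerde (the Pojo type is a hypothetical stand-in; the JsonSerde-backed constructor shown above is used):

import java.util.ArrayList;
import java.util.Collection;

class CollectionSerdeSketch {

    static class Pojo {
        public String name; // public field so the default JsonSerde can (de)serialize it
    }

    void example() {
        CollectionSerde<Pojo> serde = new CollectionSerde<>(Pojo.class, ArrayList.class);
        Collection<Pojo> values = new ArrayList<>();
        values.add(new Pojo());
        // wire format: element count, then one length-prefixed JSON payload per element
        byte[] bytes = serde.serializer().serialize("some-topic", values);
        Collection<Pojo> roundTripped = serde.deserializer().deserialize("some-topic", bytes);
    }
}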

View File

@@ -1,228 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s from
* {@link CompositeMessageConverterFactory}.
*
* The primary motivation for this class is to provide an Avro-based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based
* application, applications can deserialize the incoming Kafka Streams records using the
* built-in Avro {@link MessageConverter}. However, the same message conversion approach
* will not work downstream in other operations of the Kafka Streams topology, as some of
* them need a {@link Serde} instance that can talk to the Spring Cloud Stream provided
* Schema Registry. This implementation solves that problem.
*
* Only Avro and JSON based converters are exposed as binder provided {@link Serde}
* implementations currently.
*
* Users of this class must call the
* {@link CompositeNonNativeSerde#configure(Map, boolean)} method to configure the
* {@link Serde} object. At the very least the configuration map must include a key called
* "valueClass" to indicate the type of the target object for deserialization. If any
* other content type other than JSON is needed (only Avro is available now other than
* JSON), that needs to be included in the configuration map with the key "contentType".
* For example,
*
* <pre class="code">
* Map&lt;String, Object&gt; config = new HashMap&lt;&gt;();
* config.put("valueClass", Foo.class);
* config.put("contentType", "application/avro");
* </pre>
*
* Then use the above map when calling the configure method.
*
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams
* application that uses Spring Cloud Stream schema registry for schema evolution.
*
* An instance of this class is provided as a bean by the binder configuration and
* typically the applications can autowire that bean. This is the expected usage pattern
* of this class.
*
* @param <T> type of the object to marshall
* @author Soby Chacko
* @since 2.1
*/
public class CompositeNonNativeSerde<T> implements Serde<T> {
private static final String VALUE_CLASS_HEADER = "valueClass";
private static final String AVRO_FORMAT = "avro";
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application",
"*+" + AVRO_FORMAT);
private final CompositeNonNativeDeserializer<T> compositeNonNativeDeserializer;
private final CompositeNonNativeSerializer<T> compositeNonNativeSerializer;
public CompositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.compositeNonNativeDeserializer = new CompositeNonNativeDeserializer<>(
compositeMessageConverterFactory);
this.compositeNonNativeSerializer = new CompositeNonNativeSerializer<>(
compositeMessageConverterFactory);
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.compositeNonNativeDeserializer.configure(configs, isKey);
this.compositeNonNativeSerializer.configure(configs, isKey);
}
@Override
public void close() {
// No-op
}
@Override
public Serializer<T> serializer() {
return this.compositeNonNativeSerializer;
}
@Override
public Deserializer<T> deserializer() {
return this.compositeNonNativeDeserializer;
}
private static MimeType resolveMimeType(Map<String, ?> configs) {
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)) {
String contentType = (String) configs.get(MessageHeaders.CONTENT_TYPE);
if (DEFAULT_AVRO_MIME_TYPE.equals(MimeTypeUtils.parseMimeType(contentType))) {
return DEFAULT_AVRO_MIME_TYPE;
}
else if (contentType.contains("avro")) {
return MimeTypeUtils.parseMimeType("application/avro");
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
/**
* Custom {@link Deserializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <U> parameterized target type for deserialization
*/
private static class CompositeNonNativeDeserializer<U> implements Deserializer<U> {
private final MessageConverter messageConverter;
private MimeType mimeType;
private Class<?> valueClass;
CompositeNonNativeDeserializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER),
"Deserializers must provide a configuration for valueClass.");
final Object valueClass = configs.get(VALUE_CLASS_HEADER);
Assert.isTrue(valueClass instanceof Class,
"Deserializers must provide a valid value for valueClass.");
this.valueClass = (Class<?>) valueClass;
this.mimeType = resolveMimeType(configs);
}
@SuppressWarnings("unchecked")
@Override
public U deserialize(String topic, byte[] data) {
Message<?> message = MessageBuilder.withPayload(data)
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString())
.build();
U messageConverted = (U) this.messageConverter.fromMessage(message,
this.valueClass);
Assert.notNull(messageConverted, "Deserialization failed.");
return messageConverted;
}
@Override
public void close() {
// No-op
}
}
/**
* Custom {@link Serializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <V> parameterized type for serialization
*/
private static class CompositeNonNativeSerializer<V> implements Serializer<V> {
private final MessageConverter messageConverter;
private MimeType mimeType;
CompositeNonNativeSerializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.mimeType = resolveMimeType(configs);
}
@Override
public byte[] serialize(String topic, V data) {
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, this.mimeType.toString());
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = this.messageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
// No-op
}
}
}
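A minimal configuration sketch for CompositeNonNativeSerde (the Foo type is hypothetical; in a real application the binder-provided bean would be autowired rather than constructed directly):

import java.util.HashMap;
import java.util.Map;

import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;

class CompositeNonNativeSerdeSketch {

    static class Foo {
        public String value;
    }

    void example() {
        CompositeNonNativeSerde<Foo> serde =
                new CompositeNonNativeSerde<>(new CompositeMessageConverterFactory());
        Map<String, Object> config = new HashMap<>();
        config.put("valueClass", Foo.class);           // required: target type for deserialization
        config.put("contentType", "application/json"); // optional: JSON is the default
        serde.configure(config, false);
    }
}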

View File

@@ -1,9 +1,5 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.ExtendedBindingHandlerMappingsProviderAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsApplicationSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration
org.springframework.cloud.function.context.WrapperDetector=\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionWrapperDetector
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpointAutoConfiguration

View File

@@ -0,0 +1,83 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.junit.jupiter.api.Test;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.runner.ApplicationContextRunner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import static org.assertj.core.api.Assertions.assertThat;
/**
* Tests for {@link ExtendedBindingHandlerMappingsProviderAutoConfiguration}.
*/
class ExtendedBindingHandlerMappingsProviderAutoConfigurationTests {
private final ApplicationContextRunner contextRunner = new ApplicationContextRunner()
.withUserConfiguration(KafkaStreamsTestApp.class)
.withPropertyValues(
"spring.cloud.stream.kafka.streams.default.consumer.application-id: testApp123",
"spring.cloud.stream.kafka.streams.default.consumer.consumed-as: default-consumer",
"spring.cloud.stream.kafka.streams.default.consumer.materialized-as: default-materializer",
"spring.cloud.stream.kafka.streams.default.producer.produced-as: default-producer",
"spring.cloud.stream.kafka.streams.default.producer.key-serde: default-foo");
@Test
void defaultsUsedWhenNoCustomBindingProperties() {
this.contextRunner.run((context) -> {
assertThat(context)
.hasNotFailed()
.hasSingleBean(KafkaStreamsExtendedBindingProperties.class);
KafkaStreamsExtendedBindingProperties extendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);
assertThat(extendedBindingProperties.getExtendedConsumerProperties("process-in-0"))
.hasFieldOrPropertyWithValue("applicationId", "testApp123")
.hasFieldOrPropertyWithValue("consumedAs", "default-consumer")
.hasFieldOrPropertyWithValue("materializedAs", "default-materializer");
assertThat(extendedBindingProperties.getExtendedProducerProperties("process-out-0"))
.hasFieldOrPropertyWithValue("producedAs", "default-producer")
.hasFieldOrPropertyWithValue("keySerde", "default-foo");
});
}
@Test
void defaultsRespectedWhenCustomBindingProperties() {
this.contextRunner
.withPropertyValues(
"spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.consumed-as: custom-consumer",
"spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.produced-as: custom-producer")
.run((context) -> {
assertThat(context)
.hasNotFailed()
.hasSingleBean(KafkaStreamsExtendedBindingProperties.class);
KafkaStreamsExtendedBindingProperties extendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);
assertThat(extendedBindingProperties.getExtendedConsumerProperties("process-in-0"))
.hasFieldOrPropertyWithValue("applicationId", "testApp123")
.hasFieldOrPropertyWithValue("consumedAs", "custom-consumer")
.hasFieldOrPropertyWithValue("materializedAs", "default-materializer");
assertThat(extendedBindingProperties.getExtendedProducerProperties("process-out-0"))
.hasFieldOrPropertyWithValue("producedAs", "custom-producer")
.hasFieldOrPropertyWithValue("keySerde", "default-foo");
});
}
@EnableAutoConfiguration
static class KafkaStreamsTestApp {
}
}

View File

@@ -0,0 +1,265 @@
/*
* Copyright 2019-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsEventTypeRoutingTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"foo-1", "foo-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<Integer, Foo> consumer;
private static final CountDownLatch LATCH = new CountDownLatch(3);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("test-group-1", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put("value.deserializer", JsonDeserializer.class);
consumerProps.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
DefaultKafkaConsumerFactory<Integer, Foo> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "foo-2");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
// See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003 for more context on this test.
@Test
public void testRoutingWorksBasedOnEventTypes() {
SpringApplication app = new SpringApplication(EventTypeRoutingTestConfig.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.bindings.process-in-0.destination=foo-1",
"--spring.cloud.stream.bindings.process-out-0.destination=foo-2",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypes=foo,bar",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-foo-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put("value.serializer", JsonSerializer.class);
DefaultKafkaProducerFactory<Integer, Foo> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, Foo> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foo-1");
Foo foo1 = new Foo();
foo1.setFoo("foo-1");
Headers headers = new RecordHeaders();
headers.add(new RecordHeader("event_type", "foo".getBytes()));
final ProducerRecord<Integer, Foo> producerRecord1 = new ProducerRecord<>("foo-1", 0, 56, foo1, headers);
template.send(producerRecord1);
Foo foo2 = new Foo();
foo2.setFoo("foo-2");
final ProducerRecord<Integer, Foo> producerRecord2 = new ProducerRecord<>("foo-1", 0, 57, foo2);
template.send(producerRecord2);
Foo foo3 = new Foo();
foo3.setFoo("foo-3");
final ProducerRecord<Integer, Foo> producerRecord3 = new ProducerRecord<>("foo-1", 0, 58, foo3, headers);
template.send(producerRecord3);
Foo foo4 = new Foo();
foo4.setFoo("foo-4");
Headers headers1 = new RecordHeaders();
headers1.add(new RecordHeader("event_type", "bar".getBytes()));
final ProducerRecord<Integer, Foo> producerRecord4 = new ProducerRecord<>("foo-1", 0, 59, foo4, headers1);
template.send(producerRecord4);
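// foo2 (key 57) carries no event_type header, so the eventTypes=foo,bar filter drops it; only the other three records should reach foo-2.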
final ConsumerRecords<Integer, Foo> records = KafkaTestUtils.getRecords(consumer);
assertThat(records.count()).isEqualTo(3);
List<Integer> keys = new ArrayList<>();
List<Foo> values = new ArrayList<>();
records.forEach(integerFooConsumerRecord -> {
keys.add(integerFooConsumerRecord.key());
values.add(integerFooConsumerRecord.value());
});
assertThat(keys).containsExactlyInAnyOrder(56, 58, 59);
assertThat(values).containsExactlyInAnyOrder(foo1, foo3, foo4);
}
finally {
pf.destroy();
}
}
}
@Test
public void testRoutingWorksBasedOnEventTypesConsumer() throws Exception {
SpringApplication app = new SpringApplication(EventTypeRoutingTestConfig.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=consumer",
"--spring.cloud.stream.bindings.consumer-in-0.destination=foo-consumer-1",
"--spring.cloud.stream.kafka.streams.bindings.consumer-in-0.consumer.eventTypes=foo,bar",
"--spring.cloud.stream.kafka.streams.binder.functions.consumer.applicationId=consumer-id-foo-0",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put("value.serializer", JsonSerializer.class);
DefaultKafkaProducerFactory<Integer, Foo> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, Foo> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foo-consumer-1");
Foo foo1 = new Foo();
foo1.setFoo("foo-1");
Headers headers = new RecordHeaders();
headers.add(new RecordHeader("event_type", "foo".getBytes()));
final ProducerRecord<Integer, Foo> producerRecord1 = new ProducerRecord<>("foo-consumer-1", 0, 56, foo1, headers);
template.send(producerRecord1);
Foo foo2 = new Foo();
foo2.setFoo("foo-2");
final ProducerRecord<Integer, Foo> producerRecord2 = new ProducerRecord<>("foo-consumer-1", 0, 57, foo2);
template.send(producerRecord2);
Foo foo3 = new Foo();
foo3.setFoo("foo-3");
final ProducerRecord<Integer, Foo> producerRecord3 = new ProducerRecord<>("foo-consumer-1", 0, 58, foo3, headers);
template.send(producerRecord3);
Foo foo4 = new Foo();
foo4.setFoo("foo-4");
Headers headers1 = new RecordHeaders();
headers1.add(new RecordHeader("event_type", "bar".getBytes()));
final ProducerRecord<Integer, Foo> producerRecord4 = new ProducerRecord<>("foo-consumer-1", 0, 59, foo4, headers1);
template.send(producerRecord4);
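// Three of the four records carry a matching event_type header (foo or bar), so the KTable consumer should count the latch down three times.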
Assert.isTrue(LATCH.await(10, TimeUnit.SECONDS), "KTable consumer did not receive the expected routed records");
}
finally {
pf.destroy();
}
}
}
@EnableAutoConfiguration
public static class EventTypeRoutingTestConfig {
@Bean
public Function<KStream<Integer, Foo>, KStream<Integer, Foo>> process() {
return input -> input;
}
@Bean
public java.util.function.Consumer<KTable<Integer, Foo>> consumer() {
return ktable -> ktable.toStream().foreach((key, value) -> {
LATCH.countDown();
});
}
@Bean
public java.util.function.Consumer<GlobalKTable<Integer, Foo>> global() {
return ktable -> {
};
}
}
static class Foo {
String foo;
public String getFoo() {
return foo;
}
public void setFoo(String foo) {
this.foo = foo;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
Foo foo1 = (Foo) o;
return Objects.equals(foo, foo1.foo);
}
@Override
public int hashCode() {
return Objects.hash(foo);
}
}
}

View File

@@ -0,0 +1,470 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiFunction;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.kstream.ForeachAction;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsFunctionCompositionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"fooFuncanotherFooFunc-out-0", "bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
private static final CountDownLatch countDownLatch1 = new CountDownLatch(1);
private static final CountDownLatch countDownLatch2 = new CountDownLatch(1);
private static final CountDownLatch countDownLatch3 = new CountDownLatch(2);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("fn-composition-group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "fooFuncanotherFooFunc-out-0", "bar");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testBasicFunctionCompositionWithDefaultDestination() throws InterruptedException {
SpringApplication app = new SpringApplication(FunctionCompositionConfig1.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=fooFunc|anotherFooFunc;anotherProcess",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("fooFuncanotherFooFunc-in-0");
template.sendDefault("foobar!!");
// Verify that non-composed functions can run standalone alongside composed function chains, i.e. foo|bar;buzz
template.setDefaultTopic("anotherProcess-in-0");
template.sendDefault("this is crazy!!!");
Thread.sleep(1000);
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "fooFuncanotherFooFunc-out-0");
assertThat(cr.value().contains("foobar!!")).isTrue();
Assert.isTrue(countDownLatch1.await(5, TimeUnit.SECONDS), "anotherProcess consumer didn't trigger.");
}
finally {
pf.destroy();
}
}
}
@Test
public void testBasicFunctionCompositionWithDestination() throws InterruptedException {
SpringApplication app = new SpringApplication(FunctionCompositionConfig1.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=fooFunc|anotherFooFunc;anotherProcess",
"--spring.cloud.stream.bindings.fooFuncanotherFooFunc-in-0.destination=foo",
"--spring.cloud.stream.bindings.fooFuncanotherFooFunc-out-0.destination=bar",
"--spring.cloud.stream.bindings.anotherProcess-in-0.destination=buzz",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foo");
template.sendDefault("foobar!!");
template.setDefaultTopic("buzz");
template.sendDefault("this is crazy!!!");
Thread.sleep(1000);
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value()).contains("foobar!!");
Assert.isTrue(countDownLatch1.await(5, TimeUnit.SECONDS), "anotherProcess consumer didn't trigger.");
}
finally {
pf.destroy();
}
}
}
@Test
public void testFunctionToConsumerComposition() throws InterruptedException {
SpringApplication app = new SpringApplication(FunctionCompositionConfig2.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=fooFunc|anotherProcess",
"--spring.cloud.stream.bindings.fooFuncanotherProcess-in-0.destination=foo",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foo");
template.sendDefault("foobar!!");
Thread.sleep(1000);
Assert.isTrue(countDownLatch2.await(5, TimeUnit.SECONDS), "anotherProcess consumer didn't trigger.");
}
finally {
pf.destroy();
}
}
}
@Test
public void testBiFunctionToConsumerComposition() throws InterruptedException {
SpringApplication app = new SpringApplication(FunctionCompositionConfig3.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=fooBiFunc|anotherProcess",
"--spring.cloud.stream.bindings.fooBiFuncanotherProcess-in-0.destination=foo",
"--spring.cloud.stream.bindings.fooBiFuncanotherProcess-in-1.destination=foo-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foo");
template.sendDefault("foobar!!");
template.setDefaultTopic("foo-1");
template.sendDefault("another foobar!!");
Thread.sleep(1000);
Assert.isTrue(countDownLatch3.await(5, TimeUnit.SECONDS), "anotherProcess consumer didn't trigger.");
}
finally {
pf.destroy();
}
}
}
@Test
public void testChainedFunctionsAsComposed() throws InterruptedException {
SpringApplication app = new SpringApplication(FunctionCompositionConfig4.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.applicationId=my-app-id",
"--spring.cloud.stream.function.definition=fooBiFunc|anotherFooFunc|yetAnotherFooFunc|lastFunctionInChain",
"--spring.cloud.stream.function.bindings.fooBiFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-0=input1",
"--spring.cloud.stream.function.bindings.fooBiFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-1=input2",
"--spring.cloud.stream.function.bindings.fooBiFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-out-0=output",
"--spring.cloud.stream.bindings.input1.destination=my-foo-1",
"--spring.cloud.stream.bindings.input2.destination=my-foo-2",
"--spring.cloud.stream.bindings.output.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<String, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("my-foo-2");
template.sendDefault("foo-1", "foo2");
template.setDefaultTopic("my-foo-1");
template.sendDefault("foo-1", "foo1");
Thread.sleep(1000);
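// fooBiFunc joins "foo1" with "foo2", then each composed function appends its suffix in order, yielding the value asserted below.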
final ConsumerRecords<String, String> records = KafkaTestUtils.getRecords(consumer);
assertThat(records.iterator().hasNext()).isTrue();
assertThat(records.iterator().next().value())
.isEqualTo("foo1foo2From-anotherFooFuncFrom-yetAnotherFooFuncFrom-lastFunctionInChain");
}
finally {
pf.destroy();
}
}
}
@Test
public void testFirstFunctionCurriedThenComposeWithOtherFunctions() throws InterruptedException {
SpringApplication app = new SpringApplication(FunctionCompositionConfig5.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.applicationId=my-app-id-xyz",
"--spring.cloud.stream.function.definition=curriedFunc|anotherFooFunc|yetAnotherFooFunc|lastFunctionInChain",
"--spring.cloud.stream.function.bindings.curriedFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-0=input1",
"--spring.cloud.stream.function.bindings.curriedFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-in-1=input2",
"--spring.cloud.stream.function.bindings.curriedFuncanotherFooFuncyetAnotherFooFunclastFunctionInChain-out-0=output",
"--spring.cloud.stream.bindings.input1.destination=my-foo-1",
"--spring.cloud.stream.bindings.input2.destination=my-foo-2",
"--spring.cloud.stream.bindings.output.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<String, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("my-foo-2");
template.sendDefault("foo-1", "foo2");
Thread.sleep(1000);
template.setDefaultTopic("my-foo-1");
template.sendDefault("foo-1", "foo1");
Thread.sleep(1000);
final ConsumerRecords<String, String> records = KafkaTestUtils.getRecords(consumer);
assertThat(records.iterator().hasNext()).isTrue();
assertThat(records.iterator().next().value())
.isEqualTo("foo1foo2From-anotherFooFuncFrom-yetAnotherFooFuncFrom-lastFunctionInChain");
}
finally {
pf.destroy();
}
}
}
@Test
public void testFunctionToConsumerCompositionWithFunctionProducesKTable() throws InterruptedException {
SpringApplication app = new SpringApplication(FunctionCompositionConfig6.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=fooFunc|anotherProcess",
"--spring.cloud.stream.bindings.fooFuncanotherProcess-in-0.destination=foo",
"--spring.cloud.stream.bindings.fooFuncanotherProcess-out-0.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<String, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foo");
template.sendDefault("foo", "foobar!!");
Thread.sleep(1000);
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value()).contains("foobar!!");
}
finally {
pf.destroy();
}
}
}
@EnableAutoConfiguration
public static class FunctionCompositionConfig1 {
@Bean
public Function<KStream<String, String>, KStream<String, String>> fooFunc() {
return input -> input.peek((s, s2) -> {
System.out.println("hello: " + s2);
});
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> anotherFooFunc() {
return input -> input.peek((s, s2) -> System.out.println("hello Foo: " + s2));
}
@Bean
public java.util.function.Consumer<KStream<String, String>> anotherProcess() {
return c -> c.foreach((s, s2) -> {
System.out.println("s2s2s2::" + s2);
countDownLatch1.countDown();
});
}
}
@EnableAutoConfiguration
public static class FunctionCompositionConfig2 {
@Bean
public Function<KStream<String, String>, KStream<String, String>> fooFunc() {
return input -> input.peek((s, s2) -> {
System.out.println("hello: " + s2);
});
}
@Bean
public java.util.function.Consumer<KStream<String, String>> anotherProcess() {
return c -> c.foreach((s, s2) -> {
System.out.println("s2s2s2::" + s2);
countDownLatch2.countDown();
});
}
}
@EnableAutoConfiguration
public static class FunctionCompositionConfig3 {
@Bean
public BiFunction<KStream<String, String>, KStream<String, String>, KStream<String, String>> fooBiFunc() {
return KStream::merge;
}
@Bean
public java.util.function.Consumer<KStream<String, String>> anotherProcess() {
return c -> c.foreach((s, s2) -> {
System.out.println("s2s2s2::" + s2);
countDownLatch3.countDown();
});
}
}
@EnableAutoConfiguration
public static class FunctionCompositionConfig4 {
@Bean
public BiFunction<KStream<String, String>, KTable<String, String>, KStream<String, String>> fooBiFunc() {
return (a, b) -> a.join(b, (value1, value2) -> value1 + value2);
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> anotherFooFunc() {
return input -> input.mapValues(value -> value + "From-anotherFooFunc");
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> yetAnotherFooFunc() {
return input -> input.mapValues(value -> value + "From-yetAnotherFooFunc");
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> lastFunctionInChain() {
return input -> input.mapValues(value -> value + "From-lastFunctionInChain");
}
}
@EnableAutoConfiguration
public static class FunctionCompositionConfig5 {
@Bean
public Function<KStream<String, String>, Function<KTable<String, String>, KStream<String, String>>> curriedFunc() {
return a -> b ->
a.join(b, (value1, value2) -> value1 + value2);
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> anotherFooFunc() {
return input -> input.mapValues(value -> value + "From-anotherFooFunc");
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> yetAnotherFooFunc() {
return input -> input.mapValues(value -> value + "From-yetAnotherFooFunc");
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> lastFunctionInChain() {
return input -> input.mapValues(value -> value + "From-lastFunctionInChain");
}
}
@EnableAutoConfiguration
public static class FunctionCompositionConfig6 {
@Bean
public Function<KStream<String, String>, KTable<String, String>> fooFunc() {
return ks -> {
ks.foreach(new ForeachAction<String, String>() {
@Override
public void apply(String key, String value) {
System.out.println();
}
});
return ks.toTable();
};
}
@Bean
public Function<KTable<String, String>, KStream<String, String>> anotherProcess() {
return KTable::toStream;
}
}
}
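The composition tests above all follow the same pattern; distilled to its essentials, here is a hedged sketch of a composed Kafka Streams function pair. The bean names uppercase and exclaim are illustrative; the composed binding naming follows the tests.
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Bean;

@EnableAutoConfiguration
public class CompositionSketch {

    // With --spring.cloud.stream.function.definition=uppercase|exclaim the binder composes both
    // beans into a single processor; the composed binding names concatenate the function names,
    // e.g. uppercaseexclaim-in-0 / uppercaseexclaim-out-0, as in the tests above.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> uppercase() {
        return input -> input.mapValues(v -> v.toUpperCase());
    }

    @Bean
    public Function<KStream<String, String>, KStream<String, String>> exclaim() {
        return input -> input.mapValues(v -> v + "!");
    }
}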

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2019 the original author or authors.
* Copyright 2017-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -14,36 +14,46 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.mockito.Mockito;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -51,13 +61,15 @@ import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.internal.verification.VerificationModeFactory.times;
/**
* @author Soby Chacko
* @author Gary Russell
* @author Nico Pommerening
*/
public class KafkaStreamsInteractiveQueryIntegrationTests {
@@ -87,11 +99,71 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
@Test
public void testKstreamBinderWithPojoInputAndStringOutput() throws Exception {
public void testStateStoreRetrievalRetry() {
StreamsBuilderFactoryBean mock = Mockito.mock(StreamsBuilderFactoryBean.class);
KafkaStreams mockKafkaStreams = Mockito.mock(KafkaStreams.class);
Mockito.when(mock.getKafkaStreams()).thenReturn(mockKafkaStreams);
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
kafkaStreamsRegistry.registerKafkaStreams(mock);
Mockito.when(mock.isRunning()).thenReturn(true);
Properties mockProperties = new Properties();
mockProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "fooApp");
Mockito.when(mock.getStreamsConfiguration()).thenReturn(mockProperties);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);
InteractiveQueryService interactiveQueryService = new InteractiveQueryService(kafkaStreamsRegistry,
binderConfigurationProperties);
QueryableStoreType<ReadOnlyKeyValueStore<Object, Object>> storeType = QueryableStoreTypes.keyValueStore();
try {
interactiveQueryService.getQueryableStore("foo", storeType);
}
catch (Exception ignored) {
}
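// The mocked KafkaStreams never yields the store, so every attempt throws; with maxAttempts=3 the store lookup must run exactly three times.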
Mockito.verify(mockKafkaStreams, times(3))
.store(StoreQueryParameters.fromNameAndType("foo", storeType));
}
@Test
public void testStateStoreRetrievalRetryForHostInfoService() {
StreamsBuilderFactoryBean mock = Mockito.mock(StreamsBuilderFactoryBean.class);
KafkaStreams mockKafkaStreams = Mockito.mock(KafkaStreams.class);
Mockito.when(mock.getKafkaStreams()).thenReturn(mockKafkaStreams);
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
kafkaStreamsRegistry.registerKafkaStreams(mock);
Mockito.when(mock.isRunning()).thenReturn(true);
Properties mockProperties = new Properties();
mockProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "foobarApp-123");
Mockito.when(mock.getStreamsConfiguration()).thenReturn(mockProperties);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);
InteractiveQueryService interactiveQueryService = new InteractiveQueryService(kafkaStreamsRegistry,
binderConfigurationProperties);
QueryableStoreType<ReadOnlyKeyValueStore<Object, Object>> storeType = QueryableStoreTypes.keyValueStore();
final StringSerializer serializer = new StringSerializer();
try {
interactiveQueryService.getHostInfo("foo", "foobarApp-key", serializer);
}
catch (Exception ignored) {
}
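// As above, each of the three retry attempts should issue exactly one queryMetadataForKey call.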
Mockito.verify(mockKafkaStreams, times(3))
.queryMetadataForKey("foo", "foobarApp-key", serializer);
}
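// Hedged usage note (illustrative, not part of this test class): an application opting into the
// retry support verified above would set, per the configuration properties exercised here,
// something like
//   spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts=3
//   spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod=1000
// after which calls such as
//   interactiveQueryService.getQueryableStore("store-name", QueryableStoreTypes.keyValueStore())
// retry while the underlying KafkaStreams instance is still initializing.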
@Test
public void testKstreamBinderWithPojoInputAndStringOutput() {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.function.bindings.process-out-0=output",
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
@@ -103,9 +175,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.application.server"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidateFoo(context);
}
@@ -114,8 +184,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
private void receiveAndValidateFoo(ConfigurableApplicationContext context) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
@@ -134,31 +203,48 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
InteractiveQueryService interactiveQueryService = context
.getBean(InteractiveQueryService.class);
HostInfo currentHostInfo = interactiveQueryService.getCurrentHostInfo();
assertThat(currentHostInfo.host() + ":" + currentHostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
final KeyQueryMetadata keyQueryMetadata = interactiveQueryService.getKeyQueryMetadata("prod-id-count-store",
123, new IntegerSerializer());
final HostInfo activeHost = keyQueryMetadata.getActiveHost();
assertThat(activeHost.host() + ":" + activeHost.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
final KafkaStreams kafkaStreams = interactiveQueryService.getKafkaStreams("prod-id-count-store",
123, new IntegerSerializer());
assertThat(kafkaStreams).isNotNull();
assertThat(interactiveQueryService.getKafkaStreams("non-existent-store",
123, new IntegerSerializer())).isNull();
HostInfo hostInfo = interactiveQueryService.getHostInfo("prod-id-count-store",
123, new IntegerSerializer());
assertThat(hostInfo.host() + ":" + hostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfoFoo = interactiveQueryService
.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer());
assertThat(hostInfoFoo).isNull();
assertThatThrownBy(() -> interactiveQueryService
.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer()))
.isInstanceOf(IllegalStateException.class)
.hasMessageContaining("Error when retrieving state store.");
final List<HostInfo> hostInfos = interactiveQueryService.getAllHostsInfo("prod-id-count-store");
assertThat(hostInfos.size()).isEqualTo(1);
final HostInfo hostInfo1 = hostInfos.get(0);
assertThat(hostInfo1.host() + ":" + hostInfo1.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@StreamListener("input")
@SendTo("output")
@SuppressWarnings("deprecation")
public KStream<?, String> process(KStream<Object, Product> input) {
@Bean
public Function<KStream<Object, Product>, KStream<?, String>> process() {
return input.filter((key, product) -> product.getId() == 123)
return input -> input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value.id, value))
.groupByKey(Serialized.with(new Serdes.IntegerSerde(),
.groupByKey(Grouped.with(new Serdes.IntegerSerde(),
new JsonSerde<>(Product.class)))
.count(Materialized.as("prod-id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(null,
@@ -170,6 +256,11 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
return new Foo(interactiveQueryService);
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
static class Foo {
InteractiveQueryService interactiveQueryService;
@@ -187,7 +278,6 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
}
}
static class Product {

View File

@@ -0,0 +1,248 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Field;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import org.springframework.util.ReflectionUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class MultipleFunctionsInSameAppTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"coffee", "electronics");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
private static final CountDownLatch countDownLatch = new CountDownLatch(2);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("purchase-groups", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "coffee", "electronics");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
@SuppressWarnings("unchecked")
public void testMultiFunctionsInSameApp() throws InterruptedException {
SpringApplication app = new SpringApplication(MultipleFunctionsInSameApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process;analyze;anotherProcess;yetAnotherProcess",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.bindings.process-out-1.destination=electronics",
"--spring.cloud.stream.bindings.analyze-in-0.destination=coffee",
"--spring.cloud.stream.bindings.analyze-in-1.destination=electronics",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.applicationId=analyze-id-0",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.bindings.process-in-0.consumer.concurrency=2",
"--spring.cloud.stream.bindings.analyze-in-0.consumer.concurrency=1",
"--spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads=3",
"--spring.cloud.stream.kafka.streams.binder.functions.process.configuration.client.id=process-client",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.configuration.client.id=analyze-client",
"--spring.cloud.stream.kafka.streams.binder.functions.anotherProcess.configuration.client.id=anotherProcess-client",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("purchases", "coffee", "electronics");
StreamsBuilderFactoryBean processStreamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean analyzeStreamsBuilderFactoryBean = context
.getBean("&stream-builder-analyze", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean anotherProcessStreamsBuilderFactoryBean = context
.getBean("&stream-builder-anotherProcess", StreamsBuilderFactoryBean.class);
final Properties processStreamsConfiguration = processStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties analyzeStreamsConfiguration = analyzeStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties anotherProcessStreamsConfiguration = anotherProcessStreamsBuilderFactoryBean.getStreamsConfiguration();
assertThat(processStreamsConfiguration.getProperty("client.id")).isEqualTo("process-client");
assertThat(analyzeStreamsConfiguration.getProperty("client.id")).isEqualTo("analyze-client");
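// Binding-level consumer.concurrency maps onto num.stream.threads for that function; anotherProcess has no override, so the binder-wide configuration value of 3 applies.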
Integer concurrency = (Integer) processStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(2);
concurrency = (Integer) analyzeStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(1);
assertThat(anotherProcessStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG)).isEqualTo("3");
final KafkaStreamsBindingInformationCatalogue catalogue = context.getBean(KafkaStreamsBindingInformationCatalogue.class);
Field field = ReflectionUtils.findField(KafkaStreamsBindingInformationCatalogue.class, "outboundKStreamResolvables", Map.class);
ReflectionUtils.makeAccessible(field);
final Map<Object, ResolvableType> outboundKStreamResolvables = (Map<Object, ResolvableType>) ReflectionUtils.getField(field, catalogue);
// There are two functions with return types, one of which returns a KStream array bound to 2 bindings,
// so assert that the catalogue contains outbound type information for all 3 bindings.
assertThat(outboundKStreamResolvables.size()).isEqualTo(3);
}
}
@Test
public void testMultiFunctionsInSameAppWithMultiBinders() throws Exception {
SpringApplication app = new SpringApplication(MultipleFunctionsInSameApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process;analyze",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.startOffset=latest",
"--spring.cloud.stream.bindings.process-in-0.binder=kafka1",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.bindings.process-out-0.binder=kafka1",
"--spring.cloud.stream.bindings.process-out-1.destination=electronics",
"--spring.cloud.stream.bindings.process-out-1.binder=kafka1",
"--spring.cloud.stream.bindings.analyze-in-0.destination=coffee",
"--spring.cloud.stream.bindings.analyze-in-0.binder=kafka2",
"--spring.cloud.stream.bindings.analyze-in-1.destination=electronics",
"--spring.cloud.stream.bindings.analyze-in-1.binder=kafka2",
"--spring.cloud.stream.bindings.analyze-in-0.consumer.concurrency=2",
"--spring.cloud.stream.binders.kafka1.type=kstream",
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.applicationId=my-app-1",
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.configuration.client.id=process-client",
"--spring.cloud.stream.binders.kafka2.type=kstream",
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.applicationId=my-app-2",
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.configuration.client.id=analyze-client")) {
Thread.sleep(1000);
receiveAndValidate("purchases", "coffee", "electronics");
StreamsBuilderFactoryBean processStreamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean analyzeStreamsBuilderFactoryBean = context
.getBean("&stream-builder-analyze", StreamsBuilderFactoryBean.class);
final Properties processStreamsConfiguration = processStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties analyzeStreamsConfiguration = analyzeStreamsBuilderFactoryBean.getStreamsConfiguration();
assertThat(processStreamsConfiguration.getProperty("application.id")).isEqualTo("my-app-1");
assertThat(analyzeStreamsConfiguration.getProperty("application.id")).isEqualTo("my-app-2");
assertThat(processStreamsConfiguration.getProperty("client.id")).isEqualTo("process-client");
assertThat(analyzeStreamsConfiguration.getProperty("client.id")).isEqualTo("analyze-client");
Integer concurrency = (Integer) analyzeStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(2);
concurrency = (Integer) processStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isNull(); // no override, so Kafka Streams defaults to 1.
}
}
private void receiveAndValidate(String in, String... out) throws InterruptedException {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic(in);
template.sendDefault("coffee");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, out[0]);
assertThat(cr.value().contains("coffee")).isTrue();
template.sendDefault("electronics");
cr = KafkaTestUtils.getSingleRecord(consumer, out[1]);
assertThat(cr.value().contains("electronics")).isTrue();
Assert.isTrue(countDownLatch.await(5, TimeUnit.SECONDS), "Analyze (BiConsumer) method didn't receive all the expected records");
}
finally {
pf.destroy();
}
}
@EnableAutoConfiguration
public static class MultipleFunctionsInSameApp {
@Bean
public Function<KStream<String, String>, KStream<String, String>[]> process() {
return input -> input.branch(
(s, p) -> p.equalsIgnoreCase("coffee"),
(s, p) -> p.equalsIgnoreCase("electronics"));
}
@Bean
public Function<KStream<String, String>, KStream<String, Long>> yetAnotherProcess() {
return input -> input.map((k, v) -> new KeyValue<>("foo", 1L));
}
@Bean
public BiConsumer<KStream<String, String>, KStream<String, String>> analyze() {
return (coffee, electronics) -> {
coffee.foreach((s, p) -> countDownLatch.countDown());
electronics.foreach((s, p) -> countDownLatch.countDown());
};
}
@Bean
public java.util.function.Consumer<KStream<String, String>> anotherProcess() {
return c -> {
};
}
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,42 +16,77 @@
package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
import java.util.Map;
import java.util.Properties;
import java.util.function.Consumer;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.type.TypeFactory;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.security.JaasUtils;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.KeyValueSerdeResolver;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
/**
* @author Soby Chacko
* @author Eduard Domínguez
*/
public class KafkaStreamsBinderBootstrapTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Before
public void before() {
System.clearProperty(JaasUtils.JAVA_LOGIN_CONFIG_PARAM);
}
@Test
public void testKafkaStreamsBinderWithCustomEnvironmentCanStart() {
public void testKStreamBinderWithCustomEnvironmentCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.default.consumer.application-id"
+ "=testKafkaStreamsBinderWithCustomEnvironmentCanStart",
"--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.bindings.input.binder=kBind1",
"--spring.cloud.stream.binders.kBind1.type=kstream",
"--spring.cloud.stream.binders.kBind1.environment"
SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.function.definition=input1;input2;input3",
"--spring.cloud.stream.kafka.streams.bindings.input1-in-0.consumer.application-id"
+ "=testKStreamBinderWithCustomEnvironmentCanStart",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.application-id"
+ "=testKStreamBinderWithCustomEnvironmentCanStart-foo",
"--spring.cloud.stream.kafka.streams.bindings.input3-in-0.consumer.application-id"
+ "=testKStreamBinderWithCustomEnvironmentCanStart-foobar",
"--spring.cloud.stream.bindings.input1-in-0.destination=foo",
"--spring.cloud.stream.bindings.input1-in-0.binder=kstreamBinder",
"--spring.cloud.stream.binders.kstreamBinder.type=kstream",
"--spring.cloud.stream.binders.kstreamBinder.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kBind1.environment.spring"
+ ".cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
"--spring.cloud.stream.bindings.input2-in-0.destination=bar",
"--spring.cloud.stream.bindings.input2-in-0.binder=ktableBinder",
"--spring.cloud.stream.binders.ktableBinder.type=ktable",
"--spring.cloud.stream.binders.ktableBinder.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
"--spring.cloud.stream.bindings.input3-in-0.destination=foobar",
"--spring.cloud.stream.bindings.input3-in-0.binder=globalktableBinder",
"--spring.cloud.stream.binders.globalktableBinder.type=globalktable",
"--spring.cloud.stream.binders.globalktableBinder.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}
@@ -59,34 +94,97 @@ public class KafkaStreamsBinderBootstrapTest {
@Test
public void testKafkaStreamsBinderWithStandardConfigurationCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.default.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.bindings.input.destination=foo",
SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.function.definition=input1;input2;input3",
"--spring.cloud.stream.kafka.streams.bindings.input1-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foo",
"--spring.cloud.stream.kafka.streams.bindings.input3-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}
@Test
@SuppressWarnings("unchecked")
public void testStreamConfigGlobalProperties_GH1149() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.function.definition=input1;input2;input3",
"--spring.cloud.stream.kafka.streams.bindings.input1-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foo",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.configuration.spring.json.value.type.method=" + this.getClass().getName() + ".determineType",
"--spring.cloud.stream.kafka.streams.bindings.input3-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
// Make sure that the global stream configs do not contain the binding-specific config set on the second function.
assertThat(streamConfigGlobalProperties.containsKey("spring.json.value.type.method")).isFalse();
// Make sure that only the input2 function gets the binding-specific property set on it.
final StreamsBuilderFactoryBean input1SBFB = applicationContext.getBean("&stream-builder-input1", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration1 = input1SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration1.containsKey("spring.json.value.type.method")).isFalse();
final StreamsBuilderFactoryBean input2SBFB = applicationContext.getBean("&stream-builder-input2", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration2 = input2SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration2.containsKey("spring.json.value.type.method")).isTrue();
final StreamsBuilderFactoryBean input3SBFB = applicationContext.getBean("&stream-builder-input3", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration3 = input3SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration3.containsKey("spring.json.value.type.method")).isFalse();
applicationContext.getBean(KeyValueSerdeResolver.class);
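// Reflectively inspect the built topology: input2's value deserializer must carry the configured
// spring.json.value.type.method, while the key and value type mappers keep their default class-id field names.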
String configuredSerdeTypeResolver = (String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
.getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeResolver.arg$2");
assertThat(this.getClass().getName() + ".determineType").isEqualTo(configuredSerdeTypeResolver);
String configuredKeyDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
.getPropertyValue("taskTopology.processorNodes[0].keyDeserializer.typeMapper.classIdFieldName"));
assertThat(DefaultJackson2JavaTypeMapper.KEY_DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredKeyDeserializerFieldName);
String configuredValueDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
.getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeMapper.classIdFieldName"));
assertThat(DefaultJackson2JavaTypeMapper.DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredValueDeserializerFieldName);
applicationContext.close();
}
public static JavaType determineType(byte[] data, Headers headers) {
return TypeFactory.defaultInstance().constructParametricType(Map.class, String.class, String.class);
}
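// Note: "spring.json.value.type.method" used above is Spring Kafka's
// JsonDeserializer.VALUE_TYPE_METHOD property. It points the JSON deserializer at a
// static method with exactly the signature of determineType(byte[], Headers), letting
// the target JavaType be chosen per record. A minimal sketch of the same wiring outside
// the binder (class and method names here are illustrative):
//   consumerProps.put(JsonDeserializer.VALUE_TYPE_METHOD, "com.example.MyTypes.determineType");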
@SpringBootApplication
static class SimpleKafkaStreamsApplication {
@Bean
public Consumer<KStream<Object, String>> input1() {
return s -> {
// No-op consumer
};
}
@Bean
public Consumer<KTable<Map<String, String>, Map<String, String>>> input2() {
return s -> {
// No-op consumer
};
}
@Bean
public Consumer<GlobalKTable<Object, String>> input3() {
return s -> {
// No-op consumer
};
}
}
}

View File

@@ -0,0 +1,93 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
import java.util.function.Consumer;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.common.security.JaasUtils;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderJaasInitTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
private static String JAVA_LOGIN_CONFIG_PARAM_VALUE;
@BeforeClass
public static void beforeAll() {
JAVA_LOGIN_CONFIG_PARAM_VALUE = System.getProperty(JaasUtils.JAVA_LOGIN_CONFIG_PARAM);
System.clearProperty(JaasUtils.JAVA_LOGIN_CONFIG_PARAM);
}
@AfterClass
public static void afterAll() {
if (JAVA_LOGIN_CONFIG_PARAM_VALUE != null) {
System.setProperty(JaasUtils.JAVA_LOGIN_CONFIG_PARAM, JAVA_LOGIN_CONFIG_PARAM_VALUE);
}
}
@Test
public void testKafkaStreamsBinderJaasInitialization() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
KafkaStreamsBinderJaasInitTestsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.function.definition=foo",
"--spring.cloud.stream.kafka.streams.bindings.foo-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderJaasInitialization-jaas-id",
"--spring.cloud.stream.kafka.streams.binder.jaas.loginModule=org.apache.kafka.common.security.plain.PlainLoginModule",
"--spring.cloud.stream.kafka.streams.binder.jaas.options.username=foo",
"--spring.cloud.stream.kafka.streams.binder.jaas.options.password=bar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
javax.security.auth.login.Configuration configuration = javax.security.auth.login.Configuration
.getConfiguration();
final AppConfigurationEntry[] kafkaConfiguration = configuration
.getAppConfigurationEntry("KafkaClient");
assertThat(kafkaConfiguration).hasSize(1);
assertThat(kafkaConfiguration[0].getOptions().get("username")).isEqualTo("foo");
assertThat(kafkaConfiguration[0].getOptions().get("password")).isEqualTo("bar");
assertThat(kafkaConfiguration[0].getControlFlag())
.isEqualTo(AppConfigurationEntry.LoginModuleControlFlag.REQUIRED);
applicationContext.close();
}
@SpringBootApplication
static class KafkaStreamsBinderJaasInitTestsApplication {
@Bean
public Consumer<KStream<Object, String>> foo() {
return s -> {
// No-op consumer
};
}
}
}
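For orientation, the binder's jaas.loginModule and jaas.options.* properties exercised above correspond to the standard Kafka client SASL/JAAS setting. A minimal sketch of the equivalent raw client configuration, assuming the same PlainLoginModule and the test's username/password options (the surrounding class is illustrative, not part of the change set):
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.SaslConfigs;
public class JaasClientConfigSketch {
public static Map<String, Object> clientProps() {
Map<String, Object> props = new HashMap<>();
// One inline JAAS config line instead of the binder's jaas.* convenience properties.
props.put(SaslConfigs.SASL_JAAS_CONFIG,
"org.apache.kafka.common.security.plain.PlainLoginModule required "
+ "username=\"foo\" password=\"bar\";");
return props;
}
}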

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
@@ -37,11 +38,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -53,6 +49,9 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class KafkaStreamsBinderWordCountBranchesFunctionTests {
@ClassRule
@@ -84,25 +83,23 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.function.bindings.process-out-0=output1",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output1.contentType=application/json",
"--spring.cloud.stream.function.bindings.process-out-1=output2",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output2.contentType=application/json",
"--spring.cloud.stream.function.bindings.process-out-2=output3",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.bindings.output3.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId" +
"--spring.cloud.stream.kafka.streams.binder.applicationId" +
"=KafkaStreamsBinderWordCountBranchesFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
@@ -182,43 +179,35 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
}
}
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@Bean
@SuppressWarnings("unchecked")
@SuppressWarnings({"unchecked"})
public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> {
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.split()
.branch(isEnglish)
.branch(isFrench)
.branch(isSpanish)
.noDefaultBranch();
return stringKStreamMap.values().toArray(new KStream[0]);
};
}
}
}
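The rewritten processor above reflects the newer Kafka Streams branching API (split()/BranchedKStream, available since Kafka Streams 2.8), whose terminal noDefaultBranch() returns a Map of branches keyed by generated names. A minimal sketch showing how those map keys can be controlled; the class, prefix, and branch names are illustrative:
import java.util.Map;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;
import org.apache.kafka.streams.kstream.Predicate;
public class BranchNamingSketch {
public static Map<String, KStream<Object, String>> branches(
KStream<Object, String> stream,
Predicate<Object, String> isEnglish,
Predicate<Object, String> isFrench) {
// split(Named.as(...)) sets the key prefix; Branched.as(...) names each branch,
// so the returned map keys are "lang-english" and "lang-french".
return stream.split(Named.as("lang-"))
.branch(isEnglish, Branched.as("english"))
.branch(isFrench, Branched.as("french"))
.noDefaultBranch();
}
}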

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,40 +16,58 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.StreamsBuilderFactoryManager;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;
import org.springframework.cloud.stream.binding.InputBindingLifecycle;
import org.springframework.cloud.stream.binding.OutputBindingLifecycle;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.Lifecycle;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
@@ -57,20 +75,23 @@ public class KafkaStreamsBinderWordCountFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
"counts", "counts-1", "counts-2", "counts-5", "counts-6");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
private final static CountDownLatch LATCH = new CountDownLatch(1);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2", "counts-5", "counts-6");
}
@AfterClass
@@ -79,34 +100,236 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
@Test
@SuppressWarnings("unchecked")
public void testBasicKStreamTopologyExecution() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.bindings.process-out-0.destination=counts",
"--spring.cloud.stream.kafka.streams.binder.application-id=testKstreamWordCountFunction",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.consumerProperties.request.timeout.ms=29000", //for testing ...binder.consumerProperties
"--spring.cloud.stream.kafka.streams.binder.consumerProperties.consumer.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer",
"--spring.cloud.stream.kafka.streams.binder.producerProperties.max.block.ms=90000", //for testing ...binder.producerProperties
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.consumedAs=custom-consumer",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.producedAs=custom-producer",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
final MeterRegistry meterRegistry = context.getBean(MeterRegistry.class);
Thread.sleep(100);
assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.stream.thread.poll.records.max"))).isTrue();
assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.consumer.network.io.total"))).isTrue();
assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.producer.record.send.total"))).isTrue();
assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.admin.client.network.io.total"))).isTrue();
Assert.isTrue(LATCH.await(5, TimeUnit.SECONDS), "Failed to call customizers");
//Testing topology endpoint
final KafkaStreamsRegistry kafkaStreamsRegistry = context.getBean(KafkaStreamsRegistry.class);
final KafkaStreamsTopologyEndpoint kafkaStreamsTopologyEndpoint = new KafkaStreamsTopologyEndpoint(kafkaStreamsRegistry);
final List<String> topologies = kafkaStreamsTopologyEndpoint.kafkaStreamsTopologies();
final String topology1 = topologies.get(0);
final String topology2 = kafkaStreamsTopologyEndpoint.kafkaStreamsTopology("testKstreamWordCountFunction");
assertThat(topology1).isNotEmpty();
assertThat(topology1).isEqualTo(topology2);
assertThat(topology1.contains("Source: custom-consumer")).isTrue();
assertThat(topology1.contains("Sink: custom-producer")).isTrue();
//verify that ...binder.consumerProperties and ...binder.producerProperties work.
Map<String, Object> streamConfigGlobalProperties = (Map<String, Object>) context.getBean("streamConfigGlobalProperties");
assertThat(streamConfigGlobalProperties.get("consumer.request.timeout.ms")).isEqualTo("29000");
assertThat(streamConfigGlobalProperties.get("consumer.value.deserializer")).isEqualTo("org.apache.kafka.common.serialization.StringDeserializer");
assertThat(streamConfigGlobalProperties.get("producer.max.block.ms")).isEqualTo("90000");
InputBindingLifecycle inputBindingLifecycle = context.getBean(InputBindingLifecycle.class);
final Collection<Binding<Object>> inputBindings = (Collection<Binding<Object>>) new DirectFieldAccessor(inputBindingLifecycle)
.getPropertyValue("inputBindings");
assertThat(inputBindings).isNotNull();
final Optional<Binding<Object>> theOnlyInputBinding = inputBindings.stream().findFirst();
assertThat(theOnlyInputBinding.isPresent()).isTrue();
final DefaultBinding<Object> objectBinding = (DefaultBinding<Object>) theOnlyInputBinding.get();
assertThat(objectBinding.getBindingName()).isEqualTo("process-in-0");
final Lifecycle lifecycle = (Lifecycle) new DirectFieldAccessor(objectBinding).getPropertyValue("lifecycle");
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean(StreamsBuilderFactoryBean.class);
assertThat(lifecycle).isEqualTo(streamsBuilderFactoryBean);
OutputBindingLifecycle outputBindingLifecycle = context.getBean(OutputBindingLifecycle.class);
final Collection<Binding<Object>> outputBindings = (Collection<Binding<Object>>) new DirectFieldAccessor(outputBindingLifecycle)
.getPropertyValue("outputBindings");
assertThat(outputBindings).isNotNull();
final Optional<Binding<Object>> theOnlyOutputBinding = outputBindings.stream().findFirst();
assertThat(theOnlyOutputBinding.isPresent()).isTrue();
final DefaultBinding<Object> objectBinding1 = (DefaultBinding<Object>) theOnlyOutputBinding.get();
assertThat(objectBinding1.getBindingName()).isEqualTo("process-out-0");
final Lifecycle lifecycle1 = (Lifecycle) new DirectFieldAccessor(objectBinding1).getPropertyValue("lifecycle");
assertThat(lifecycle1).isEqualTo(streamsBuilderFactoryBean);
}
}
@Test
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer() {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.bindings.process-in-0.destination=words-5",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-5",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-5", "counts-5");
}
}
@Test
public void testKstreamWordCountFunctionWithCustomProducerStreamPartitioner() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.application-id=testKstreamWordCountFunctionWithCustomProducerStreamPartitioner",
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName" +
"=streamPartitioner",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-2", "counts-2");
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-2");
template.sendDefault("foo");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"foo\",\"count\":1")).isTrue();
assertThat(cr.partition() == 0).isTrue();
template.sendDefault("bar");
cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"bar\",\"count\":1")).isTrue();
assertThat(cr.partition() == 1).isTrue();
}
finally {
pf.destroy();
}
}
}
@Test
public void testKstreamBinderAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.kafka.streams.auto-startup=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-3",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-3",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryManager streamsBuilderFactoryManager = context.getBean(StreamsBuilderFactoryManager.class);
assertThat(streamsBuilderFactoryManager.isAutoStartup()).isFalse();
assertThat(streamsBuilderFactoryManager.isRunning()).isFalse();
}
}
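// Contrast with the next test: the global spring.kafka.streams.auto-startup=false flag
// above keeps the StreamsBuilderFactoryManager (and therefore every processor) from
// starting, whereas a per-binding ...process-in-0.consumer.auto-startup=false only leaves
// that binding's StreamsBuilderFactoryBean stopped until it is started explicitly.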
@Test
public void testKstreamIndividualBindingAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-4",
"--spring.cloud.stream.bindings.process-in-0.consumer.auto-startup=false",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-4",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean(StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean.isRunning()).isFalse();
streamsBuilderFactoryBean.start();
assertThat(streamsBuilderFactoryBean.isRunning()).isTrue();
}
}
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-6",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-6",
"--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-6");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-6");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
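// Note on the test above: with useNativeEncoding=false the outbound record goes through
// the framework's message-conversion path, and (per issue #774 referenced earlier) a null
// payload is expected to be written through to Kafka as a null record value rather than
// failing serialization -- which is what the null-value assertion verifies.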
private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.setDefaultTopic(in);
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, out);
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
finally {
@@ -164,23 +387,62 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
}
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@Autowired
InteractiveQueryService interactiveQueryService;
@Bean
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(key.key(), new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}
@Bean
public StreamsBuilderFactoryBeanConfigurer customizer() {
return fb -> {
try {
fb.setStateListener((newState, oldState) -> {
});
fb.getObject(); //make sure no exception is thrown at this call.
KafkaStreamsBinderWordCountFunctionTests.LATCH.countDown();
}
catch (Exception e) {
//Nothing to do - when the exception is thrown above, the latch won't be counted down.
}
};
}
@Bean
public StreamPartitioner<String, WordCount> streamPartitioner() {
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
}
}
@EnableAutoConfiguration
static class OutboundNullApplication {
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
return input -> input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foobar-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}

View File

@@ -0,0 +1,347 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.stereotype.Component;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class KafkaStreamsComponentBeansTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"testFunctionComponent-out", "testBiFunctionComponent-out", "testCurriedFunctionWithFunctionTerminal-out");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer1;
private static Consumer<String, String> consumer2;
private static Consumer<String, String> consumer3;
private final static CountDownLatch LATCH_1 = new CountDownLatch(1);
private final static CountDownLatch LATCH_2 = new CountDownLatch(2);
private final static CountDownLatch LATCH_3 = new CountDownLatch(3);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer1 = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "testFunctionComponent-out");
Map<String, Object> consumerProps1 = KafkaTestUtils.consumerProps("group-x", "false",
embeddedKafka);
consumerProps1.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps1.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf1 = new DefaultKafkaConsumerFactory<>(consumerProps1);
consumer2 = cf1.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer2, "testBiFunctionComponent-out");
Map<String, Object> consumerProps2 = KafkaTestUtils.consumerProps("group-y", "false",
embeddedKafka);
consumerProps2.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps2.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf2 = new DefaultKafkaConsumerFactory<>(consumerProps2);
consumer3 = cf2.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer3, "testCurriedFunctionWithFunctionTerminal-out");
}
@AfterClass
public static void tearDown() {
consumer1.close();
consumer2.close();
consumer3.close();
}
@Test
public void testFunctionComponent() {
SpringApplication app = new SpringApplication(FunctionAsComponent.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.foo-in-0.destination=testFunctionComponent-in",
"--spring.cloud.stream.bindings.foo-out-0.destination=testFunctionComponent-out",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("testFunctionComponent-in");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1, "testFunctionComponent-out");
assertThat(cr.value().contains("foobarfoobar")).isTrue();
}
finally {
pf.destroy();
}
}
}
@Test
public void testConsumerComponent() throws Exception {
SpringApplication app = new SpringApplication(ConsumerAsComponent.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.bar-in-0.destination=testConsumerComponent-in",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("testConsumerComponent-in");
template.sendDefault("foobar");
Assert.isTrue(LATCH_1.await(10, TimeUnit.SECONDS), "bar");
}
finally {
pf.destroy();
}
}
}
@Test
public void testBiFunctionComponent() {
SpringApplication app = new SpringApplication(BiFunctionAsComponent.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.bazz-in-0.destination=testBiFunctionComponent-in-0",
"--spring.cloud.stream.bindings.bazz-in-1.destination=testBiFunctionComponent-in-1",
"--spring.cloud.stream.bindings.bazz-out-0.destination=testBiFunctionComponent-out",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("testBiFunctionComponent-in-0");
template.sendDefault("foobar");
template.setDefaultTopic("testBiFunctionComponent-in-1");
template.sendDefault("foobar");
final ConsumerRecords<String, String> records = KafkaTestUtils.getRecords(consumer2, 10_000, 2);
assertThat(records.count()).isEqualTo(2);
records.forEach(stringStringConsumerRecord -> assertThat(stringStringConsumerRecord.value().contains("foobar")).isTrue());
}
finally {
pf.destroy();
}
}
}
@Test
public void testBiConsumerComponent() throws Exception {
SpringApplication app = new SpringApplication(BiConsumerAsComponent.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.buzz-in-0.destination=testBiConsumerComponent-in-0",
"--spring.cloud.stream.bindings.buzz-in-1.destination=testBiConsumerComponent-in-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("testBiConsumerComponent-in-0");
template.sendDefault("foobar");
template.setDefaultTopic("testBiConsumerComponent-in-1");
template.sendDefault("foobar");
Assert.isTrue(LATCH_2.await(10, TimeUnit.SECONDS), "bar");
}
finally {
pf.destroy();
}
}
}
@Test
public void testCurriedFunctionWithConsumerTerminal() throws Exception {
SpringApplication app = new SpringApplication(CurriedFunctionWithConsumerTerminal.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.curriedConsumer-in-0.destination=testCurriedFunctionWithConsumerTerminal-in-0",
"--spring.cloud.stream.bindings.curriedConsumer-in-1.destination=testCurriedFunctionWithConsumerTerminal-in-1",
"--spring.cloud.stream.bindings.curriedConsumer-in-2.destination=testCurriedFunctionWithConsumerTerminal-in-2",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("testCurriedFunctionWithConsumerTerminal-in-0");
template.sendDefault("foobar");
template.setDefaultTopic("testCurriedFunctionWithConsumerTerminal-in-1");
template.sendDefault("foobar");
template.setDefaultTopic("testCurriedFunctionWithConsumerTerminal-in-2");
template.sendDefault("foobar");
Assert.isTrue(LATCH_3.await(10, TimeUnit.SECONDS), "bar");
}
finally {
pf.destroy();
}
}
}
@Test
public void testCurriedFunctionWithFunctionTerminal() {
SpringApplication app = new SpringApplication(CurriedFunctionWithFunctionTerminal.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.curriedFunction-in-0.destination=testCurriedFunctionWithFunctionTerminal-in-0",
"--spring.cloud.stream.bindings.curriedFunction-in-1.destination=testCurriedFunctionWithFunctionTerminal-in-1",
"--spring.cloud.stream.bindings.curriedFunction-in-2.destination=testCurriedFunctionWithFunctionTerminal-in-2",
"--spring.cloud.stream.bindings.curriedFunction-out-0.destination=testCurriedFunctionWithFunctionTerminal-out",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("testCurriedFunctionWithFunctionTerminal-in-0");
template.sendDefault("foobar");
template.setDefaultTopic("testCurriedFunctionWithFunctionTerminal-in-1");
template.sendDefault("foobar");
template.setDefaultTopic("testCurriedFunctionWithFunctionTerminal-in-2");
template.sendDefault("foobar");
final ConsumerRecords<String, String> records = KafkaTestUtils.getRecords(consumer3, 10_000, 3);
assertThat(records.count()).isEqualTo(3);
records.forEach(stringStringConsumerRecord -> assertThat(stringStringConsumerRecord.value().contains("foobar")).isTrue());
}
finally {
pf.destroy();
}
}
}
@Component("foo")
@EnableAutoConfiguration
public static class FunctionAsComponent implements Function<KStream<Integer, String>,
KStream<String, String>> {
@Override
public KStream<String, String> apply(KStream<Integer, String> stringIntegerKStream) {
return stringIntegerKStream.map((integer, s) -> new KeyValue<>(s, s + s));
}
}
@Component("bar")
@EnableAutoConfiguration
public static class ConsumerAsComponent implements java.util.function.Consumer<KStream<Integer, String>> {
@Override
public void accept(KStream<Integer, String> integerStringKStream) {
integerStringKStream.foreach((integer, s) -> LATCH_1.countDown());
}
}
@Component("bazz")
@EnableAutoConfiguration
public static class BiFunctionAsComponent implements BiFunction<KStream<String, String>, KStream<String, String>, KStream<String, String>> {
@Override
public KStream<String, String> apply(KStream<String, String> stringStringKStream, KStream<String, String> stringStringKStream2) {
return stringStringKStream.merge(stringStringKStream2);
}
}
@Component("buzz")
@EnableAutoConfiguration
public static class BiConsumerAsComponent implements BiConsumer<KStream<String, String>, KStream<String, String>> {
@Override
public void accept(KStream<String, String> stringStringKStream, KStream<String, String> stringStringKStream2) {
final KStream<String, String> merged = stringStringKStream.merge(stringStringKStream2);
merged.foreach((s, s2) -> LATCH_2.countDown());
}
}
@Component("curriedConsumer")
@EnableAutoConfiguration
public static class CurriedFunctionWithConsumerTerminal implements Function<KStream<String, String>,
Function<KStream<String, String>,
java.util.function.Consumer<KStream<String, String>>>> {
@Override
public Function<KStream<String, String>, java.util.function.Consumer<KStream<String, String>>> apply(KStream<String, String> stringStringKStream) {
return stringStringKStream1 -> stringStringKStream2 -> {
final KStream<String, String> merge1 = stringStringKStream.merge(stringStringKStream1);
final KStream<String, String> merged2 = merge1.merge(stringStringKStream2);
merged2.foreach((s1, s2) -> LATCH_3.countDown());
};
}
}
@Component("curriedFunction")
@EnableAutoConfiguration
public static class CurriedFunctionWithFunctionTerminal implements Function<KStream<String, String>,
Function<KStream<String, String>,
java.util.function.Function<KStream<String, String>, KStream<String, String>>>> {
@Override
public Function<KStream<String, String>, Function<KStream<String, String>, KStream<String, String>>> apply(KStream<String, String> stringStringKStream) {
return stringStringKStream1 -> stringStringKStream2 -> {
final KStream<String, String> merge1 = stringStringKStream.merge(stringStringKStream1);
return merge1.merge(stringStringKStream2);
};
}
}
}
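A side note on the destinations assumed above: for a function bean, each input (including each level of a curried function) gets a binding named beanName-in-index, and the terminal output gets beanName-out-0 -- hence curriedFunction-in-0/1/2 and curriedFunction-out-0. A minimal sketch of that convention; the bean name "joiner" and its surrounding class are illustrative:
import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
public class CurriedBindingNamesSketch {
// Bindings derived by the binder: joiner-in-0, joiner-in-1, joiner-out-0.
@Bean
public Function<KStream<String, String>,
Function<KStream<String, String>, KStream<String, String>>> joiner() {
return first -> second -> first.merge(second);
}
}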

View File

@@ -16,10 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.ProcessorSupplier;
@@ -33,8 +35,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -60,8 +60,11 @@ public class KafkaStreamsFunctionStateStoreTests {
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.function.definition=biConsumerBean;hello",
"--spring.cloud.stream.bindings.biConsumerBean-in-0.destination=words",
"--spring.cloud.stream.bindings.hello-in-0.destination=words",
"--spring.cloud.stream.kafka.streams.binder.functions.changed.applicationId=testKafkaStreamsFuncionWithMultipleStateStores-123",
"--spring.cloud.stream.kafka.streams.binder.functions.hello.applicationId=testKafkaStreamsFuncionWithMultipleStateStores-456",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -78,38 +81,50 @@ public class KafkaStreamsFunctionStateStoreTests {
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
template.sendDefault(1, "foobar");
Thread.sleep(2000L);
StateStoreTestApplication processorApplication = context
.getBean(StateStoreTestApplication.class);
KeyValueStore<Long, Long> state1 = processorApplication.state1;
assertThat(processorApplication.processed1).isTrue();
assertThat(state1 != null).isTrue();
assertThat(state1.name()).isEqualTo("my-store");
WindowStore<Long, Long> state2 = processorApplication.state2;
assertThat(state2 != null).isTrue();
assertThat(state2.name()).isEqualTo("other-store");
assertThat(state2.persistent()).isTrue();
KeyValueStore<Long, Long> state3 = processorApplication.state3;
assertThat(processorApplication.processed2).isTrue();
assertThat(state3 != null).isTrue();
assertThat(state3.name()).isEqualTo("my-store");
WindowStore<Long, Long> state4 = processorApplication.state4;
assertThat(state4 != null).isTrue();
assertThat(state4.name()).isEqualTo("other-store");
assertThat(state4.persistent()).isTrue();
}
finally {
pf.destroy();
}
}
@EnableAutoConfiguration
public static class StateStoreTestApplication {
KeyValueStore<Long, Long> state1;
WindowStore<Long, Long> state2;
KeyValueStore<Long, Long> state3;
WindowStore<Long, Long> state4;
boolean processed1;
boolean processed2;
@Bean(name = "biConsumerBean")
public java.util.function.BiConsumer<KStream<Object, String>, KStream<Object, String>> process() {
return (input0, input1) ->
input0.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
@@ -119,21 +134,40 @@ public class KafkaStreamsFunctionStateStoreTests {
@Override
public void process(Object key, String value) {
processed1 = true;
}
@Override
public void close() {
if (state1 != null) {
state1.close();
}
if (state2 != null) {
state2.close();
}
}
}, "my-store", "other-store");
}
@Bean
public java.util.function.Consumer<KTable<Object, String>> hello() {
return input -> {
input.toStream().process(() -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
state3 = (KeyValueStore<Long, Long>) context.getStateStore("my-store");
state4 = (WindowStore<Long, Long>) context.getStateStore("other-store");
}
@Override
public void process(Object key, String value) {
processed2 = true;
}
@Override
public void close() {
}
}, "my-store", "other-store");
};
}
@Bean
public StoreBuilder myStore() {
return Stores.keyValueStoreBuilder(
@@ -145,14 +179,9 @@ public class KafkaStreamsFunctionStateStoreTests {
public StoreBuilder otherStore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("other-store",
Duration.ofSeconds(3), Duration.ofSeconds(3), false), Serdes.Long(),
Serdes.Long());
}
}
}

View File

@@ -0,0 +1,216 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.NoSuchBeanDefinitionException;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.StreamRetryTemplate;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Lazy;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import static org.assertj.core.api.AssertionsForClassTypes.assertThatThrownBy;
import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat;
public class KafkaStreamsRetryTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true);
private static final EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private final static CountDownLatch LATCH1 = new CountDownLatch(2);
private final static CountDownLatch LATCH2 = new CountDownLatch(4);
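// The latch counts mirror the retry settings used below: LATCH1 expects 2 process
// invocations (consumer.max-attempts=2 in testRetryTemplatePerBindingOnKStream), and
// LATCH2 expects 4 (SimpleRetryPolicy(4) in fooRetryTemplate allows 4 total attempts).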
@Test
public void testRetryTemplatePerBindingOnKStream() throws Exception {
SpringApplication app = new SpringApplication(RetryTemplatePerConsumerBindingApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.function.definition=process",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.bindings.process-in-0.consumer.max-attempts=2",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testRetryTemplatePerBindingOnKStream",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
sendAndValidate(LATCH1);
}
}
@Test
public void testRetryTemplateOnTableTypes() throws Exception {
SpringApplication app = new SpringApplication(RetryTemplatePerConsumerBindingApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.function.definition=tableTypes",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testRetryTemplateOnTableTypes",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
assertThat(context.getBean("tableTypes-in-0-RetryTemplate", RetryTemplate.class)).isNotNull();
assertThat(context.getBean("tableTypes-in-1-RetryTemplate", RetryTemplate.class)).isNotNull();
}
}
@Test
public void testRetryTemplateBeanProvidedByTheApp() throws Exception {
SpringApplication app = new SpringApplication(CustomRetryTemplateApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.function.definition=process",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.bindings.process-in-0.consumer.retry-template-name=fooRetryTemplate",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testRetryTemplateBeanProvidedByTheApp",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
sendAndValidate(LATCH2);
assertThatThrownBy(() -> context.getBean("process-in-0-RetryTemplate", RetryTemplate.class)).isInstanceOf(NoSuchBeanDefinitionException.class);
}
}
private void sendAndValidate(CountDownLatch latch) throws InterruptedException {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
Assert.isTrue(latch.await(10, TimeUnit.SECONDS), "Foo");
}
finally {
pf.destroy();
}
}
@EnableAutoConfiguration
public static class RetryTemplatePerConsumerBindingApp {
@Bean
public java.util.function.Consumer<KStream<Object, String>> process(@Lazy @Qualifier("process-in-0-RetryTemplate") RetryTemplate retryTemplate) {
return input -> input
.process(() -> new Processor<Object, String>() {
@Override
public void init(ProcessorContext processorContext) {
}
@Override
public void process(Object o, String s) {
retryTemplate.execute(context -> {
LATCH1.countDown();
throw new RuntimeException();
});
}
@Override
public void close() {
}
});
}
@Bean
public BiConsumer<KTable<?, ?>, GlobalKTable<?, ?>> tableTypes() {
return (t, g) -> {
};
}
}
@EnableAutoConfiguration
public static class CustomRetryTemplateApp {
@Bean
@StreamRetryTemplate
RetryTemplate fooRetryTemplate() {
RetryTemplate retryTemplate = new RetryTemplate();
RetryPolicy retryPolicy = new SimpleRetryPolicy(4);
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(1);
retryTemplate.setBackOffPolicy(backOffPolicy);
retryTemplate.setRetryPolicy(retryPolicy);
return retryTemplate;
}
@Bean
public java.util.function.Consumer<KStream<Object, String>> process() {
return input -> input
.process(() -> new Processor<Object, String>() {
@Override
public void init(ProcessorContext processorContext) {
}
@Override
public void process(Object o, String s) {
fooRetryTemplate().execute(context -> {
LATCH2.countDown();
throw new RuntimeException();
});
}
@Override
public void close() {
}
});
}
}
}
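Taken together, these tests pin down the two retry wirings the Kafka Streams binder supports: a binder-registered per-binding bean (here `process-in-0-RetryTemplate`, sized via `max-attempts`), and an application-supplied bean annotated with `@StreamRetryTemplate` and selected through the `retry-template-name` consumer property, in which case the binder registers no template of its own. Below is a minimal application-side sketch of the second wiring; the class and bean names (`RetryingConsumerApp`, `myRetryTemplate`) are illustrative and do not appear in this diff, while the property names mirror the ones the tests above pass to `app.run`.

import java.util.function.Consumer;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.StreamRetryTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

// Illustrative sketch, not part of this diff.
@SpringBootApplication
public class RetryingConsumerApp {

	public static void main(String[] args) {
		SpringApplication.run(RetryingConsumerApp.class, args);
	}

	// Marked with @StreamRetryTemplate so the binder can use it instead of creating
	// its own per-binding template; select it with
	// spring.cloud.stream.bindings.process-in-0.consumer.retry-template-name=myRetryTemplate
	@Bean
	@StreamRetryTemplate
	public RetryTemplate myRetryTemplate() {
		RetryTemplate retryTemplate = new RetryTemplate();
		retryTemplate.setRetryPolicy(new SimpleRetryPolicy(4));
		FixedBackOffPolicy backOff = new FixedBackOffPolicy();
		backOff.setBackOffPeriod(500); // milliseconds between attempts
		retryTemplate.setBackOffPolicy(backOff);
		return retryTemplate;
	}

	@Bean
	public Consumer<KStream<Object, String>> process(RetryTemplate myRetryTemplate) {
		return input -> input.foreach((key, value) ->
				myRetryTemplate.execute(context -> {
					// Potentially failing per-record work goes here; a thrown
					// exception triggers the template's retry policy.
					return null;
				}));
	}
}

Note that `retryTemplate.execute` runs on the Kafka Streams processing thread, so a long back-off period delays the whole stream task; the tests above use very short back-offs for the same reason.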


@@ -0,0 +1,121 @@
/*
 * Copyright 2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.streams.function;

import java.lang.reflect.Method;
import java.util.Date;
import java.util.function.Function;

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.KeyValueSerdeResolver;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.util.Assert;

public class SerdesProvidedAsBeansTests {

	@ClassRule
	public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true);

	private static final EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();

	// Verifies that a Serde registered as a bean is resolved by matching its generic type
	// against the type parameters of the function's inbound and outbound KStream.
	@Test
	public void testKstreamWordCountFunction() throws NoSuchMethodException {
		SpringApplication app = new SpringApplication(SerdeProvidedAsBeanApp.class);
		app.setWebApplicationType(WebApplicationType.NONE);
		try (ConfigurableApplicationContext context = app.run(
				"--server.port=0",
				"--spring.jmx.enabled=false",
				"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
				"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
				"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
				"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
				"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
						"=org.apache.kafka.common.serialization.Serdes$StringSerde",
				"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
						"=org.apache.kafka.common.serialization.Serdes$StringSerde",
				"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
			final Method method = SerdeProvidedAsBeanApp.class.getMethod("process");
			ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, SerdeProvidedAsBeanApp.class);

			final KeyValueSerdeResolver keyValueSerdeResolver = context.getBean(KeyValueSerdeResolver.class);
			final BindingServiceProperties bindingServiceProperties = context.getBean(BindingServiceProperties.class);
			final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties =
					context.getBean(KafkaStreamsExtendedBindingProperties.class);

			final ConsumerProperties consumerProperties =
					bindingServiceProperties.getBindingProperties("process-in-0").getConsumer();
			final KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties =
					kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties("input");
			final Serde<?> inboundValueSerde = keyValueSerdeResolver.getInboundValueSerde(
					consumerProperties, kafkaStreamsConsumerProperties, resolvableType.getGeneric(0));
			Assert.isTrue(inboundValueSerde instanceof FooSerde, "Inbound value Serde was not resolved from the FooSerde bean");

			final ProducerProperties producerProperties =
					bindingServiceProperties.getBindingProperties("process-out-0").getProducer();
			final KafkaStreamsProducerProperties kafkaStreamsProducerProperties =
					kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties("output");
			final Serde<?> outboundValueSerde = keyValueSerdeResolver.getOutboundValueSerde(
					producerProperties, kafkaStreamsProducerProperties, resolvableType.getGeneric(1));
			Assert.isTrue(outboundValueSerde instanceof FooSerde, "Outbound value Serde was not resolved from the FooSerde bean");
		}
	}

	// A no-op Serde; only its generic type matters for this test.
	static class FooSerde<T> implements Serde<T> {

		@Override
		public Serializer<T> serializer() {
			return null;
		}

		@Override
		public Deserializer<T> deserializer() {
			return null;
		}
	}

	@EnableAutoConfiguration
	public static class SerdeProvidedAsBeanApp {

		@Bean
		public Function<KStream<String, Date>, KStream<String, Date>> process() {
			return input -> input;
		}

		@Bean
		public Serde<Date> fooSerde() {
			return new FooSerde<>();
		}
	}
}
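The test above asserts that `KeyValueSerdeResolver` picks the `Serde<Date>` bean by matching its generic type against the function's `KStream<String, Date>` input and output, so neither binding needs an explicit serde property. Below is a minimal application-side sketch of the same idea; `DateSerdeApp`, `dateSerde`, and the epoch-millis encoding are illustrative assumptions, not part of this diff.

import java.nio.charset.StandardCharsets;
import java.util.Date;
import java.util.function.Function;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// Illustrative sketch, not part of this diff.
@SpringBootApplication
public class DateSerdeApp {

	public static void main(String[] args) {
		SpringApplication.run(DateSerdeApp.class, args);
	}

	// Because this bean's generic type, Serde<Date>, matches the value type of the
	// function below, the binder resolves it for both the inbound and outbound
	// bindings without any explicit valueSerde configuration properties.
	@Bean
	public Serde<Date> dateSerde() {
		// Epoch-millis string encoding; no null handling, sketch only.
		return Serdes.serdeFrom(
				(topic, date) -> Long.toString(date.getTime()).getBytes(StandardCharsets.UTF_8),
				(topic, bytes) -> new Date(Long.parseLong(new String(bytes, StandardCharsets.UTF_8))));
	}

	@Bean
	public Function<KStream<String, Date>, KStream<String, Date>> process() {
		return input -> input;
	}
}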

Some files were not shown because too many files have changed in this diff.