Compare commits

358 Commits

Author SHA1 Message Date
buildmaster
2eedc8a218 Update SNAPSHOT to 3.1.0 2020-12-21 12:27:25 +00:00
buildmaster
49d263eafc Going back to snapshots 2020-12-11 15:12:31 +00:00
buildmaster
94f398a3e4 Update SNAPSHOT to 3.1.0-RC1 2020-12-11 15:11:13 +00:00
Soby Chacko
7cba801bef Checkstyle cleanup 2020-12-10 21:35:16 -05:00
Soby Chacko
db14154398 Cleanup docs in Kafka Streams binder
- Add StreamListener deprecation notice in docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1005
2020-12-10 21:30:57 -05:00
Soby Chacko
386a361a66 Event type based routing in Kafka Streams binder
Introducing the capability of routing records based on
event types. If a header in the incoming record contains
the event type set on the binding, then the function
associated with that binding gets invoked.

Adding test/docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003
2020-12-10 21:18:00 -05:00
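
A minimal sketch of how the feature above might be used, assuming the Kafka Streams consumer binding property is named `eventTypes` (the function name, binding, and event type below are hypothetical):

```java
import java.util.function.Consumer;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventTypeRoutingConfig {

    // Assumed configuration, derived from the commit description:
    // spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypes=orderCreated
    // Only records whose event-type header matches "orderCreated" invoke this function.
    @Bean
    public Consumer<KStream<String, String>> process() {
        return stream -> stream.foreach((key, value) -> System.out.println(value));
    }
}
```
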
hemeda3
27cb6f4b5f Fixed typo springc to spring
Fixed typo `springc` to `spring` in two lines: 
`spring.cloud.stream.function.bindings.process-in-0=users`
`spring.cloud.stream.function.bindings.process-in-0=regions`
2020-12-09 14:23:05 -05:00
Soby Chacko
58ac92ec71 Fix broken documentation links
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/986
2020-12-04 18:09:32 -05:00
Soby Chacko
d1c62bbc26 Kafka Streams binder producer/consumerProperties
Fix the issue where Kafka Streams binder level producerProperties
and consumerProperties (...binder.producerProperties and
...binder.consumerProperties) are not taken into consideration.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/997
2020-12-04 17:54:54 -05:00
Soby Chacko
afe13cc045 Adding docs for Meter filtering
KafkaBinderMetrics docs for meter filtering.
Refactor METRIC_NAME to OFFSET_LAG_METRIC_NAME and make it public.
2020-12-04 16:52:20 -05:00
Soby Chacko
42c9af019e Producer/Consumer config customizer changes
* Provide binding and destination names to the configure method
  in ProducerConfigCustomizer and ConsumerConfigCustomizer
  so that those can be used in the customization.
* Modify tests and docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/996
2020-12-04 16:16:10 -05:00
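
A sketch of a customizer bean that takes advantage of the newly supplied binding and destination names; the package of `ConsumerConfigCustomizer` and the exact three-argument `configure` signature are assumed from the commit description:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// The ConsumerConfigCustomizer interface ships with the Kafka binder; its package is assumed here.
import org.springframework.cloud.stream.binder.kafka.utils.ConsumerConfigCustomizer;

@Configuration
public class ConsumerCustomizerConfig {

    @Bean
    public ConsumerConfigCustomizer consumerConfigCustomizer() {
        // Three-argument signature (consumer config map, binding name, destination)
        // assumed from the commit description.
        return (consumerProperties, bindingName, destination) -> {
            if ("process-in-0".equals(bindingName)) {
                consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
            }
        };
    }
}
```
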
Soby Chacko
675c2e4940 KafkaBinderMetrics NoopGauge/filtering changes (#1001)
* KafkaBinderMetrics NoopGauge/filtering changes

Fixing the problem where the KafkaBinderMetrics scheduled task
for finding the offset lag gets triggered even when the
gauge is registered as a NoopGauge in the MeterRegistry.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/995

* Addressing PR review comments

* Fix typo
2020-12-04 16:04:09 -05:00
Soby Chacko
a1b31e67c4 Support allowNonTransactional config
Add a new producer extended property for allowNonTransactional.
When set to true, records published to this output binding will
not be run in a transaction, unless one is already in process.

By default, all output bindings associated with a transactional
binder publish in a new transaction. This new property can be
used to override this behavior.

Addressing PR review comments.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/990
2020-12-04 11:49:13 -05:00
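
For illustration, assuming the new property sits at the binding's producer level (for example `spring.cloud.stream.kafka.bindings.metrics-out-0.producer.allowNonTransactional=true`, with a hypothetical binding name), a send such as the one below would then run outside the binder's transaction unless one is already in progress:

```java
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.stereotype.Component;

@Component
public class MetricsPublisher {

    private final StreamBridge streamBridge;

    public MetricsPublisher(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    // With allowNonTransactional=true on the (assumed) metrics-out-0 binding,
    // this publish does not start a new transaction on a transactional binder.
    public void publish(String metric) {
        this.streamBridge.send("metrics-out-0", metric);
    }
}
```
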
Soby Chacko
5a7cc9f257 Fix Serde inference issues in Kafka Streams binder
When there are multiple functions present with different outbound target types,
there is an issue of one function overriding the target type of a previous function
in the catalogue where the binder stores the target type information.
This causes problems for the binder initiated Serde inference. Addressing the issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/994
2020-12-03 19:43:07 -05:00
Soby Chacko
fc184ba422 Setters for fields that need customization
See this commit in core for details:

442c72b37c
2020-11-20 14:38:36 -05:00
Taras Danylchuk
526627854a Altering existing topics only allowed if opt-in
Altering existing topic configurations based on a new binder property.
Existing topic configurations are only modified if `autoAlterTopics` is
enabled. By default, this is disabled.

Adding tests to verify.

Docs.

Logging level changes.

Checkstyle fixes
2020-11-19 15:36:09 -05:00
Soby Chacko
6be541c070 Copy cert files from classpath to file-system (#989)
* Copy cert files from classpath to file-system

If `ssl.truststore.location` and `ssl.keystore.location` are
provided as classpath resources, convert them to absolute paths
on the filesystem. This is because of a restriction in the Kafka client
in which it does not allow certificates to be read from the classpath.

See these issues for more details:
https://issues.apache.org/jira/browse/KAFKA-7685
https://cwiki.apache.org/confluence/display/KAFKA/KIP-398%3A+Support+reading+trust+store+from+classpath

This commit allows the Spring Cloud Stream application to provide the
cert files as classpath: resources; the binder internally moves
them to a location on the local filesystem and then uses that absolute
path as the value for the cert locations. Adding properties for optional paths
to move the files to. If no values are provided for these properties,
then the system's /tmp directory is used.

Adding tests and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/985

* Addressing PR review comments

* Addressing further PR review comments

* Consolidate keystore/truststore filesystem locations into a single property

* Addressing PR review
2020-11-17 13:49:04 -05:00
buildmaster
1afb22d65f Going back to snapshots 2020-11-17 16:04:44 +00:00
buildmaster
097fb89d9e Update SNAPSHOT to 3.1.0-M4 2020-11-17 16:03:30 +00:00
Oleg Zhurakousky
0ad0a31b4e Disabled JAAS test 2020-11-17 16:53:27 +01:00
Oleg Zhurakousky
19e97dd9c4 Update versions of spring kafka and spring integration kafka 2020-11-17 16:26:07 +01:00
gustavomonarin
676764b923 Fix documentation for kafka streams concurrency on binder level
The property spring.cloud.stream.binder.configuration.num.stream.threads does not work and is silently ignored.

The property spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads works fine and  is already covered by tests at MultipleFunctionsInSameAppTests#125

resolves #987
2020-11-11 13:55:35 -05:00
Soby Chacko
d8a678c77e Fix KafkaBinderTests - testRecordMetadata 2020-11-11 13:40:19 -05:00
Soby Chacko
23ce9e3d6e Fix NPE in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/981
2020-11-05 12:34:59 -05:00
Soby Chacko
73921db3ec Fixing test failure
Fixing Kafka Streams binder retry tests due to a
wrong conditional check.
2020-11-05 11:05:22 -05:00
Soby Chacko
f1e3a0bdd6 Allow retries in Kafka Streams binder (#980)
* Allow retries in Kafka Streams binder

Provide applications the capability to retry critical sections of the
business logic. This is accomplished through a new API with which
the critical path can be wrapped inside a Callable.

Adding tests and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/945

* Reworking the PR

Remove the binder API that was added before for retrying.
Reuse binding provided RetryTemplate.

Tests and docs

* Cleanup

* Addressing PR review comments

* Fix typo
2020-10-30 16:39:49 -04:00
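
A sketch of the end state described above, with the business logic wrapped in a Spring Retry `RetryTemplate`; how the binding-provided template is obtained is assumed here, and the function and `persist` call are hypothetical:

```java
import java.util.function.Consumer;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class RetryingProcessorConfig {

    @Bean
    public Consumer<KStream<String, String>> process(RetryTemplate retryTemplate) {
        // The injected RetryTemplate stands in for the binding-provided template
        // mentioned above; in practice it may need to be wired by qualifier.
        return stream -> stream.foreach((key, value) ->
                retryTemplate.execute(context -> {
                    persist(value); // critical section of the business logic
                    return null;
                }));
    }

    private void persist(String value) {
        // hypothetical downstream call that may fail transiently
    }
}
```
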
Soby Chacko
97e3b61d14 Adding improvements to InteractiveQueryService
New API additions.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/942
2020-10-28 18:58:18 -04:00
Soby Chacko
50f4470fcf Update deprecated API usage in InteractiveQueryService
Use queryMetadataForKey instead of metadataForKey

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/941
2020-10-28 15:24:09 -04:00
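
For context, a typical lookup through this service; the store name and key/value types below are hypothetical, and `getQueryableStore`/`getHostInfo` are assumed to be the relevant `InteractiveQueryService` entry points:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.stereotype.Component;

@Component
public class SongCountService {

    private final InteractiveQueryService queryService;

    public SongCountService(InteractiveQueryService queryService) {
        this.queryService = queryService;
    }

    // "song-counts" is a hypothetical store name.
    public Long localCount(String songId) {
        ReadOnlyKeyValueStore<String, Long> store = this.queryService
                .getQueryableStore("song-counts", QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(songId);
    }

    // Locate the instance hosting the key (backed by queryMetadataForKey after this change).
    public HostInfo host(String songId) {
        return this.queryService.getHostInfo("song-counts", songId, Serdes.String().serializer());
    }
}
```
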
Soby Chacko
45f1927c6f Expand messageKeyExpression docs (#978)
* Expand messageKeyExpression docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/972

* Update docs/src/main/asciidoc/overview.adoc

Co-authored-by: Gary Russell <grussell@vmware.com>

Co-authored-by: Gary Russell <grussell@vmware.com>
2020-10-27 11:10:33 -04:00
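
As a reminder of what the expanded docs cover, a sketch: with `spring.cloud.stream.kafka.bindings.process-out-0.producer.messageKeyExpression=headers['messageKey']` (binding and header names chosen for illustration), the outgoing record key is taken from a header set by the application:

```java
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

@Configuration
public class KeyedProducerConfig {

    // The "messageKey" header set here is what the assumed messageKeyExpression above evaluates.
    @Bean
    public Function<Message<String>, Message<String>> process() {
        return incoming -> MessageBuilder.withPayload(incoming.getPayload().toUpperCase())
                .setHeader("messageKey", incoming.getPayload().substring(0, 1))
                .build();
    }
}
```
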
Soby Chacko
0a0d3a1057 Remove duplicate KafkaStreams topology endpoint
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/895
2020-10-22 12:31:40 -04:00
Soby Chacko
5cdd8a09f9 Custom DLQ Destination Resolver (#976)
* Custom DLQ Destination Resolver

Allow applications to provide a custom DLQ destination resolver
implementation by providing a new interface DlqDestinationResolver
as part of the binder's public contract. This interface is a BiFunction
extension through which applications can provide more fine-grained
control over where to route records in error.

Adding test to verify.

Adding docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/966

* Add DlqDestinationResolver to MessageChannel based binder.

Tests and docs
2020-10-21 12:02:02 -04:00
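
A sketch of such a resolver bean, assuming the BiFunction operates on the failed ConsumerRecord and the exception and returns the DLQ topic name; the routing rule itself is hypothetical:

```java
import org.springframework.cloud.stream.binder.kafka.DlqDestinationResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqRoutingConfig {

    // DlqDestinationResolver's package and exact generic signature are assumed here.
    @Bean
    public DlqDestinationResolver dlqDestinationResolver() {
        return (record, exception) ->
                exception.getCause() instanceof IllegalArgumentException
                        ? record.topic() + ".bad-payload.dlq"
                        : record.topic() + ".dlq";
    }
}
```
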
Soby Chacko
514530db22 Upgrade dependencies
Spring Kafka -> 2.6.3-SNAPSHOT
Spring Integration Kafka -> 5.4.0-SNAPSHOT
Kafka version -> 2.6.0
Use Kafka_2.13 for tests

Unignore the JAAS security tests.
Unignore a few Kafka Streams binder tests.
2020-10-16 16:44:32 -04:00
Gary Russell
50ec8f0919 Change Log etc. Replication Factor
If the user has not explicitly set the `StreamsConfig.REPLICATION_FACTOR_CONFIG`,
set it from the binder property.

This is used for infrastructure topics (change logs and repartition topics).
2020-10-15 16:22:41 -04:00
Gary Russell
829d1c651a GH-968: Propagate Application Context
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/968
2020-10-12 13:59:25 -04:00
Soby Chacko
200a5efb64 Simplifying bootstrap-server config evaluation
If both boot and binder level config for bootstrap servers are present, the boot
one currently always wins regardless of any binder settings, unless the boot one
evaluates to the default (localhost:9092). This is especially a problem in a
multi binder scenario. Addressing this issue by simplifying the evaluation and
always giving the binder config the highest precedence.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/967
2020-10-12 13:18:47 -04:00
Soby Chacko
7f3a7f856f Kafka binder metrics improvements (#965)
* Kafka binder metrics improvements

KafkaBinderMetrics has a blocking call in which it waits for the default timeout of
60 seconds if the Kafka broker is down. This happens for each topic within a consumer
group. Refactor this code so that the check is performed in a periodic task,
and if the runtime check fails to return within a smaller time window (5 seconds),
return immediately by providing the latest value from the periodic task results.

Periodic task for computing the lags is run every 60 seconds.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/809

* Addressing PR review
2020-10-05 13:19:01 -04:00
buildmaster
1358bdfeec Going back to snapshots 2020-10-02 14:10:39 +00:00
buildmaster
f8dac888e4 Update SNAPSHOT to 3.1.0-M3 2020-10-02 14:09:20 +00:00
Soby Chacko
50380dae69 Ignore a few Kafka Streams tests temporarily 2020-10-02 09:12:22 -04:00
Oleg Zhurakousky
23b9d5b4c6 Temporary disable failing tests 2020-10-02 12:59:18 +02:00
buildmaster
7bebe9b78f Bumping versions 2020-09-25 10:42:44 +00:00
Oleg Zhurakousky
c44c17008c Fix docs links 2020-09-24 16:22:55 +02:00
Soby Chacko
33fa713a9f Customizing producer/consumer factories (#963)
* Customizing producer/consumer factories

Adding hooks by providing Producer and Consumer config
customizers to perform advanced configuration on the producer
and consumer factories.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/960

* Addressing PR review comments

* Further PR updates
2020-09-23 13:21:31 -04:00
Soby Chacko
769ed56049 Deprecate autoCommitOffset in favor of ackMode (#957)
* Deprecate autoCommitOffset in favor of ackMode

Deprecate autoCommitOffset in favor of using a newly introduced consumer property ackMode.
If the consumer is not in batch mode and ackEachRecord is enabled, then the container
will use RECORD ackMode. Otherwise, the ackMode provided through this property is used.
If none of these are true, then it will defer to the default setting of BATCH ackMode
set by the container.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/877

* Address PR review comments

* Addressing PR review comments
2020-09-21 11:16:51 -04:00
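
For illustration, assuming the new property also accepts MANUAL (for example `spring.cloud.stream.kafka.bindings.process-in-0.consumer.ackMode=MANUAL`, with a hypothetical binding name), a consumer could then acknowledge explicitly through the standard Spring Kafka header:

```java
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;

@Configuration
public class ManualAckConfig {

    @Bean
    public Consumer<Message<String>> process() {
        return message -> {
            // handle the payload, then commit the offset explicitly
            Acknowledgment ack = message.getHeaders()
                    .get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            if (ack != null) {
                ack.acknowledge();
            }
        };
    }
}
```
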
Heiko Does
859952b32a GH-951 update TopicConfig when different from topic properties
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/951
2020-09-11 16:02:52 -04:00
cleverchuk
1e9aa60c4e Fix Health indicator/partition leader issues
* Fix health indicator to properly indicate partition failure
 * Add new flag to control binder health indicator behavior
 * Regardless of the consumer that is reading from a partition, if the binder
   detects that a partition for the topic is without a leader, mark the binder
   health as DOWN (if the flag is set to true).
 * Remove synchronized block since only one thread executes the block
 * Add Docs for the new binder flag
 * Fix checkstyle issues
2020-09-11 12:06:28 -04:00
Soby Chacko
4161f875ed Change default replication factor to -1 (#956)
* Change default replication factor to -1

Binder now uses a default value of -1 for replication factor signaling the
broker to use defaults. Users who are on Kafka brokers older than 2.4
need to set this to the previous default value of 1 used in the binder.

In either case, if there is an admin policy that requires replication factor > 1,
then that value must be used instead.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/808

* Addressing PR review comments
2020-08-28 12:18:53 -04:00
Soby Chacko
f2e543c816 Remove unnecessary configs from binder
The following two properties are removed from the producer configs in the binder.

(ProducerConfig.RETRIES_CONFIG, 0)
(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432)

In the case of RETRIES, overriding the Kafka client default (MAX_INT) was a bug, and
in the latter case the override is unnecessary as it matches the client default.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/952
2020-08-26 18:00:45 -04:00
Soby Chacko
ac4d9298c9 Fix test failures in KafkaBinderMeterRegistryTest 2020-08-26 17:11:49 -04:00
ncheema
34c4efb35c Fix micrometer configuration for multiBinder
- In a MultiBinder configuration, the MeterRegistry is loaded in the outer context,
   hence removing the conditional check on the MeterRegistry bean

 - Fix checkstyle issues
2020-08-18 17:59:52 -04:00
Aldo Sinanaj
5b3b92cdb9 GH-932: Added zstd compression type 2020-08-13 15:57:44 -04:00
Christian Tzolov
a6ac6e3221 Fix mime type all support for KafkaNullConverter 2020-08-06 14:21:23 -04:00
Oleg Zhurakousky
6280489be8 Add RSocket snapshot repo 2020-08-06 19:26:28 +02:00
Navjot Cheema
80df80806b Producer compressionType documentation updated 2020-07-20 13:59:34 -04:00
buildmaster
ec52fbe2eb Going back to snapshots 2020-07-20 16:28:42 +00:00
buildmaster
5e128aafdc Update SNAPSHOT to 3.1.0-M2 2020-07-20 16:27:22 +00:00
Oleg Zhurakousky
f8d0b8bde2 Change SIK version to 3.3.0.RELEASE 2020-07-20 18:13:04 +02:00
Soby Chacko
4159217023 Kafka Streams metrics changes
* Kafka Streams metrics in the binder for Boot 2.2 users are streamlined to
  reflect the Micrometer native support added with version 1.4.0 which is
  available through Boot 2.3. While Boot 2.3 users will get this native support
  from Micrometer, Boot 2.2 users will still rely on the custom implementation
  in the binder. This commit aligns that custom implementation more with
  the native implementation.

* Disable the custom Kafka Streams metrics bean which is mentioned above
  (KafkaStreamsBinderMetrics) when the application is on Boot 2.3, as this
  implementation is only applicable for Boot 2.2.x.

* Update docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/880
2020-07-17 20:28:47 -04:00
Jay Bryant
087b09d193 Wording changes (#934)
* Wording changes

Replacing some terms

* Wording changes

One more change (missed a gerund form).
2020-07-14 11:34:54 -04:00
Gary Russell
83c42a55e9 GH-935: Fix KafkaNullConverter supported MimeType
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/935

Spring Cloud Function now checks if a converter supports the mime type before
calling it. Previously, the converter supported no mime types, so it was never
called, breaking Kafka Tombstone record processing (outbound).

The converter must support all mime types so it can perform a no-op conversion,
retaining the `KafkaNull`.

The abstract converter will return null whenever the payload is not a `KafkaNull`,
which is a signal to spring-cloud-function to try the next converter.
2020-07-13 17:24:22 -04:00
Gary Russell
1edc2c714a Upgrade parent pom to spring-kafka 2.5.3
- match the version used by Boot 2.4.0-SNAPSHOT
- also SIK 3.3.1.BUILD-SNAPSHOT
2020-07-13 14:21:13 -04:00
Soby Chacko
dc7c8d657a Fixing javadoc errors
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/915
2020-06-16 18:37:30 -04:00
Soby Chacko
a549810899 Renaming Kafka Streams topology actuator endpoint
The Kafka Streams topology actuator endpoint had a conflict with the JMX exporter
and was causing some IDE issues. Renaming this endpoint to kafkastreamstopology.
Renaming the underlying methods in this actuator endpoint implementation.

Updating docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/895
2020-06-15 18:56:18 -04:00
Soby Chacko
aff8b8487c Add junit-vintage-engine dependency
With Boot 2.4, JUnit 4 tests need this dependency.

Fix tests.
2020-06-15 17:01:48 -04:00
Gary Russell
95bc54b991 GH-914: Add Micrometer Producer/Consumer Listeners
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/914

Boot no longer uses the deprecated JMX MBean scraping provided by Micrometer.

Add configuration to add Micrometer Meters when Micrometer and spring-kafka 2.5.x are on
the class path.

Micrometer for Streams

- work around until SK 2.5.3
2020-06-15 11:36:49 -04:00
Marcin Grzejszczak
38bea7e7da Added missing JAR packaging 2020-06-08 18:42:49 +02:00
Marcin Grzejszczak
201feca206 Migrated to new documentation generation 2020-06-08 18:41:25 +02:00
Soby Chacko
21a8f89f22 Update version to 3.1.0-SNAPSHOT 2020-06-02 13:58:54 -04:00
Soby Chacko
7fc9145a21 Fix a failing test due to application ID clash 2020-06-02 12:56:08 -04:00
Soby Chacko
473ca53723 Configuring Kafka producer timeout
Allow setting timeout for closing the producer.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/891
Resolves #909
2020-06-02 12:54:42 -04:00
Soby Chacko
788dc28d7e Simplify the handling of concurrency
In the Kafka Streams binder, we use a complex strategy to handle
application-provided concurrency. Simplify this process while
making the binder stay aligned with the boot -> binder -> binding
hierarchy for concurrency (num.stream.threads).

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/905
Resolves #907
2020-06-02 12:42:14 -04:00
Soby Chacko
dbc00ffa92 Docs changes for consumer batch.mode
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/892\#issuecomment-631862705
2020-06-02 12:38:44 -04:00
Soby Chacko
240ae8282e KafkaTopicProvisioner improvements
Use a retry template for the topic description call through the
admin client when provisioning producer destinations, so that
in the event this operation fails, it is retried with the
default retry settings in the provisioner.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/888
2020-06-02 12:38:29 -04:00
Soby Chacko
7adb10bcd2 Adding state store beans for KTable binding
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/897
2020-06-02 12:38:14 -04:00
Soby Chacko
8be9fe6abc Ignore test 2020-06-02 12:37:51 -04:00
Soby Chacko
04e0bf628c Kafka Streams binder test triage.
Fixing two tests in Kafka Streams binder that fail with the full suite.
See the comments on the code committed for more details.
2020-06-02 12:37:37 -04:00
Soby Chacko
79323588ba Ignore a test temporarily 2020-06-02 12:37:17 -04:00
Soby Chacko
395683fdaf Fixing random test failure on CI 2020-06-02 12:36:55 -04:00
Soby Chacko
2a25f7c1f3 Cleanup test 2020-06-02 12:36:13 -04:00
Soby Chacko
5fa88e11ec Fix checkstyle 2020-06-02 12:35:45 -04:00
Soby Chacko
ee3279072d Fixing a random test failure on CI 2020-06-02 12:35:20 -04:00
Soby Chacko
fa0d6fe60e GH-899: num.stream.threads not taking effect
The binder is overriding the num.stream.threads property specified
through the binder configuration. Fixing this issue.

Adding tests to verify.

Doc changes

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/899
2020-06-02 12:34:50 -04:00
Soby Chacko
6578a996cb Update versions 2020-06-02 12:33:48 -04:00
Roberto Matas
af5778d157 Property name min.fetch.bytes renamed to the correct one 2020-05-21 17:01:35 -04:00
Soby Chacko
a190289bff Kafka Streams binder changes
Update API calls in health indicator.
Update tests.
Ignore two tests temporarily
2020-04-30 21:42:49 -04:00
Gary Russell
7091a641d6 Fix test: mock Producer to return a Future 2020-04-30 17:57:38 -04:00
Soby Chacko
c408f91162 Update versions
Spring Kafka -> 2.5.0.RC1
Kafka client -> 2.5.0
Spring Integration Kafka -> 3.3.0.RC1
2020-04-30 17:46:19 -04:00
Gary Russell
1cfb63f4bf Reject improper settings for bootstrap.servers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/875

- the `bootstrap.servers` property cannot be overridden at the binding level.
2020-04-30 17:08:03 -04:00
Soby Chacko
1bf1393c6e Fix the failing test
The test was failing due to wrong broker address.
2020-04-16 12:33:11 -04:00
Soby Chacko
85663ebea7 Different topic names in test for DLQ.
Unignore the test that was ignored previously.
2020-04-16 11:47:14 -04:00
Soby Chacko
c17752b02d Troubleshooting a CI test failure 2020-04-16 11:32:41 -04:00
Soby Chacko
52c0b35add GH-657: DLQ producer properties from the binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/657

Adding the ability for the binder to detect DLQ producer properties
set on the binder as common producer properties.

Adding test to verify.
2020-04-16 10:15:23 -04:00
buildmaster
36f00e5867 Going back to snapshots 2020-04-07 17:19:16 +00:00
buildmaster
e7487b9ada Update SNAPSHOT to 3.1.0.M1 2020-04-07 17:18:40 +00:00
Oleg Zhurakousky
064f5b6611 Updated spring-kafka version to 2.4.4 2020-04-07 19:05:46 +02:00
Oleg Zhurakousky
ff859c5859 update maven wrapper 2020-04-07 18:45:35 +02:00
Gary Russell
445eabc59a Transactional Producer Doc Polishing
- explain txId requirements for producer-initiated transactions
2020-04-07 11:16:14 -04:00
Soby Chacko
cc5d1b1aa6 GH:851 Kafka Streams topology visualization
Add Boot actuator endpoints for topology visualization.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/851
2020-03-25 13:02:57 +01:00
Soby Chacko
db896532e6 Offset commit when DLQ is enabled and manual ack (#871)
* Offset commit when DLQ is enabled and manual ack

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870

When an error occurs, if the application uses manual acknowledgment
(i.e. autoCommitOffset is false) and DLQ is enabled, then after
publishing to the DLQ, the offset is currently not committed.
Addressing this issue by manually committing after publishing to the DLQ.

* Address PR review comments

* Addressing PR review comments - #2
2020-03-24 16:28:39 -04:00
Gary Russell
07a740a5b5 GH-861: Add transaction manager bean override
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/861

Add docs
2020-03-19 17:03:36 -04:00
Serhii Siryi
65f8cc5660 InteractiveQueryService API changes
* Add InteractiveQueryService.getAllHostsInfo to fetch metadata
   of all instances that handle a specific store.
 * Test to verify this new API (in the existing InteractiveQueryService tests).
2020-03-05 13:24:32 -05:00
Soby Chacko
90a47a675f Update Kafka Streams docs
Fix missing property prefix in StreamListener based applicationId settings.
2020-03-05 11:40:10 -05:00
Soby Chacko
9d212024f8 Updates for 3.1.0
spring-cloud-build to 3.0.0 snapshot
spring kafka to 2.4.x snapshot
SIK 3.2.1

Remove a test whose behavior is inconsistent with new changes in Spring Kafka 2.4,
where all error handlers have isAckAfterHandle() true by default. The test for
auto commit offset on error without DLQ was expecting this acknowledgement not
to occur. If applications need to have the ack turned off on error, they should provide a
container customizer that sets the ack to false. Since this is not a binder
concern, we are removing the test testDefaultAutoCommitOnErrorWithoutDlq.

Cleaning up in Kafka Streams binder tests.
2020-03-04 18:37:08 -05:00
Gary Russell
d594bab4cf GH-853: Don't propagate out "internal" headers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/853
2020-02-27 13:37:58 -05:00
Oleg Zhurakousky
e46bd1f844 Update POMs for Ivyland 2020-02-14 11:28:28 +01:00
buildmaster
5a8aad502f Bumping versions to 3.0.3.BUILD-SNAPSHOT after release 2020-02-13 08:31:42 +00:00
buildmaster
b84a0fdfc6 Going back to snapshots 2020-02-13 08:31:41 +00:00
buildmaster
e3fcddeab6 Update SNAPSHOT to 3.0.2.RELEASE 2020-02-13 08:30:55 +00:00
Soby Chacko
1b6f9f5806 KafkaStreamsBinderMetrics throws ClassCastException
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/814

Fixing an erroneous cast.
2020-02-12 11:53:19 -05:00
Oleg Zhurakousky
f397f7c313 Merge pull request #845 from sobychacko/gh-844
Kafka streams concurrency with multiple bindings
2020-02-12 16:38:53 +01:00
Oleg Zhurakousky
7e0b923c05 Merge pull request #843 from sobychacko/gh-842
Kafka Streams binder health indicator issues
2020-02-12 16:37:11 +01:00
Soby Chacko
0dc5196cb2 Fix typo in a class name 2020-02-11 17:55:50 -05:00
Soby Chacko
1cc50c1a40 Kafka streams concurrency with multiple bindings
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844

Fixing an issue where the default concurrency settings are overridden when
there are multiple bindings present with non-default concurrency settings.
2020-02-11 13:21:58 -05:00
Adriano Scheffer
0b3d508fe2 Always call value serde configure method
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836

Remove redundant call to serde configure

Closes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836
2020-02-10 19:57:37 -05:00
Soby Chacko
2acce00c74 Kafka Streams binder health indicator issues
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/842

When a health indicator is run against a topic with multiple partitions,
the Kafka Streams binder overwrites the information. Addressing this issue.
2020-02-10 19:42:27 -05:00
Łukasz Kamiński
ce0376ad86 GH-830: Enable usage of authorizationExceptionRetryInterval
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/830

Enable binder configuration of authorizationExceptionRetryInterval through properties
2020-01-17 11:18:02 -05:00
Soby Chacko
25ac3b75e3 Remove log4j from Kafka Streams binder 2020-01-07 17:05:00 -05:00
Gary Russell
6091a1de51 Provisioner: Fix compiler warnings
- logger is static
- missing javadocs
2020-01-06 13:52:15 -05:00
Soby Chacko
5cfce42d2e Kafka Streams multi binder configuration
In the multi binder scenario, make KafkaBinderConfigurationProperties conditional
so that such a bean is only created in the corresponding multi binder context.
For normal cases, the KafkaBinderConfigurationProperties bean is created by the main context.
2019-12-23 18:59:16 -05:00
buildmaster
349fc85b4b Bumping versions to 3.0.2.BUILD-SNAPSHOT after release 2019-12-18 18:12:34 +00:00
buildmaster
a4762ad28b Going back to snapshots 2019-12-18 18:12:34 +00:00
buildmaster
bdd4b3e28b Update SNAPSHOT to 3.0.1.RELEASE 2019-12-18 18:11:50 +00:00
Soby Chacko
3ff99da014 Update parent s-c-build to 2.2.1.RELEASE 2019-12-18 12:07:39 -05:00
Soby Chacko
c7c05fe3c2 Docs for topic patterns in Kafka Streams binder 2019-12-17 18:43:45 -05:00
문찬용
4b50f7c2f5 Fix typo 2019-12-17 18:41:13 -05:00
stoetti
88c5aa0969 Supporting topic pattern in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/820

Fix checkstyle issues
2019-12-17 18:38:36 -05:00
Oleg Zhurakousky
d5a72a29d0 GH-815 polishing
Resolves #816
2019-12-17 10:55:47 +01:00
Soby Chacko
8c3cb8230b Addressing multi binder issues with Kafka Streams
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/815

There was an issue with Kafka Streams multi binders in which the properties were not scanned
properly. The last configuration always won, wiping out any previous environment properties.
Addressing this issue by properly keeping KafkaBinderConfigurationProperties per multi binder
environment and explicitly invoking Boot properties binding on them.

Adding test to verify.
2019-12-12 18:08:53 -05:00
Soby Chacko
70c1ce8970 Update image for spring.io kafka streams blog 2019-12-02 11:22:39 -05:00
Soby Chacko
1d8a4e67d2 Update image for spring.io kafka streams blog 2019-12-02 11:16:29 -05:00
buildmaster
bbfc776395 Bumping versions to 3.0.1.BUILD-SNAPSHOT after release 2019-11-22 14:51:01 +00:00
buildmaster
4e9ed30948 Going back to snapshots 2019-11-22 14:51:01 +00:00
buildmaster
34fd7a6a7a Update SNAPSHOT to 3.0.0.RELEASE 2019-11-22 14:49:08 +00:00
Oleg Zhurakousky
b23b42d874 Ignoring intermittently failing test 2019-11-22 15:36:23 +01:00
Soby Chacko
cf59cfcf12 Kafka Streams docs cleanup 2019-11-20 18:25:12 -05:00
Soby Chacko
02a4fcb144 AdminClient caching in KafkaStreams health check 2019-11-20 17:36:14 -05:00
Oleksii Mukas
6ac9c0ed23 Closing adminClient prematurely
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/803
2019-11-20 17:34:09 -05:00
Soby Chacko
78ff4f1a70 KafkaBindingProperties has no documentation (#802)
* KafkaBindingProperties has no documentation

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/739

No popup help in IDE due to missing javadocs in KafkaBindingProperties,
KafkaConsumerProperties and KafkaProducerProperties.

Some IDEs currently only show javadocs on getter methods from POJOs
used inside a map. Therefore, some javadocs are duplicated between
fields and getter methods.

* Addressing PR review comments

* Addressing PR review comments
2019-11-14 12:50:15 -05:00
Soby Chacko
637ec2e55d Kafka Streams binder docs improvements
Adding docs for production exception handlers.
Updating configuration options section.
2019-11-13 15:19:41 -05:00
Soby Chacko
6effd58406 Adding docs for global state stores 2019-11-13 11:19:36 -05:00
Soby Chacko
7b8f0dcab7 Kafka Streams - DLQ control per consumer binding (#801)
* Kafka Streams - DLQ control per consumer binding

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/800

* Fine-grained DLQ control and deserialization exception handlers per input binding

* Deprecate KafkaStreamsBinderConfigurationProperties.SerdeError in preference to the
  new enum `KafkaStreamsBinderConfigurationProperties.DeserializationExceptionHandler`
  based properties

* Add tests, modifying docs

* Addressing PR review comments
2019-11-13 09:41:02 -05:00
Gary Russell
88912b8d6b GH-715: Add retry, dlq for transactional binders
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/715

When using transactions, binder retry, dlq cannot be used because the retry
runs within the transaction, which is undesirable if there is another resource
involved; also publishing to the DLQ could be rolled back.

Use the retry properties to configure an `AfterRollbackProcessor` to perform the
retry and DLQ publishing after the transaction has rolled back.

Add docs; move the container customizer invocation to the end.
2019-11-12 14:53:12 -05:00
Soby Chacko
34b0945d43 Changing the order of calling customizer
In the Kafka Streams binder, the StreamsBuilderFactoryBean customizer was being called
prematurely, before the object was created. Fixing this issue.

Add a test to verify
2019-11-11 17:21:46 -05:00
Gary Russell
278ba795d0 Configure consumer customizer
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1841

* Add test
2019-11-11 11:34:41 -05:00
buildmaster
bf30ecdea1 Going back to snapshots 2019-11-08 16:29:53 +00:00
buildmaster
464ce685bb Update SNAPSHOT to 3.0.0.RC2 2019-11-08 16:29:19 +00:00
Oleg Zhurakousky
54fa9a638d Added flatten plug-in 2019-11-08 16:50:10 +01:00
Soby Chacko
fefd9a3bd6 Cleanup in Kafka Streams docs 2019-11-07 18:07:27 -05:00
Soby Chacko
8e26d5e170 Fix typos in the previous PR commit 2019-11-06 21:23:19 -05:00
Soby Chacko
ac6bdc976e Reintroduce BinderHeaderMapper (#797)
* Reintroduce BinderHeaderMapper

Provide a custom header mapper that is identical to the DefaultKafkaHeaderMapper in Spring Kafka.
This is to address some interoperability issues between Spring Cloud Stream 3.0.x and 2.x apps,
where mime types in the header are not de-serialized properly. This custom BinderHeaderMapper
will be eventually deprecated and removed once the fixes are in Spring Kafka.

Resolves #796

* Addressing review

* polishing
2019-11-06 21:14:20 -05:00
Ramesh Sharma
65386f6967 Fixed log message to print as value vs key serde 2019-11-06 09:43:08 -05:00
Oleg Zhurakousky
81c453086a Merge pull request #794 from sobychacko/gh-752
Revise docs
2019-11-06 09:11:16 +01:00
Soby Chacko
0ddd9f8f64 Revise docs
Update kafka-clients version
Revise Kafka Streams docs

Resolves #752
2019-11-05 19:49:07 -05:00
Gary Russell
d0fe596a9e GH-790: Upgrade SIK to 3.2.1
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/790
2019-11-05 11:41:40 -05:00
Soby Chacko
062bbc1cc3 Add null check for KafkaStreamsBinderMetrics 2019-11-01 18:42:30 -04:00
Soby Chacko
bc1936eb28 KafkaStreams binder metrics - duplicate entries
When the same metric name is repeated, some registry implementations, such as the
Micrometer Prometheus registry, fail to register the duplicate entry. Fixing this issue by
preventing duplicate metric names from being registered.

Also, address an issue with multiple processors and metrics in the same application
by prepending the application ID of the Kafka Streams processor in the metric name itself.

Resolves #788
2019-11-01 15:55:30 -04:00
Soby Chacko
e2f1092173 Kafka Streams binder health indicator improvements
Caching AdminClient in Kafka Streams binder health indicator.

Resolves #791
2019-10-31 16:15:56 -04:00
Soby Chacko
06e5739fbd Addressing bugs reported by Sonarqube
Resolves #747
2019-10-25 16:42:02 -04:00
Soby Chacko
ad8e67fdc5 Ignore a test where there is a race condition 2019-10-24 12:00:22 -04:00
Soby Chacko
a3fb4cc3b3 Fix state store registration issue
Fixing the issue where the state store is registered for each input binding
when multiple input bindings are present.

Resolves #785
2019-10-24 11:33:31 -04:00
Soby Chacko
7f09baf72d Enable customization on StreamsBuilderFactoryBean
Spring Kafka provides a StreamsBuilderFactoryBeanCustomizer. Use this in the binder so that the
applications can plug in such a bean to further customize the StreamsBuilderFactoryBean and KafkaStreams.

Resolves #784
2019-10-24 09:35:37 -04:00
Soby Chacko
28a02cda4f Multiple functions and definition property
In order to make Kafka Streams binder based function apps more consistent
with the wider functional support in ScSt, it should require the property
spring.cloud.stream.function.definition to signal which functions to activate.

Resolves #783
2019-10-24 01:01:34 -04:00
Soby Chacko
f96a9f884c Custom partitioner for Kafka Streams producer
* Allow the ability to plug in custom StreamPartitioner on the Kafka Streams producer.
* Fix a bug where the overriding of native encoding/decoding settings by the binder was not
  working properly. This fix is done by providing a custom ConfigurationPropertiesBindHandlerAdvisor.
* Add test to verify

Resolves #782
2019-10-23 20:12:08 -04:00
Soby Chacko
a9020368e5 Remove the usage of BindingProvider 2019-10-23 16:32:07 -04:00
Gary Russell
ca9296dbd2 GH-628: Add dlqPartitions property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/628

Allow users to specify the number of partitions in the dead letter topic.

If the property is set to 1 and the `minPartitionCount` binder property is 1,
override the default behavior and always publish to partition 0.
2019-10-23 15:33:20 -04:00
Soby Chacko
e8d202404b Custom timestamp extractor per binding
Currently there is no way to pass a custom timestamp extractor
per consumer binding in the Kafka Streams binder. Adding this ability.

Resolves #640
2019-10-23 15:31:17 -04:00
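
A sketch of such an extractor bean; `TimestampExtractor` is the standard Kafka Streams interface, while the consumer property that references the bean by name is assumed from the feature description:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TimestampExtractorConfig {

    // Referenced per consumer binding by bean name, e.g. the assumed property
    // spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.timestampExtractorBeanName=orderTimestampExtractor
    @Bean
    public TimestampExtractor orderTimestampExtractor() {
        return (ConsumerRecord<Object, Object> record, long partitionTime) -> {
            Object value = record.value();
            // Fall back to the record timestamp when the payload carries no usable time field.
            return value instanceof Long ? (Long) value : record.timestamp();
        };
    }
}
```
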
Artem Bilan
5794fb983c Add ProducerMessageHandlerCustomizer support
Related to https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/265
and https://github.com/spring-cloud/spring-cloud-stream/pull/1828
2019-10-23 14:30:20 -04:00
Oleg Zhurakousky
1ce1d7918f Merge pull request #777 from sobychacko/gh-774
Null value in outbound KStream
2019-10-23 16:54:21 +02:00
Oleg Zhurakousky
1264434871 Merge pull request #773 from sobychacko/gh-552
Kafka/Streams binder health indicator improvements
2019-10-23 16:53:37 +02:00
Massimiliano Poggi
e4fed0a52d Added qualifiers to CompositeMessageConverterBean injections
When more than one CompositeMessageConverter bean was defined in the same ApplicationContext,
not having the qualifiers on the injection points was causing the application to fail during
instantiation due to bean conflicts being raised. The injection points for CompositeMessageConverter
have been marked with the appropriate qualifier to inject the Spring Cloud CompositeMessageConverter.

Resolves #775
2019-10-22 17:17:03 -04:00
Soby Chacko
05e2918bc0 Addressing PR review comments 2019-10-22 17:07:51 -04:00
Gary Russell
6dadf0c104 GH-763: Add DlqPartitionFunction
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/763

Allow users to select the DLQ partition based on

* consumer group
* consumer record (which includes the original topic/partition)
* exception

Kafka Streams DLQ docs changes
2019-10-22 16:10:29 -04:00
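
A sketch of a partition-selecting bean, assuming `DlqPartitionFunction` is a functional interface taking the consumer group, the failed ConsumerRecord, and the exception, and returning the target DLQ partition:

```java
import org.springframework.cloud.stream.binder.kafka.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqPartitionConfig {

    // DlqPartitionFunction's package and exact signature are assumed here;
    // the intent is simply to keep failures on the same DLQ partition as the source record.
    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        return (group, record, exception) -> record.partition();
    }
}
```
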
Soby Chacko
f4dcf5100c Null value in outbound KStream
When native encoding is disabled, the conversion on outbound fails if the record
value is null. Handle this scenario more gracefully by allowing the record
to be sent downstream, skipping the conversion.

Resolves #774
2019-10-22 12:45:14 -04:00
Soby Chacko
b8eb41cb87 Kafka/Streams binder health indicator improvements
When both binders are present, there were ambiguities in the way the binders
were reporting health status. If one binder does not have any bindings, the
total health status was reported as down. Fixing these ambiguities as below.

If both binders have bindings present and the Kafka broker is reachable, report
the status as UP and the associated details. If one of the binders does not
have bindings, but the Kafka broker can be reached, then that particular binder's
status will be marked as UNKNOWN and the overall status is reported as UP.
If Kafka broker is down, then both binders are reported as DOWN and
the overall status is marked as DOWN.

Resolves #552
2019-10-21 21:59:11 -04:00
Soby Chacko
82cfd6d176 Making KafkaStreamsMetrics object Nullable 2019-10-18 21:41:16 -04:00
Tenzin Chemi
6866eef8b0 Update overview.adoc 2019-10-18 15:25:32 -04:00
Soby Chacko
b833a9f371 Fix Kafka Streams binder health indicator issues
When there are multiple Kafka Streams processors present, the health indicator
overwrites the previous processor's health info. Addressing this issue.

Resolves #771
2019-10-17 19:26:41 -04:00
Soby Chacko
0283359d4a Add a delay before the metrics assertion 2019-10-15 15:11:05 -04:00
Soby Chacko
cc2f8f6137 Assertion was not commented out in the previous commit 2019-10-10 10:38:46 -04:00
Soby Chacko
855334aaa3 Disable kafka streams metrics test temporarily 2019-10-10 01:32:51 -04:00
Soby Chacko
9d708f836a Kafka streams default single input/output binding
Address an issue in which the binder still defaults to binding names "input" and "output"
in case of a single function.
2019-10-09 23:58:14 -04:00
Soby Chacko
ecc8715b0c Kafka Streams binder metrics
Export Kafka Streams metrics available through KafkaStreams#metrics into a Micrometer MeterRegistry.
Add documentation for how to access metrics.
Modify test to verify metrics.

Resolves #543
2019-10-09 00:32:16 -04:00
Soby Chacko
98431ed8a0 Fix spurious warnings from InteractiveQueryService
Set applicationId properly in functions with multiple inputs
2019-10-09 00:29:03 -04:00
Oleg Zhurakousky
c7fa1ce275 Fix how stream function properties for bindings are used 2019-10-08 06:05:50 -05:00
Soby Chacko
cc8c645c5a Address checkstyle issues 2019-10-04 09:00:15 -04:00
Gary Russell
21fe9c75c5 Fix race in KafkaBinderTests
`testDlqWithNativeSerializationEnabledOnDlqProducer`
2019-10-03 10:35:05 -04:00
Soby Chacko
65dd706a6a Kafka Streams DLQ enhancements
Use DeadLetterPublishingRecoverer from Spring Kafka instead of custom DLQ components in the binder.
Remove code that is no longer needed for DLQ purposes.
In Kafka Streams, always set group to application id if the user doesn't set an explicit group.

Upgrade Spring Kafka to 2.3.0 and SIK to 3.2.0

Resolves #761
2019-10-02 22:46:51 -04:00
Soby Chacko
a02308a5a3 Allow binding names to be reused in Kafka Streams.
Allow same binding names to be reused from multiple StreamListener methods in Kafka Streams binder.

Resolves #760
2019-10-01 13:58:46 -04:00
Oleg Zhurakousky
ac75f8fecf Merge pull request #758 from sobychacko/gh-757
Function level binding properties
2019-09-30 12:42:45 -04:00
Soby Chacko
bf6a227f32 Function level binding properties
If there are multiple functions in a Kafka Streams application, and if they need
a separate set of configuration for each, then it should be possible to set that at the function
level, e.g. spring.cloud.stream.kafka.streams.binder.functions....

Resolves #757
2019-09-30 11:47:21 -04:00
Oleg Zhurakousky
01daa4c0dd Move function constants to core
Resolves #756
2019-09-30 11:33:18 -04:00
Soby Chacko
021943ec41 Using dot(.) character in function bindings.
In Kafka Streams functions, binding names need to use the dot character instead of
underscores as the delimiter.

Resolves #755
2019-09-30 11:02:50 -04:00
Phillip Verheyden
daf4b47d1c Update the docs with the correct default channel properties locations
Spring Cloud Stream Kafka uses `spring.cloud.stream.kafka.default` for setting default properties across all channels.

Resolves #748
2019-09-30 11:01:28 -04:00
Soby Chacko
2b1be3754d Remove deprecations
Remove deprecated fields, methods and classes in preparation for the 3.0 GA Release,
both in Kafka and Kafka Streams binders.

Resolves #746
2019-09-27 15:46:00 -04:00
Soby Chacko
db5a303431 Check HeaderMapper bean with a well known name (#753)
* Check HeaderMapper bean with a well known name

* If a custom bean name for header mapper is not provided through the binder property headerMapperBeanName,
  then look for a header mapper bean with the name kafkaHeaderMapper.
* Add tests to verify

Resolves #749

* Addressing PR review comments
2019-09-27 11:08:15 -04:00
Soby Chacko
e549090787 Restoring the avro tests for Kafka Streams 2019-09-23 17:12:31 -04:00
buildmaster
7ff64098a3 Going back to snapshots 2019-09-23 18:12:19 +00:00
buildmaster
cea40bc969 Update SNAPSHOT to 3.0.0.M4 2019-09-23 18:11:47 +00:00
buildmaster
717022e274 Going back to snapshots 2019-09-23 17:49:28 +00:00
buildmaster
d0f1c8d703 Update SNAPSHOT to 3.0.0.M4 2019-09-23 17:48:54 +00:00
Soby Chacko
7a532b2bbd Temporarily disable avro tests due to Schema Registry dependency issues.
Will address these post M4 release
2019-09-23 13:37:57 -04:00
Soby Chacko
3773fa2c05 Update SIK and SK
SIK -> 3.2.0.RC1
SK -> 2.3.0.RC1
2019-09-23 13:03:12 -04:00
buildmaster
5734ce689b Bumping versions 2019-09-18 15:35:31 +00:00
Olga Maciaszek-Sharma
608086ff4d Remove spring.provides. 2019-09-18 13:34:34 +02:00
Soby Chacko
16713b3b4f Rename Serde for message converters
Rename CompositeNonNativeSerde to MessageConverterDelegateSerde
Deprecate CompositeNonNativeSerde
2019-09-16 21:27:04 -04:00
Soby Chacko
2432a7309b Remove an unnecessary mapValue call
In the Kafka Streams binder, with native decoding being the default in 3.0 and going forward,
the topology depth of the Kafka Streams processors is much reduced compared to the topology
generated when using non-native decoding. There was still an extra unnecessary processor in the
topology even when the deserialization is done natively. Addressing that issue.

With this change, the topology generated is equivalent to the native Kafka Streams applications.

Resolves #682
2019-09-13 15:14:59 -04:00
Soby Chacko
6fdb0001d6 Update SIK to 3.2.0 snapshot
Resolves #744

* Address deprecations in the pollable consumer
* Introduce a property for the pollable timeout in KafkaConsumerProperties
* Fix tests
* Add docs
* Addressing PR review comments
2019-09-12 15:31:13 -04:00
Gary Russell
f2ab4a07c6 GH-70: Support Batch Listeners
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/70
2019-09-11 15:31:18 -04:00
Soby Chacko
584115580b Adding isPresent around optionals 2019-09-11 14:54:11 -04:00
Oleg Zhurakousky
752b509374 Minor cleanup and polishing
Resolves #740
2019-09-10 13:34:30 +02:00
Soby Chacko
96062b23f2 State store beans registered with StreamListener
When StreamListener based processors are used in Kafka Streams
applications, it is not possible to register state store beans
using StateStoreBuilder. This is allowed in the functional model.
This method of providing custom state stores is desired as it
gives the user more flexibility in configuring Serdes and other
properties on the state store. Adding this feature to StreamListener
based Kafka Streams processors.

Adding test to verify.

Adding docs.

Resolves #676
2019-09-10 13:34:30 +02:00
Oleg Zhurakousky
da6145f382 Merge pull request #738 from sobychacko/gh-681
Multi binder issues for Kafka Streams table types
2019-09-10 12:45:04 +02:00
Soby Chacko
39a5b25b9c Topic properties not applied on kstream binding
On KStream producer binding, topic properties are not applied
by the provisioner. This is because the extended properties object
that is passed to producer binding is erroneously overwritten by a
default instance. Addressing this issue.

Resolves #684
2019-09-06 16:28:15 -04:00
Soby Chacko
4e250b34cf Multi binder issues for Kafka Streams table types
When KTable or GlobalKTable binders are used in a multi binder
environment, the binder has difficulty finding certain beans/properties.
Addressing this issue.

Adding tests.

Resolves #681
2019-09-04 18:10:04 -04:00
Oleg Zhurakousky
10de84f4fe Simplify isSerdeFromStandardDefaults in KeyValueSerdeResolver
Resolves #737
2019-09-04 16:33:54 +02:00
Soby Chacko
309f588325 Addressing Kafka Streams multiple functions issues
Fixing an issue that causes a race condition when multiple functions
are present in a Kafka Streams application by isolating the responsible
proxy factory per function instead of sharing it.

When multiple Kafka Streams functions are present in an application,
it should be possible to set the application id per function.

When an application provides a bean of type Serde, then the binder
should try to introspect that bean to see if it can be matched for
any inbound or outbound serialization.

Adding tests to verify the changes.

Adding docs.

Resolves #734, #735, #736
2019-09-03 19:48:21 -04:00
Oleg Zhurakousky
ca8e086d4c Merge pull request #728 from sobychacko/gh-706
Introducing retry for finding state stores.
2019-08-28 13:37:06 +02:00
Oleg Zhurakousky
cce392aa94 Merge pull request #730 from sobychacko/gh-657
Additional docs for dlq producer properties
2019-08-28 13:36:30 +02:00
Soby Chacko
e53b0f0de9 Fixing Kafka streams binder health indicator tests
Resolves #731
2019-08-28 13:34:34 +02:00
Soby Chacko
413cc4d2b5 Addressing PR review 2019-08-27 17:37:33 -04:00
Soby Chacko
94f76b903f Additional docs for dlq producer properties
Resolves #657
2019-08-26 17:08:01 -04:00
Soby Chacko
8d5f794461 Remove the sleep added in the previous commit 2019-08-26 13:55:52 -04:00
Soby Chacko
00c13c0035 Fixing checkstyle issues 2019-08-26 12:20:39 -04:00
Soby Chacko
e1e46226d5 Fixing a test race condition
The test testDlqWithNativeSerializationEnabledOnDlqProducer fails on CI occasionally.
Adding a sleep of 1 second to give the retrying mechanism enough time to complete.
2019-08-26 12:17:11 -04:00
Walliee
46cbeb1804 Add hook to specify sendTimeoutExpression
resolves #724
2019-08-26 10:47:27 -04:00
Soby Chacko
3e4af104b8 Introducing retry for finding state stores.
During application startup, there are cases when state stores
might take a longer time to initialize. The call to find the
state store in the binder (InteractiveQueryService) will fail in
those situations. Providing a basic retry mechanism through which
applications can opt in to this retry.

Adding properties for retrying state store at the binder level.
Adding docs.
Polishing.

Resolves #706
2019-08-23 18:57:36 -04:00
Soby Chacko
16bb3e2f62 Testing topic provisioning properties for KStream
Resolves #684
2019-08-23 18:53:35 -04:00
Soby Chacko
8145ab19fb Fix checkstyle issues 2019-08-21 17:26:31 -04:00
Gary Russell
930d33aeba Fix ConsumerProducerTransactionTests with SK snap 2019-08-21 09:46:17 -04:00
Gary Russell
fe2a398b8b Fix mock producer.close() for latest SK snapshot 2019-08-21 09:38:25 -04:00
Soby Chacko
24e1fc9722 Kafka Streams functional support and autowiring
There are issues when a bean declared as a function in the Kafka Streams
application tries to autowire a bean through method parameter injection.
Addressing these concerns.

Resolves #726
2019-08-20 18:40:04 -04:00
Soby Chacko
245a43c1d8 Update Spring Kafka to 2.3.0 snapshot
Ignore two tests temporarily
Fixing Kafka Streams tests with co-partitioning issues
2019-08-20 17:40:07 -04:00
Soby Chacko
4d1fed63ee Fix checkstyle issues 2019-08-20 15:34:05 -04:00
Oleg Zhurakousky
786bd3ec62 Merge pull request #723 from sobychacko/kstream-fn-detector-changes
Function detector condition Kafka Streams
2019-08-16 18:44:39 +02:00
Soby Chacko
183f21c880 Function detector condition Kafka Streams
Use BeanFactoryUtils.beanNamesForTypeIncludingAncestors instead of
getBean from BeanFactory which forces the bean creation inside the
function detector condition. There was a race condition in which
applications were unable to autowire beans and use them in functions
while the detector condition was creating the beans. This change will
delay the creation of the function bean until it is needed.
2019-08-16 11:54:42 -04:00
Soby Chacko
18737b8fea Unignoring tests in Kafka Streams binder
Polishing tests
2019-08-14 12:57:18 -04:00
buildmaster
840745e593 Going back to snapshots 2019-08-13 07:58:51 +00:00
buildmaster
2df0377acb Update SNAPSHOT to 3.0.0.M3 2019-08-13 07:57:57 +00:00
Oleg Zhurakousky
164948ad33 Bumped spring-kafka and si-kafka snapshots to milestones 2019-08-13 09:43:09 +02:00
Oleg Zhurakousky
a472845cb4 Adjusted docs with recent s-c-build updates 2019-08-13 09:29:03 +02:00
Oleg Zhurakousky
12db5fc20e Ignored failing tests 2019-08-13 09:24:30 +02:00
Soby Chacko
46f1b41832 Interop between bootstrap server configuration
When using the Kafka Streams binder, if the application chooses to
provide bootstrap server configuration through Kafka binder broker
property, then allow that. This way either type of broker config
works in Kafka Streams binder.

Resolves #401
Resolves #720
2019-08-13 08:45:05 +02:00
Soby Chacko
64ca773a4f Kafka Streams application id changes
Generate a random application id for the Kafka Streams binder if the user
doesn't set one for the application. This is useful for development
purposes, as it avoids the creation of an explicit application id.
For production workloads, it is highly recommended to explicitly provide
an application id.

The generated application id follows a pattern: the function bean
name, followed by a random UUID string, followed by the literal applicationId.
In the case of StreamListener, instead of the function bean name, it uses the containing class +
StreamListener method name.

If the binder generates the application id, that information will be logged on the console
at startup.

Resolves #718
Resolves #719
2019-08-13 08:43:07 +02:00
Soby Chacko
a92149f121 Adding BiConsumer support for Kafka Streams binder
Resolves #716
Resolves #717
2019-08-13 08:31:50 +02:00
Gary Russell
2721fe4d05 GH-713: Add test for e2e transactional binder
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/713

The problem was already fixed in SIK.

Resolves #714
2019-08-13 08:29:09 +02:00
Soby Chacko
8907693ffb Revamping Kafka Streams docs for functional style
Resolves #703
Resolves #721
Addressing PR review comments

Addressing PR review comments
2019-08-13 08:17:48 +02:00
Oleg Zhurakousky
ca0bfc028f minor cleanup 2019-08-08 20:59:05 +02:00
Soby Chacko
688f05cbc9 Joining two input KStreams
When joining two input KStreams, the binder throws an exception.
Fixing this issue by passing the wrapped target KStream from the
proxy object to the adapted StreamListener method.

Resolves #701
2019-08-08 14:19:21 -04:00
Oleg Zhurakousky
acb4180ea3 GH-1767-core changes related to the disconnection of @StreamMessageConverter 2019-08-06 19:14:49 +02:00
Soby Chacko
77e4087871 Handle Deserialization errors when there is a key (#711)
* Handle Deserialization errors when there is a key

Handle DLQ sending on deserialization errors when there is a key in the record.

Resolves #635

* Adding keySerde information in the functional bindings
2019-08-05 13:58:48 -04:00
Gary Russell
0d339e77b8 GH-683: Update docs 2019-08-02 09:30:36 -04:00
Gary Russell
799eb5be28 GH-683: MessageKey header from payload property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/683

If the `messageKeyExpression` references the payload or entire message, add an
interceptor to evaluate the expression before the payload is converted.

The interceptor is not needed when native encoding is in use because the payload
will be unchanged when it reaches the adapter.
2019-08-01 17:00:05 -04:00
Soby Chacko
2c7615db1f Temporarily ignore a test 2019-08-01 14:32:39 -04:00
Gary Russell
b2b1438ad7 Fix resetOffsets for manual partition assignment 2019-07-19 18:47:41 -04:00
Gary Russell
87399fe904 Updates for latest S-K snapshot
- change `TopicPartitionInitialOffset` to `TopicPartitionOffset`
- when resetting with manual assignment, set the SeekPosition earlier rather than
  relying on direct update of the `ContainerProperties` field
- set `missingTopicsFatal` to `false` in mock test to avoid log messages about missing bootstrap servers
2019-07-17 18:39:53 -04:00
Soby Chacko
f03e3e17c6 Provide a CollectionSerde implementation
Resolves #702
2019-07-11 17:08:25 -04:00
Gary Russell
48aae76ec9 GH-694: Add recordMetadataChannel (send confirm.)
Resolves #694

Also fix deprecation of `ChannelInterceptorAware`.
2019-07-10 10:48:08 -04:00
Soby Chacko
6976bcd3a8 Docs polishing 2019-07-09 15:59:23 -04:00
Gary Russell
912ab14309 GH-689: Support producer-only transactions
Resolves #689
2019-07-09 15:59:23 -04:00
Oleg Zhurakousky
b4fcb791c3 General polishing and clean up after review
Resolves #692
2019-07-09 20:46:01 +02:00
Soby Chacko
ca3f20ff26 Default multiple input/output binding names in the functional support will be separated using underscores (_),
e.g. process_in, process_in_0, process_out_0, etc.

Adding cleanup config to ktable integration tests.
2019-07-08 19:15:02 -04:00
Soby Chacko
97ecf48867 BiFunction support in Kafka Streams binder
* Introduce the ability to provide BiFunction beans as Kafka Streams processors.
* Bypass Spring Cloud Function catalogue for bean lookup by directly querying the bean factory.
* Add test to verify BiFunction behavior.

Resolves #690
2019-07-08 15:01:07 -04:00
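
A sketch of what such a processor might look like; the topic-to-binding wiring is omitted and the join logic is hypothetical:

```java
import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OrderEnrichmentConfig {

    // First input binds as a KStream, second as a KTable, and the return value as the output KStream.
    @Bean
    public BiFunction<KStream<String, String>, KTable<String, String>, KStream<String, String>> enrich() {
        return (orders, customers) ->
                orders.join(customers, (order, customer) -> customer + ":" + order);
    }
}
```
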
Soby Chacko
4d0b62f839 Functional enhancements in Kafka Streams binder
This PR introduces some fundamental changes in the way the functional model of Kafka Streams applications
is supported in the binder. For the most part, this PR comprises the following changes.

* Remove the usage of EnableBinding in Kafka Streams based Spring Cloud Stream applications where
  a bean of type java.util.function.Function or java.util.function.Consumer is provided (instead of
  a StreamListener). For StreamListener based Kafka Streams applications, EnableBinding is still
  necessary and required.

* Target types (KStream, KTable, GlobalKTable) will be inferred and bound by the binder through
  a new binder specific bindable proxy factory.

* By default, input bindings are named <functionName>-input for functions with a single input
  and <functionName>-input-0...<functionName>-input-n-1 for functions with n inputs.

* By default, output bindings are named <functionName>-output for functions with a single output
  and <functionName>-output-0...<functionName>-output-n-1 for functions with n outputs.

* If applications prefer custom input binding names, the defaults can be overridden through
  spring.cloud.stream.function.inputBindings.<functionName>. Similarly, if custom output binding names
  are needed, then that can be done through spring.cloud.stream.function.outputBindings.<functionName>.

* Test changes

* Refactoring and polishing

Resolves #688
2019-07-08 15:01:07 -04:00
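A minimal sketch of a functional processor under the conventions this commit describes; the function name and the upper-casing logic are illustrative:

[source, java]
----
// Sketch only: no EnableBinding is required for this style.
// Per this commit, the derived binding names would be process-input / process-output
// (or process-input-0..., process-output-0... for multiple inputs/outputs), and they
// can be overridden via spring.cloud.stream.function.inputBindings.process and
// spring.cloud.stream.function.outputBindings.process.
@Bean
public Function<KStream<String, String>, KStream<String, String>> process() {
    return input -> input.mapValues(value -> value.toUpperCase());
}
----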
Oleg Zhurakousky
592b9a02d8 Binder changes related to GH-1729
addressed PR comment

Resolves #670
2019-07-08 17:32:52 +02:00
Soby Chacko
4ca58bea06 Polishing the previous upgrade commit
* Upgrade Kafka versions used in tests.
 * EmbeddedKafkaRule changes in tests.
 * KafkaTransactionTests changes (createProducer expectations in Mockito).
 * Kafka Streams bootstrap servers are now returned as an ArrayList instead of a String.
   Making the necessary changes in the binder to accommodate this.
 * Kafka Streams state store tests - retention window changes.
2019-07-06 14:24:09 -04:00
Jeff Maxwell
108ccbc572 Update kafka.version, spring-kafka.version
Update kafka.version to 2.3.0 and spring-kafka.version to 2.3.0.BUILD-SNAPSHOT

Resolves #696
2019-07-06 14:24:09 -04:00
buildmaster
303679b9f9 Going back to snapshots 2019-07-03 14:56:50 +00:00
buildmaster
35434c680b Update SNAPSHOT to 3.0.0.M2 2019-07-03 14:56:00 +00:00
Oleg Zhurakousky
e382604b04 Change s-i-kafka to 3.2.0.M3 2019-07-03 16:38:05 +02:00
Soby Chacko
bfe9529a51 Topic provisioning - Kafka streams table types
* Topic provisioning for consumers ignores topic.properties settings if binding type is KTable or GlobalKTable
* Modify tests to verify the behavior during KTable/GlobalKTable bindings
* Polishing

Resolves #687
2019-07-02 13:17:25 -04:00
Gary Russell
d8df388d9f GH-423 Add option to use KafkaHeaders.TOPIC Header
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/423

Add an option to override the default binding destination with the value
of the header, if present.
2019-06-18 16:28:38 -04:00
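A hedged sketch of per-record destination routing via the `KafkaHeaders.TOPIC` header; the binding, payload, and `audit-topic` name are illustrative, and the new producer option must be enabled for the header to take effect:

[source, java]
----
// Sketch only: the produced Message carries a KafkaHeaders.TOPIC header that
// overrides the binding's default destination when the option is enabled.
@Bean
public Function<String, Message<String>> route() {
    return payload -> MessageBuilder.withPayload(payload)
            .setHeader(KafkaHeaders.TOPIC, "audit-topic") // illustrative topic name
            .build();
}
----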
Gary Russell
569344afa6 GH-677: Fix resetOffsets with concurrency
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/677

The logic for resetting offsets only on the initial assignment used a simple
boolean; this is insufficient when concurrency is > 1.

Use a concurrent set instead to determine whether or not a particular topic/partition
has been sought.

Also, change the `initial` argument on `KafkaBindingRebalanceListener.onPartitionsAssigned()`
to be derived from a `ThreadLocal` and add javadocs about retaining the state by partition.

**backport to all supported versions** (Except `KafkaBindingRebalanceListener` which did
not exist before 2.1.x)

polishing
2019-06-18 16:13:56 -04:00
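A sketch of a custom rebalance listener, assuming the `KafkaBindingRebalanceListener` contract described above; seeking to the beginning only on the initial assignment is just an example policy:

[source, java]
----
// Sketch only: the listener's methods are defaults, so only the callback of
// interest is overridden here.
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            if (initial) {
                consumer.seekToBeginning(partitions); // example policy only
            }
        }

    };
}
----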
Oleg Zhurakousky
c1e58d187a Polishing KafkaStreamsBinderUtils
Added registerBean(..) instead of registerSingleton

Resolves #675
2019-06-18 16:21:49 +02:00
Soby Chacko
12afaf1144 Update spring-cloud-build to 2.2.0 snapshot
Update Spring Integration Kafka to 3.2.0 snapshot
2019-06-17 18:43:49 -04:00
Soby Chacko
bbb455946e Refactor common code in Kafka Streams binder.
Both StreamListener and the functional model require many code paths
that are common. Refactoring them for better use and readability.

Resolves #674
2019-06-17 18:11:11 -04:00
Soby Chacko
5dac51df62 Infer SerDes used on input/output when possible
When using native de/serialization (which is now the default), the binder should infer the common Serdes
used on input and output. The Serdes inferred are: Integer, Long, Short, Double, Float, String, byte[] and
Spring Kafka provided JsonSerde.

Resolves #368

Address PR review comments

Addressing PR review comments

Resolves #672
2019-06-17 14:59:13 +02:00
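A minimal sketch of a processor that relies on the inferred `String`/`Long` Serdes, so no explicit Serde configuration should be needed; the aggregation logic is illustrative:

[source, java]
----
// Sketch only: with native de/serialization as the default, the binder infers
// Serdes.String() / Serdes.Long() from the generic types of this function.
@Bean
public Function<KStream<String, Long>, KStream<String, Long>> totals() {
    return input -> input
            .groupByKey()
            .reduce(Long::sum)
            .toStream();
}
----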
Oleg Zhurakousky
c1db3d950c Added author tag 2019-06-13 20:19:37 +02:00
Vladislav Fefelov
4d59670096 Use single Executor Service in Kafka Binder health indicator
Resolves #665
2019-06-13 20:19:37 +02:00
Oleg Zhurakousky
d213b4ff2c Merge pull request #669 from sobychacko/gh-651
Use native Serde for Kafka Streams binder as the default
2019-06-12 18:35:03 +02:00
buildmaster
66198e345a Going back to snapshots 2019-06-11 12:07:50 +00:00
buildmaster
df8d151878 Update SNAPSHOT to 3.0.0.M1 2019-06-11 12:07:03 +00:00
Oleg Zhurakousky
d96dc8361b Bumped s-c-build to 2.2.0.M2 2019-06-11 13:47:29 +02:00
Soby Chacko
9c88b4d808 Use native Serde for Kafka Streams binder
In Kafka Streams binder, use the native Serde mechanism as the default instead of the framework
provided message conversion on the input and output bindings. This makes applications written
using the Kafka Streams binder more compatible with native concepts and mechanisms.
Users can still disable native encoding and decoding through configuration.

Amend tests to accommodate the flipped configuration.

Resolves #651
2019-06-10 21:17:38 -04:00
Oleg Zhurakousky
94be206651 Updated POMs for 3.0.0.M1 release 2019-06-10 14:49:49 +02:00
iguissouma
4ab6432f23 Use try with resources when creating AdminClient
Use try with resources when creating AdminClient to release all associated resources.
Fixes gh-660
2019-06-04 09:37:35 -04:00
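A standalone sketch of the pattern this commit applies, using only standard `kafka-clients` API; the bootstrap address is illustrative:

[source, java]
----
import java.util.Collections;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicLister {

    public static void main(String[] args) throws Exception {
        Map<String, Object> config = Collections.singletonMap(
                AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
        // try-with-resources ensures AdminClient.close() runs even if the listing fails
        try (AdminClient admin = AdminClient.create(config)) {
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }

}
----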
Walliee
24b52809ed Switch back to use DefaultKafkaHeaderMapper
* update spring-kafka to v2.2.5
* use DefaultKafkaHeaderMapper instead of BinderHeaderMapper
* delete BinderHeaderMapper

resolves #652
2019-05-31 22:15:12 -04:00
Soby Chacko
7450d0731d Fixing ignored tests from upgrade 2019-05-28 15:02:06 -04:00
Oleg Zhurakousky
08b41f7396 Upgraded master to 3.0.0 2019-05-28 16:42:05 +02:00
Oleg Zhurakousky
e725a172ba Bumping version to 2.3.0-B-S for master 2019-05-20 06:33:35 -05:00
buildmaster
b7a3511375 Bumping versions to 2.2.1.BUILD-SNAPSHOT after release 2019-05-07 12:52:13 +00:00
buildmaster
d6c06286cd Going back to snapshots 2019-05-07 12:52:13 +00:00
buildmaster
321331103b Update SNAPSHOT to 2.2.0.RELEASE 2019-05-07 12:50:54 +00:00
Oleg Zhurakousky
f549ae069c Prep doc POM instructions for release 2019-05-07 13:38:19 +02:00
Oleg Zhurakousky
2cc60a744d Merge pull request #649 from sobychacko/gh-633
Adding docs for Kafka Streams functional support
2019-05-01 21:42:36 +02:00
Oleg Zhurakousky
2ddc837c1a polish
Resolves #648
2019-05-01 21:38:59 +02:00
Soby Chacko
816a4ec232 Kafka stream outbound content type polishing
Using Jackson to write content type as bytes so that the outgoing
content type string is properly json compatible.
2019-05-01 21:29:14 +02:00
Soby Chacko
1a19eeabec Content type not carried on the outbound (Kafka Streams)
Fixing the issue of content type not propagated as a Kafka header
on the outbound during message serialization by Kafka Streams binder.

For more info, see these links:

https://stackoverflow.com/questions/55796503/spring-cloud-stream-kafka-application-not-generating-messages-with-the-correct-a
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/456#issuecomment-439963419

Resolves #456
2019-05-01 21:29:14 +02:00
Soby Chacko
d386ff7923 Make KeyValueSerdeResolver public
Resolves #644
Resolves #645
2019-05-01 21:25:23 +02:00
Soby Chacko
9644f2dbbf Kafka Streams multiple function issues
Fixing a bug that occurs when we have multiple beans defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636
2019-05-01 21:13:33 +02:00
Oleg Zhurakousky
498998f1a4 polishing
Resolves #643
2019-05-01 21:12:40 +02:00
Soby Chacko
5558f7f7fc Custom state stores with functional Kafka Streams
Introducing the ability to provide custom state stores as regular
Spring beans and use them in the functional model of Kafka Streams binding.

Resolves #642
2019-05-01 21:00:47 +02:00
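A hedged sketch of such a state store bean, assuming the binder registers `StoreBuilder` beans with the topology as described; the store name and types are illustrative:

[source, java]
----
// Sketch only: a processor can then reference the store by its name ("word-counts")
// from a transformer or processor supplier.
@Bean
public StoreBuilder<KeyValueStore<String, Long>> wordCountStore() {
    return Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("word-counts"),
            Serdes.String(), Serdes.Long());
}
----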
Oleg Zhurakousky
dec9ff696f polishing
Resolves #637
2019-05-01 20:26:52 +02:00
Soby Chacko
5014ab8fe1 Kafka Streams multiple function issues
Fixing a bug that occurs when we have multiple beans defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636

Filter in only Kafka streams function beans
2019-05-01 20:26:52 +02:00
Sabby Anandan
3d3f02b6cc Switch to KafkaStreamsProcessor
It appears we have refactored to rename the class from `KStreamProcessor` to `KafkaStreamsProcessor` [see a5344655cb (diff-4a8582ee2d07e268f77a89c0633e42f5)], but the docs weren't updated. This commit does exactly that.
2019-04-30 13:23:44 -07:00
Soby Chacko
7790d4b196 Addressing PR review comments 2019-04-29 19:29:46 -04:00
Soby Chacko
602aed8a4d Adding docs for Kafka Streams functional support
Resolves #633
2019-04-27 20:46:27 -04:00
Guillaume Lemont
c1f4cf9dc6 Fix message listener container bean naming
With multiplexed topics, the topics value can be an array. Since the message listener container
bean name is created by calling toString() on the topics, when it is an array
it creates an unusual bean name. Fixing the issue by creating the bean name
from the destination string directly.
2019-04-26 15:11:20 -04:00
buildmaster
c990140452 Going back to snapshots 2019-04-09 18:10:36 +00:00
buildmaster
383add504a Update SNAPSHOT to 2.2.0.RC1 2019-04-09 18:09:51 +00:00
Oleg Zhurakousky
5f9395a5ec Prepared docs for RC1 2019-04-09 19:23:29 +02:00
Soby Chacko
70eb25d413 Fix failing test 2019-04-09 13:02:57 -04:00
Oleg Zhurakousky
6bc74c6e5c Doc changes related to Rabbit's 'GH-193 Added note on default properties' 2019-04-09 18:56:15 +02:00
Soby Chacko
33603c62f0 Transactional binder producer factory
With a transactional binder, the producer factory should not be destroyed.

Resolves #626
2019-04-08 15:27:06 -04:00
Anshul Mehra
efd46835a1 GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration (#562)
* GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration

Resolves #525

* Update warning messages to be more explicit
2019-04-08 12:44:52 -04:00
Oleg Zhurakousky
3a267bc751 Merge pull request #620 from sobychacko/gh-589
Bean name conflicts (Kafka Streams binder)
2019-03-28 18:33:01 +01:00
Soby Chacko
62e98df0c7 Bean name conflicts (Kafka Streams binder)
When two processors with same name are present in the same application,
there is a bean creation conflict. Fixing that issue.

Add test to verify.
Modify existing tests.

Resolves #589
2019-03-27 16:30:22 -04:00
Matthieu Ghilain
9e156911b4 Fixing typo in documentation
Resolves #619
2019-03-26 18:49:49 +01:00
Oleg Zhurakousky
cd28454818 Updated docs pom back to snapshot 2019-03-25 18:46:05 +01:00
Oleg Zhurakousky
172d469faa Updated spring-doc-resources.version 2019-03-25 16:46:25 +01:00
Oleg Zhurakousky
74cb25a56a Prepared docs POM for M1 2019-03-25 15:39:26 +01:00
Oleg Zhurakousky
7dd4b66f58 Upgraded to s-c-build 2.1.4 2019-03-25 15:30:05 +01:00
Oleg Zhurakousky
908fb77a88 Merge pull request #586 from spring-operator/polish-urls-apache-license-master
URL Cleanup
2019-03-25 14:55:11 +01:00
Soby Chacko
5660c7cf76 Listened partitions assertions
Avoid unnecessary assertions on listened partitions when
autorebalancing is enabled, but no listened partitions are found.

Polish concurrency assignment when listened partitions are empty

polishing the provisioner

Resolves #512
Resolves #587
2019-03-25 13:40:35 +01:00
Spring Operator
f8c1fb45a6 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://www.apache.org/licenses/ with 1 occurrences migrated to:
  https://www.apache.org/licenses/ ([https](https://www.apache.org/licenses/) result 200).
* [ ] http://www.apache.org/licenses/LICENSE-2.0 with 105 occurrences migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
2019-03-21 13:24:11 -05:00
Oleg Zhurakousky
73d3d79651 Minor cleanup and refactorings after previous commit
Removed KafkaStreamsFunctionProperties in favor of StreamFunctionProperties provided by core module

Resolves #537
2019-03-21 17:06:27 +01:00
Soby Chacko
792705d304 Fixing merge conflicts
Addressing checkstyle issues
Test fixes
Workaround for Kafka Streams functions unnecessarily getting wrapped inside a FluxFunction.
2019-03-20 17:25:35 -04:00
Soby Chacko
0a48999e3a Initial implementation for writing Kafka Streams applications using
a functional programming model. Simple Kafka Streams processors can
be written using java.util.function.Function or java.util.function.Consumer using this
approach.

For example, beans can be defined like the following:

@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
    ...
}

@Bean
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process() {
    ...
}

@Bean
public Consumer<KStream<String, Long>> process() {
    ...
}
etc.

Adding tests to verify the new programming model.

Resolves #428
2019-03-20 17:25:35 -04:00
Spring Operator
b7b93c1352 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed But Review Recommended
These URLs were fixed, but the https status was not OK. However, the https status was the same as the http request or http redirected to an https URL, so they were migrated. Your review is recommended.

* [ ] http://compose.docker.io/ (UnknownHostException) with 2 occurrences migrated to:
  https://compose.docker.io/ ([https](https://compose.docker.io/) result UnknownHostException).

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://docs.confluent.io/2.0.0/kafka/security.html with 2 occurrences migrated to:
  https://docs.confluent.io/2.0.0/kafka/security.html ([https](https://docs.confluent.io/2.0.0/kafka/security.html) result 200).
* [ ] http://github.com/ with 3 occurrences migrated to:
  https://github.com/ ([https](https://github.com/) result 200).
* [ ] http://kafka.apache.org/090/documentation.html with 4 occurrences migrated to:
  https://kafka.apache.org/090/documentation.html ([https](https://kafka.apache.org/090/documentation.html) result 200).
* [ ] http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html with 2 occurrences migrated to:
  https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html ([https](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html) result 200).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ with 1 occurrences migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/) result 301).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ with 1 occurrences migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/) result 301).
* [ ] http://docs.spring.io/spring-kafka/reference/html/_reference.html with 1 occurrences migrated to:
  https://docs.spring.io/spring-kafka/reference/html/_reference.html ([https](https://docs.spring.io/spring-kafka/reference/html/_reference.html) result 301).
* [ ] http://plugins.jetbrains.com/plugin/6546 with 2 occurrences migrated to:
  https://plugins.jetbrains.com/plugin/6546 ([https](https://plugins.jetbrains.com/plugin/6546) result 301).
* [ ] http://raw.github.com/ with 1 occurrences migrated to:
  https://raw.github.com/ ([https](https://raw.github.com/) result 301).
* [ ] http://eclipse.org with 2 occurrences migrated to:
  https://eclipse.org ([https](https://eclipse.org) result 302).
* [ ] http://eclipse.org/m2e/ with 4 occurrences migrated to:
  https://eclipse.org/m2e/ ([https](https://eclipse.org/m2e/) result 302).
* [ ] http://www.springsource.com/developer/sts with 2 occurrences migrated to:
  https://www.springsource.com/developer/sts ([https](https://www.springsource.com/developer/sts) result 302).
2019-03-20 17:25:12 -04:00
Spring Operator
718cb44de7 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* http://cloud.spring.io/ with 1 occurrences migrated to:
  https://cloud.spring.io/ ([https](https://cloud.spring.io/) result 200).
* http://cloud.spring.io/spring-cloud-static/ with 1 occurrences migrated to:
  https://cloud.spring.io/spring-cloud-static/ ([https](https://cloud.spring.io/spring-cloud-static/) result 200).
* http://maven.apache.org/xsd/maven-4.0.0.xsd with 6 occurrences migrated to:
  https://maven.apache.org/xsd/maven-4.0.0.xsd ([https](https://maven.apache.org/xsd/maven-4.0.0.xsd) result 200).
* http://repo.spring.io/libs-milestone-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* http://repo.spring.io/libs-snapshot-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* http://repo.spring.io/release with 1 occurrences migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0 with 12 occurrences
* http://www.w3.org/2001/XMLSchema-instance with 6 occurrences
2019-03-20 09:47:51 -04:00
Soby Chacko
de78bfa00b Remove unused setter
Removing an integer version of required acks from KafkaBinderConfigurationProperties

Resolves #558
2019-03-15 15:02:46 -04:00
Oleg Zhurakousky
27bb33f9e3 adjusting for home.html 2019-03-14 20:25:46 +01:00
Oleg Zhurakousky
ecd8cc587c polishing docs 2019-03-14 20:14:02 +01:00
Oleg Zhurakousky
f506da758f Upgraded doc resources version 2019-03-14 19:45:47 +01:00
Oleg Zhurakousky
12a528fd88 Polishing docs styles 2019-03-14 19:19:42 +01:00
Spring Operator
4b138c4a2f URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* http://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch migrated to:
  https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch ([https](https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch) result 200).
* http://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin migrated to:
  https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin ([https](https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin) result 200).
* http://www.apache.org/licenses/LICENSE-2.0 migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
* http://projects.spring.io/spring-cloud migrated to:
  https://projects.spring.io/spring-cloud ([https](https://projects.spring.io/spring-cloud) result 301).
* http://www.spring.io migrated to:
  https://www.spring.io ([https](https://www.spring.io) result 301).
* http://repo.spring.io/libs-milestone-local migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* http://repo.spring.io/libs-release-local migrated to:
  https://repo.spring.io/libs-release-local ([https](https://repo.spring.io/libs-release-local) result 302).
* http://repo.spring.io/libs-snapshot-local migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* http://repo.spring.io/release migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0
* http://maven.apache.org/xsd/maven-4.0.0.xsd
* http://www.w3.org/2001/XMLSchema-instance
2019-03-13 18:38:53 -04:00
Oleg Zhurakousky
0af784aaa9 GH-559 Initial migration of docs to new style
Resolves #559
2019-03-13 16:40:45 +01:00
Soby Chacko
0deb8754bf Remove producer partitioned property
On the producer side, partitioned is a derived property and the applications do not
have to set this explicitly. Remove any explicit references to it from the docs.

Fixes #542
2019-03-05 13:49:23 -05:00
Soby Chacko
95a4681d27 KTable input validation
KTable as an input binding doesn't work if @Input is not specified at the parameter level.
Restructuring the input validation in Kafka Streams binder where it checks for declarative
inputs.

Fixes #536
2019-03-04 16:20:18 -05:00
Soby Chacko
b4a2950acd KafkaStreamsStateStore with multiple input bindings
When KafkaStreamsStateStore annotation is used on a method with multiple input bindings,
it throws an exception. The reason is that each successive input binding after the first one
is trying to recreate the store that is already created. Fixing this issue.

Resolves #551
2019-02-28 16:31:38 -05:00
Gary Russell
c5c81f8148 SGH-1616: Add MessageSourceCustomizer
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1616
2019-02-28 13:44:14 -05:00
Arnaud Jardiné
d80e66d9b8 Actuator health for Kafka-streams binder
Add documentation about health indicator

Fix failing tests + add tests for multiple Kafka streams

Polishing

Resolves #544
2019-02-14 16:26:56 -05:00
Soby Chacko
86c9704ef4 Fix metrics with multi binders
Kafka binder metrics are broken with a multi-binder configuration.
Fixing the issues by propagating the MeterRegistry bean into the
binder context from parent.

Adding test to verify.

Removing the formatter plugin from the parent pom.

Resolves #546
Resolves #549
2019-02-14 09:25:11 +01:00
Gary Russell
63a4acda1b Fix test for SK 2.2.4
Override deserializer getters in test consumer factory because the
defaults incorrectly throw an `UnsupportedOperationException`.

See https://github.com/spring-projects/spring-kafka/pull/963
2019-02-13 16:28:13 -05:00
Gary Russell
31a6f5d7e3 GH-531: Fix test
- resolve conflict with another test that used `EnableBinding(Sink.class)`.
2019-02-07 12:44:32 -05:00
Aldo Sinanaj
4d02c38f70 GH-531: Support tombstones on input/output
GH-531: Set test timeout to 10 seconds

GH-531: Polishing - Fix @StreamListener

Added test for `@StreamListener`.
2019-02-07 09:01:54 -05:00
Oleg Zhurakousky
c22c08e259 GH-1601 satellite changes to the core 2019-02-05 07:08:48 +01:00
Soby Chacko
02e1fec3b4 Checkstyle upgrade
Upgrade checkstyle plugin to use the rules from spring-cloud-build.
Fix all the new checkstyle errors from the new rules.

Resolves #540
2019-02-04 18:11:10 -05:00
Soby Chacko
fb91b2aff9 Temporarily disabling the checkstyle plugin 2019-02-04 12:27:29 -05:00
Oleg Zhurakousky
a881b9a54a Created 2.2.x branch 2019-02-04 10:46:24 +01:00
Aldo Sinanaj
fc92a6b0af GH-389: Topic props: Use .topic instead of .admin
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/389

GH-389: Deprecated [producer/consumer].admin in favor of topic property

GH-389: Fixed comments as suggested by Gary

Polishing - 2 more deprecation warning suppressions
2019-01-29 15:16:38 -05:00
Gary Russell
d65e8ff59d GH-529: Add lz4 and docs for zstd compression
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/529

- Also log exception when getting partition information
2019-01-16 12:49:12 -05:00
buildmaster
9948674f30 Bumping versions to 2.1.1.BUILD-SNAPSHOT after release 2019-01-08 11:53:04 +00:00
buildmaster
cfc1e0e212 Going back to snapshots 2019-01-08 11:53:04 +00:00
buildmaster
4bd206f37f Update SNAPSHOT to 2.1.0.RELEASE 2019-01-08 11:51:00 +00:00
Oleg Zhurakousky
9d1d90e608 Upgraded to spring-kafka 2.2.2 2019-01-07 16:25:23 +01:00
Soby Chacko
0b8a760b5a Fix NPE in Kafka Streams binder
Fixing an NPE when the type for the binder is not explicitly provided in the configuration.

Resolves #516
Resolves #524
Polishing
2019-01-03 20:26:10 +01:00
Gary Russell
d63f9e5fa6 GH-521: Fix pollable source client id
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/521

Previously, all pollable message sources got the client id `message.source`.
MBean registration failed with a warning when multiple pollable sources were present.

Use the binding name as the client id by default, overridable using the `client.id`
consumer property.

**cherry-pick to 2.0.x**

Resolves #523
2019-01-02 20:09:49 +01:00
buildmaster
c51d7e0613 Going back to snapshots 2018-12-20 19:44:59 +00:00
168 changed files with 18816 additions and 5160 deletions

1
.gitignore vendored
View File

@@ -23,3 +23,4 @@ _site/
dump.rdb
.apt_generated
artifacts
.sts4-cache

117
.mvn/wrapper/MavenWrapperDownloader.java vendored Normal file
View File

@@ -0,0 +1,117 @@
/*
* Copyright 2007-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.net.*;
import java.io.*;
import java.nio.channels.*;
import java.util.Properties;
public class MavenWrapperDownloader {

    private static final String WRAPPER_VERSION = "0.5.6";

    /**
     * Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided.
     */
    private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/"
            + WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar";

    /**
     * Path to the maven-wrapper.properties file, which might contain a downloadUrl property to
     * use instead of the default one.
     */
    private static final String MAVEN_WRAPPER_PROPERTIES_PATH =
            ".mvn/wrapper/maven-wrapper.properties";

    /**
     * Path where the maven-wrapper.jar will be saved to.
     */
    private static final String MAVEN_WRAPPER_JAR_PATH =
            ".mvn/wrapper/maven-wrapper.jar";

    /**
     * Name of the property which should be used to override the default download url for the wrapper.
     */
    private static final String PROPERTY_NAME_WRAPPER_URL = "wrapperUrl";

    public static void main(String args[]) {
        System.out.println("- Downloader started");
        File baseDirectory = new File(args[0]);
        System.out.println("- Using base directory: " + baseDirectory.getAbsolutePath());

        // If the maven-wrapper.properties exists, read it and check if it contains a custom
        // wrapperUrl parameter.
        File mavenWrapperPropertyFile = new File(baseDirectory, MAVEN_WRAPPER_PROPERTIES_PATH);
        String url = DEFAULT_DOWNLOAD_URL;
        if(mavenWrapperPropertyFile.exists()) {
            FileInputStream mavenWrapperPropertyFileInputStream = null;
            try {
                mavenWrapperPropertyFileInputStream = new FileInputStream(mavenWrapperPropertyFile);
                Properties mavenWrapperProperties = new Properties();
                mavenWrapperProperties.load(mavenWrapperPropertyFileInputStream);
                url = mavenWrapperProperties.getProperty(PROPERTY_NAME_WRAPPER_URL, url);
            } catch (IOException e) {
                System.out.println("- ERROR loading '" + MAVEN_WRAPPER_PROPERTIES_PATH + "'");
            } finally {
                try {
                    if(mavenWrapperPropertyFileInputStream != null) {
                        mavenWrapperPropertyFileInputStream.close();
                    }
                } catch (IOException e) {
                    // Ignore ...
                }
            }
        }
        System.out.println("- Downloading from: " + url);

        File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH);
        if(!outputFile.getParentFile().exists()) {
            if(!outputFile.getParentFile().mkdirs()) {
                System.out.println(
                        "- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'");
            }
        }
        System.out.println("- Downloading to: " + outputFile.getAbsolutePath());
        try {
            downloadFileFromURL(url, outputFile);
            System.out.println("Done");
            System.exit(0);
        } catch (Throwable e) {
            System.out.println("- Error downloading");
            e.printStackTrace();
            System.exit(1);
        }
    }

    private static void downloadFileFromURL(String urlString, File destination) throws Exception {
        if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) {
            String username = System.getenv("MVNW_USERNAME");
            char[] password = System.getenv("MVNW_PASSWORD").toCharArray();
            Authenticator.setDefault(new Authenticator() {
                @Override
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication(username, password);
                }
            });
        }
        URL website = new URL(urlString);
        ReadableByteChannel rbc;
        rbc = Channels.newChannel(website.openStream());
        FileOutputStream fos = new FileOutputStream(destination);
        fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
        fos.close();
        rbc.close();
    }
}

Binary file not shown.

View File

@@ -1 +1,2 @@
distributionUrl=https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.3.3/apache-maven-3.3.3-bin.zip
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip
wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar

View File

@@ -21,7 +21,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -29,7 +29,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -37,7 +37,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/release</url>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -47,7 +47,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -55,7 +55,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>

View File

@@ -1,6 +1,6 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
@@ -192,7 +192,7 @@
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,

View File

@@ -1,4 +1,9 @@
// Do not edit this file (e.g. go instead to src/main/asciidoc)
////
DO NOT EDIT THIS FILE. IT WAS GENERATED.
Manual changes to this file will be lost when it is generated again.
Edit the files in the src/main/asciidoc/ directory instead.
////
:jdkversion: 1.8
:github-tag: master
@@ -35,7 +40,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
@@ -56,7 +61,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -65,7 +70,7 @@ Also, 0.11.x.x does not support the `autoAddPartitions` property.
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
For common configuration options and properties pertaining to the binder, see the https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#binding-properties[binding properties] in core documentation.
==== Kafka Binder Properties
@@ -122,13 +127,17 @@ spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
Default: `1`.
NOTE: If you are using Kafka broker versions prior to 2.4, then this value should be set to at least `1`.
Starting with version 3.0.8, the binder uses `-1` as the default value, which indicates that the broker 'default.replication.factor' property will be used to determine the number of replicas.
Check with your Kafka broker admins to see whether there is a policy in place that requires a minimum replication factor. If that is the case, then, typically, the `default.replication.factor` will match that value and `-1` should be used, unless you need a replication factor greater than the minimum.
+
Default: `-1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.topic.create.enable` setting of the broker and does not influence it.
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
@@ -151,33 +160,41 @@ Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
spring.cloud.stream.kafka.binder.considerDownWhenAnyPartitionHasNoLeader::
Flag to set the binder health as `down` when any partition on the topic, regardless of the consumer that is receiving data from it, is found without a leader.
+
Default: `false`.
spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
The file will be moved to the location specified as the value for this property which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it will be moved to System's temp directory as returned by `System.getProperty("java.io.tmpdir")`.
This is also true, if this value is present, but the directory cannot be found on the filesystem or is not writable.
+
Default: none.
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
@@ -192,6 +209,8 @@ By default, offsets are committed after all records in the batch of records retu
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
This property is deprecated as of 3.1 in favor of using `ackMode`.
If the `ackMode` is not set and batch mode is not enabled, `RECORD` ackMode will be used.
+
Default: `false`.
autoCommitOffset::
@@ -200,9 +219,14 @@ If set to `false`, a header with the key `kafka_acknowledgment` of the type `org
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
Also see `ackEachRecord`. This property is deprecated as of 3.1. See `ackMode` for more details.
+
Default: `true`.
ackMode::
Specify the container ack mode.
This is based on the AckMode enumeration defined in Spring Kafka.
If the `ackEachRecord` property is set to `true` and the consumer is not in batch mode, the `RECORD` ack mode is used; otherwise, the ack mode provided through this property is used.
autoCommitOnError::
Effective only if `autoCommitOffset` is set to `true`.
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
@@ -225,17 +249,29 @@ Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configurable by setting the `dlqName` property.
The DLQ topic name can be configurable by setting the `dlqName` property or by defining a `@Bean` of type `DlqDestinationResolver`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to having Kafka consumer properties, other configuration properties can be passed here.
For example some properties needed by the application such as `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
dlqName::
@@ -245,6 +281,8 @@ Default: null (If not specified, messages that result in errors are forwarded to
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (i.e., `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
This must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
@@ -270,30 +308,62 @@ Note, the time taken to detect new topics that match the pattern is controlled b
This can be configured using the `configuration` property above.
+
Default: `false`
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.binding.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `min.fetch.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
Bear in mind that batch mode is not supported with `@StreamListener` - it only works with the newer functional programming model.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
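The following is only a sketch of a functional batch consumer; the `Person` payload type and the bean name are illustrative:
[source, java]
----
// Sketch only: with ...consumer.batch-mode=true, each poll is delivered as one List.
@Bean
public Consumer<List<Person>> batchInput() {
    return people -> {
        // one invocation per poll; the list size is governed by max.poll.records and related properties
        people.forEach(person -> System.out.println("received " + person));
    };
}
----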
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See `NewTopic` javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning new topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
@@ -303,6 +373,13 @@ sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
@@ -310,7 +387,13 @@ How long the producer waits to allow more messages to accumulate in the same bat
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
In the case of a regular processor (`Function<String, String>` or `Function<Message<?>, Message<?>>`), if the produced key needs to be the same as the incoming key from the topic, this property can be set as shown below.
`spring.cloud.stream.kafka.bindings.<output-binding-name>.producer.messageKeyExpression: headers['kafka_receivedMessageKey']`
There is an important caveat to keep in mind for reactive functions.
In that case, it is up to the application to manually copy the headers from the incoming messages to outbound messages.
You can set the header, e.g. `myKey` and use `headers['myKey']` as suggested above or, for convenience, simply set the `KafkaHeaders.MESSAGE_KEY` header, and you do not need to set this property at all.
+
Default: `none`.
headerPatterns::
@@ -324,8 +407,38 @@ For example `!ask,as*` will pass `ash` but not `ask`.
Default: `*` (all headers - except the `id` and `timestamp`)
configuration::
Map with a key/value pair containing generic Kafka producer properties.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -333,6 +446,31 @@ If a topic already exists with a smaller partition count and `autoAddPartitions`
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy`, `lz4` and `zstd`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
closeTimeout::
Timeout in number of seconds to wait for when closing the producer.
+
Default: `30`
allowNonTransactional::
Normally, all output bindings associated with a transactional binder will publish in a new transaction, if one is not already in process.
This property allows you to override that behavior.
If set to true, records published to this output binding will not be run in a transaction, unless one is already in process.
+
Default: `false`
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
@@ -368,7 +506,7 @@ public class ManuallyAcknowdledgingConsumer {
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
@@ -380,7 +518,7 @@ spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the http://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
@@ -470,34 +608,88 @@ The following simple application shows how to pause and resume:
@EnableBinding(Sink.class)
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions because the retries run in the original transaction, which may be rolled back, and any published records are rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transaction (e.g. `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then use normal Spring transaction support, such as `TransactionTemplate` or `@Transactional`, for example:
====
[source, java]
----
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }

}
----
====
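For instance, a minimal sketch of driving such a sender with a `TransactionTemplate` and `StreamBridge` (the binding name `output` and the runner bean are illustrative assumptions, not part of the binder API):

====
[source, java]
----
@Bean
public ApplicationRunner runner(PlatformTransactionManager transactionManager, StreamBridge streamBridge) {
    // run a producer-only transaction on an arbitrary (non-listener) thread
    return args -> new TransactionTemplate(transactionManager)
            .executeWithoutResult(status -> streamBridge.send("output", "some payload"));
}
----
====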
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling[this section on error handling] for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
@@ -513,9 +705,25 @@ You can consume these exceptions with your own Spring Integration flow.
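For example, a minimal sketch of such a flow (the destination name `myTopic` and the resulting error channel name `myTopic.errors` are illustrative assumptions; the producer binding must have `errorChannelEnabled` set to `true`):

[source, java]
----
@ServiceActivator(inputChannel = "myTopic.errors")
public void handleSendFailure(ErrorMessage errorMessage) {
    // the payload carries the failed message and the root cause of the send failure
    KafkaSendFailureException ex = (KafkaSendFailureException) errorMessage.getPayload();
    System.out.println("Failed to send: " + ex.getFailedMessage() + " due to " + ex.getCause());
}
----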
The Kafka binder module exposes the following metrics:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Mircometer metrics library. The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
The metrics provided are based on the Micrometer library.
The binder creates the `KafkaBinderMetrics` bean if Micrometer is on the classpath and no other such beans provided by the application.
The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
You can prevent `KafkaBinderMetrics` from creating the necessary infrastructure (such as consumers) and reporting the metrics by providing the following component in the application.
```
@Component
class NoOpBindingMeters {
NoOpBindingMeters(MeterRegistry registry) {
registry.config().meterFilter(
MeterFilter.denyNameStartsWith(KafkaBinderMetrics.OFFSET_LAG_METRIC_NAME));
}
}
```
More details on how to suppress meters selectively can be found https://micrometer.io/docs/concepts#_meter_filters[here].
[[kafka-tombstones]]
=== Tombstone Records (null record values)
@@ -545,39 +753,39 @@ Starting with version 2.1, if you provide a single `KafkaRebalanceListener` bean
----
public interface KafkaBindingRebalanceListener {
    /**
     * Invoked by the container before any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked by the container after any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked when partitions are initially assigned or after a rebalance.
     * Applications might only want to perform seek operations on an initial assignment.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     * @param initial true if this is the initial assignment.
     */
    default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions, boolean initial) {
    }

}
----
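As a sketch, a bean that only seeks to the beginning of each partition on the initial assignment might look like this (the seek strategy itself is an illustrative choice):

[source, java]
----
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            if (initial) {
                // seek only on the very first assignment, not after every rebalance
                consumer.seekToBeginning(partitions);
            }
        }

    };
}
----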
@@ -585,6 +793,21 @@ public interface KafkaBindingRebalanceListener {
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration
If you want advanced customization of consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka,
you can implement the following customizers.
* `ConsumerConfigCustomizer`
* `ProducerConfigCustomizer`
Both of these interfaces provide a way to configure the config map used for consumer and producer properties.
For example, if you want to gain access to a bean that is defined at the application level, you can inject that in the implementation of the `configure` method.
When the binder discovers that these customizers are available as beans, it will invoke the `configure` method right before creating the consumer and producer factories.
Both of these interfaces also provide the binding and destination names, so that these can be used while customizing producer and consumer properties.
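For example, a minimal sketch of a `ConsumerConfigCustomizer` bean, assuming it can be expressed as a lambda over the three-argument `configure` method described above (the binding name `process-in-0` and the property tweak are illustrative):

[source, java]
----
@Bean
public ConsumerConfigCustomizer consumerConfigCustomizer() {
    return (consumerProperties, bindingName, destination) -> {
        // illustrative: raise max.poll.records for one specific binding only
        if ("process-in-0".equals(bindingName)) {
            consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
        }
    };
}
----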
= Appendices
[appendix]
[[building]]
@@ -623,7 +846,7 @@ source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
http://compose.docker.io/[Docker Compose] to run the middeware servers
https://compose.docker.io/[Docker Compose] to run the middleware servers
in Docker containers.
=== Documentation
@@ -632,13 +855,13 @@ There is a "full" profile that will generate documentation.
=== Working with the code
If you don't have an IDE preference we would recommend that you use
http://www.springsource.com/developer/sts[Spring Tools Suite] or
http://eclipse.org[Eclipse] when working with the code. We use the
http://eclipse.org/m2e/[m2eclipe] eclipse plugin for maven support. Other IDEs and tools
https://www.springsource.com/developer/sts[Spring Tools Suite] or
https://eclipse.org[Eclipse] when working with the code. We use the
https://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
should also work without issue.
==== Importing into eclipse with m2eclipse
We recommend the http://eclipse.org/m2e/[m2eclipe] eclipse plugin when working with
We recommend the https://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
marketplace".
@@ -691,7 +914,7 @@ added after the original pull request but before a merge.
`eclipse-code-formatter.xml` file from the
https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
Cloud Build] project. If using IntelliJ, you can use the
http://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
Plugin] to import the same file.
* Make sure all new `.java` files have a simple Javadoc class comment with at least an
`@author` tag identifying you, and preferably at least a paragraph on what the class is
@@ -704,8 +927,8 @@ added after the original pull request but before a merge.
* A few unit tests would help a lot as well -- someone has to do it.
* If no-one else is using your branch, please rebase it against the current master (or
other target branch in the main project).
* When writing a commit message please follow http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).
// ======================================================================================
// ======================================================================================

View File

@@ -1,49 +1,62 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-docs</artifactId>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC4</version>
<version>3.1.0</version>
</parent>
<packaging>pom</packaging>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>
<description>Spring Cloud Stream Kafka Binder Docs</description>
<properties>
<docs.main>spring-cloud-stream-binder-kafka</docs.main>
<main.basedir>${basedir}/..</main.basedir>
<maven.plugin.plugin.version>3.4</maven.plugin.plugin.version>
<configprops.inclusionPattern>.*stream.*</configprops.inclusionPattern>
<upload-docs-zip.phase>deploy</upload-docs-zip.phase>
</properties>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/asciidoc</sourceDirectory>
</build>
<profiles>
<profile>
<id>docs</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<groupId>pl.project13.maven</groupId>
<artifactId>git-commit-id-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-resources-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<inherited>false</inherited>
</plugin>
<plugin>
<groupId>com.agilejava.docbkx</groupId>
<artifactId>docbkx-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<inherited>false</inherited>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<inherited>false</inherited>
<artifactId>maven-deploy-plugin</artifactId>
</plugin>
</plugins>
</build>

View File

@@ -0,0 +1,68 @@
|===
|Name | Default | Description
|spring.cloud.stream.binders | | Additional per-binder properties (see {@link BinderProperties}) if more than one binder of the same type is used (i.e., connect to multiple instances of RabbitMQ). Here you can specify multiple binder configurations, each with different environment settings. For example: spring.cloud.stream.binders.rabbit1.environment. . . , spring.cloud.stream.binders.rabbit2.environment. . .
|spring.cloud.stream.binding-retry-interval | `30` | Retry interval (in seconds) used to schedule binding attempts. Default: 30 sec.
|spring.cloud.stream.bindings | | Additional binding properties (see {@link BinderProperties}) per binding name (e.g., 'input'). For example, this sets the content-type for the 'input' binding of a Sink application: 'spring.cloud.stream.bindings.input.contentType=text/plain'
|spring.cloud.stream.default-binder | | The name of the binder to use by all bindings in the event multiple binders are available (e.g., 'rabbit').
|spring.cloud.stream.dynamic-destination-cache-size | `10` | The maximum size of Least Recently Used (LRU) cache of dynamic destinations. Once this size is reached, new destinations will trigger the removal of old destinations. Default: 10
|spring.cloud.stream.dynamic-destinations | `[]` | A list of destinations that can be bound dynamically. If set, only listed destinations can be bound.
|spring.cloud.stream.function.batch-mode | `false` |
|spring.cloud.stream.function.bindings | |
|spring.cloud.stream.function.definition | | Definition of functions to bind. If several functions need to be composed into one, use pipes (e.g., 'fooFunc\|barFunc')
|spring.cloud.stream.instance-count | `1` | The number of deployed instances of an application. Default: 1. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-count" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index | `0` | The instance id of the application: a number from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index-list | | A list of instance id's from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index-list" where 'foo' is the name of the binding. This setting will override the one set in 'spring.cloud.stream.instance-index'
|spring.cloud.stream.integration.message-handler-not-propagated-headers | | Message header names that will NOT be copied from the inbound message.
|spring.cloud.stream.kafka.binder.authorization-exception-retry-interval | | Time between retries after AuthorizationException is caught in the ListenerContainer; default is null which disables retries. For more info see: {@link org.springframework.kafka.listener.ConsumerProperties#setAuthorizationExceptionRetryInterval(java.time.Duration)}
|spring.cloud.stream.kafka.binder.auto-add-partitions | `false` |
|spring.cloud.stream.kafka.binder.auto-alter-topics | `false` |
|spring.cloud.stream.kafka.binder.auto-create-topics | `true` |
|spring.cloud.stream.kafka.binder.brokers | `[localhost]` |
|spring.cloud.stream.kafka.binder.certificate-store-directory | | When a certificate store location is given as classpath URL (classpath:), then the binder moves the resource from the classpath location inside the JAR to a location on the filesystem. If this value is set, then this location is used, otherwise, the certificate file is copied to the directory returned by java.io.tmpdir.
|spring.cloud.stream.kafka.binder.configuration | | Arbitrary kafka properties that apply to both producers and consumers.
|spring.cloud.stream.kafka.binder.consider-down-when-any-partition-has-no-leader | `false` |
|spring.cloud.stream.kafka.binder.consumer-properties | | Arbitrary kafka consumer properties.
|spring.cloud.stream.kafka.binder.header-mapper-bean-name | | The bean name of a custom header mapper to use instead of a {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
|spring.cloud.stream.kafka.binder.headers | `[]` |
|spring.cloud.stream.kafka.binder.health-timeout | `60` | Time to wait to get partition information in seconds; default 60.
|spring.cloud.stream.kafka.binder.jaas | |
|spring.cloud.stream.kafka.binder.min-partition-count | `1` |
|spring.cloud.stream.kafka.binder.producer-properties | | Arbitrary kafka producer properties.
|spring.cloud.stream.kafka.binder.replication-factor | `-1` |
|spring.cloud.stream.kafka.binder.required-acks | `1` |
|spring.cloud.stream.kafka.binder.transaction.producer.batch-timeout | |
|spring.cloud.stream.kafka.binder.transaction.producer.buffer-size | |
|spring.cloud.stream.kafka.binder.transaction.producer.compression-type | |
|spring.cloud.stream.kafka.binder.transaction.producer.configuration | |
|spring.cloud.stream.kafka.binder.transaction.producer.error-channel-enabled | |
|spring.cloud.stream.kafka.binder.transaction.producer.header-mode | |
|spring.cloud.stream.kafka.binder.transaction.producer.header-patterns | |
|spring.cloud.stream.kafka.binder.transaction.producer.message-key-expression | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-count | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-key-expression | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-key-extractor-name | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-selector-expression | |
|spring.cloud.stream.kafka.binder.transaction.producer.partition-selector-name | |
|spring.cloud.stream.kafka.binder.transaction.producer.required-groups | |
|spring.cloud.stream.kafka.binder.transaction.producer.sync | |
|spring.cloud.stream.kafka.binder.transaction.producer.topic | |
|spring.cloud.stream.kafka.binder.transaction.producer.use-native-encoding | |
|spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix | |
|spring.cloud.stream.kafka.bindings | |
|spring.cloud.stream.metrics.export-properties | | List of properties that are going to be appended to each message. This gets populated by onApplicationEvent, once the context refreshes, to avoid the overhead of doing it on a per-message basis.
|spring.cloud.stream.metrics.key | | The name of the metric being emitted. Should be a unique value per application. Defaults to: ${spring.application.name:${vcap.application.name:${spring.config.name:application}}}.
|spring.cloud.stream.metrics.meter-filter | | Pattern to control the 'meters' one wants to capture. By default all 'meters' will be captured. For example, 'spring.integration.*' will only capture metric information for meters whose name starts with 'spring.integration'.
|spring.cloud.stream.metrics.properties | | Application properties that should be added to the metrics payload For example: `spring.application**`.
|spring.cloud.stream.metrics.schedule-interval | `60s` | Interval expressed as Duration for scheduling metrics snapshots publishing. Defaults to 60 seconds
|spring.cloud.stream.override-cloud-connectors | `false` | This property is only applicable when the cloud profile is active and Spring Cloud Connectors are provided with the application. If the property is false (the default), the binder detects a suitable bound service (for example, a RabbitMQ service bound in Cloud Foundry for the RabbitMQ binder) and uses it for creating connections (usually through Spring Cloud Connectors). When set to true, this property instructs binders to completely ignore the bound services and rely on Spring Boot properties (for example, relying on the spring.rabbitmq.* properties provided in the environment for the RabbitMQ binder). The typical usage of this property is to be nested in a customized environment when connecting to multiple systems.
|spring.cloud.stream.pollable-source | `none` | A semi-colon delimited list of binding names of pollable sources. Binding names follow the same naming convention as functions. For example, name '...pollable-source=foobar' will be accessible as the 'foobar-in-0' binding
|spring.cloud.stream.poller.cron | | Cron expression value for the Cron Trigger.
|spring.cloud.stream.poller.fixed-delay | `1000` | Fixed delay for default poller.
|spring.cloud.stream.poller.initial-delay | `0` | Initial delay for periodic triggers.
|spring.cloud.stream.poller.max-messages-per-poll | `1` | Maximum messages per poll for the default poller.
|spring.cloud.stream.poller.time-unit | | The TimeUnit to apply to delay values.
|spring.cloud.stream.sendto.destination | `none` | The name of the header used to determine the name of the output destination
|spring.cloud.stream.source | | A colon delimited string representing the names of the sources based on which source bindings will be created. This is primarily to support cases where source binding may be required without providing a corresponding Supplier. (e.g., for cases where the actual source of data is outside of scope of spring-cloud-stream - HTTP -> Stream)
|===

View File

@@ -34,7 +34,7 @@ source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
http://compose.docker.io/[Docker Compose] to run the middeware servers
https://compose.docker.io/[Docker Compose] to run the middleware servers
in Docker containers.
=== Documentation
@@ -43,13 +43,13 @@ There is a "full" profile that will generate documentation.
=== Working with the code
If you don't have an IDE preference we would recommend that you use
http://www.springsource.com/developer/sts[Spring Tools Suite] or
http://eclipse.org[Eclipse] when working with the code. We use the
http://eclipse.org/m2e/[m2eclipe] eclipse plugin for maven support. Other IDEs and tools
https://www.springsource.com/developer/sts[Spring Tools Suite] or
https://eclipse.org[Eclipse] when working with the code. We use the
https://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
should also work without issue.
==== Importing into eclipse with m2eclipse
We recommend the http://eclipse.org/m2e/[m2eclipe] eclipse plugin when working with
We recommend the https://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
marketplace".

View File

@@ -24,7 +24,7 @@ added after the original pull request but before a merge.
`eclipse-code-formatter.xml` file from the
https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
Cloud Build] project. If using IntelliJ, you can use the
http://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
Plugin] to import the same file.
* Make sure all new `.java` files have a simple Javadoc class comment with at least an
`@author` tag identifying you, and preferably at least a paragraph on what the class is
@@ -37,6 +37,6 @@ added after the original pull request but before a merge.
* A few unit tests would help a lot as well -- someone has to do it.
* If no-one else is using your branch, please rebase it against the current master (or
other target branch in the main project).
* When writing a commit message please follow http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).

View File

@@ -1,12 +1,65 @@
[[kafka-dlq-processing]]
=== Dead-Letter Topic Processing
Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
[[dlq-partition-selection]]
==== Dead-Letter Topic Partition Selection
By default, records are published to the Dead-Letter topic using the same partition as the original record.
This means the Dead-Letter topic must have at least as many partitions as the original topic.
To change this behavior, add a `DlqPartitionFunction` implementation as a `@Bean` to the application context.
Only one such bean can be present.
The function is provided with the consumer group, the failed `ConsumerRecord` and the exception.
For example, if you always want to route to partition 0, you might use:
====
[source, java]
----
@Bean
public DlqPartitionFunction partitionFunction() {
return (group, record, ex) -> 0;
}
----
====
NOTE: If you set a consumer binding's `dlqPartitions` property to 1 (and the binder's `minPartitionCount` is equal to `1`), there is no need to supply a `DlqPartitionFunction`; the framework will always use partition 0.
If you set a consumer binding's `dlqPartitions` property to a value greater than `1` (or the binder's `minPartitionCount` is greater than `1`), you **must** provide a `DlqPartitionFunction` bean, even if the partition count is the same as the original topic's.
It is also possible to define a custom name for the DLQ topic.
In order to do so, register an implementation of `DlqDestinationResolver` as a `@Bean` in the application context.
When the binder detects such a bean, it takes precedence; otherwise, the binder uses the `dlqName` property.
If neither of these is found, it defaults to `error.<destination>.<group>`.
Here is an example of `DlqDestinationResolver` as a `@Bean`.
====
[source]
----
@Bean
public DlqDestinationResolver dlqDestinationResolver() {
return (rec, ex) -> {
if (rec.topic().equals("word1")) {
return "topic1-dlq";
}
else {
return "topic2-dlq";
}
};
}
----
====
One important thing to keep in mind when providing an implementation for `DlqDestinationResolver` is that the provisioner in the binder will not auto-create topics for the application.
This is because there is no way for the binder to infer the names of all the DLQ topics the implementation might send to.
Therefore, if you provide DLQ names using this strategy, it is the application's responsibility to ensure that those topics are created beforehand.
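One way to do that (a sketch, assuming Spring for Apache Kafka's `KafkaAdmin` is auto-configured by Spring Boot and the resolver above routes to `topic1-dlq` and `topic2-dlq`) is to declare the topics as `NewTopic` beans:

[source, java]
----
@Bean
public NewTopic topic1Dlq() {
    // 1 partition, replication factor 1 -- adjust to your environment
    return new NewTopic("topic1-dlq", 1, (short) 1);
}

@Bean
public NewTopic topic2Dlq() {
    return new NewTopic("topic2-dlq", 1, (short) 1);
}
----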
[[dlq-handling]]
==== Handling Records in a Dead-Letter Topic
Because the framework cannot anticipate how users would want to dispose of dead-lettered messages, it does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a "`parking lot`" topic after three attempts.
The application is another spring-cloud-stream application that reads from the dead-letter topic.
It terminates when no messages are received for 5 seconds.
It exits when no messages are received for 5 seconds.
The examples assume the original destination is `so8400out` and the consumer group is `so8400`.
@@ -25,10 +78,8 @@ spring.cloud.stream.bindings.input.group=so8400replay
spring.cloud.stream.bindings.input.destination=error.so8400out.so8400
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.output.producer.partitioned=true
spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot
spring.cloud.stream.bindings.parkingLot.producer.partitioned=true
spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest
@@ -90,7 +141,7 @@ public class ReRouteDlqKApplication implements CommandLineRunner {
int count = this.processed.get();
Thread.sleep(5000);
if (count == this.processed.get()) {
System.out.println("Idle, terminating");
System.out.println("Idle, exiting");
return;
}
}

View File

@@ -4,7 +4,7 @@ set -e
# Set default props like MAVEN_PATH, ROOT_FOLDER etc.
function set_default_props() {
# The script should be executed from the root folder
# The script should be run from the root folder
ROOT_FOLDER=`pwd`
echo "Current folder is ${ROOT_FOLDER}"
@@ -40,7 +40,7 @@ function check_if_anything_to_sync() {
function retrieve_current_branch() {
# Code getting the name of the current branch. For master we want to publish as we did until now
# http://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch
# https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch
# If there is a branch already passed will reuse it - otherwise will try to find it
CURRENT_BRANCH=${BRANCH}
if [[ -z "${CURRENT_BRANCH}" ]] ; then
@@ -65,7 +65,7 @@ function build_docs_if_applicable() {
}
# Get the name of the `docs.main` property
# Get whitelisted branches - assumes that a `docs` module is available under `docs` profile
# Get allowed branches - assumes that a `docs` module is available under `docs` profile
function retrieve_doc_properties() {
MAIN_ADOC_VALUE=$("${MAVEN_PATH}"mvn -q \
-Dexec.executable="echo" \
@@ -75,14 +75,14 @@ function retrieve_doc_properties() {
echo "Extracted 'main.adoc' from Maven build [${MAIN_ADOC_VALUE}]"
WHITELIST_PROPERTY=${WHITELIST_PROPERTY:-"docs.whitelisted.branches"}
WHITELISTED_BRANCHES_VALUE=$("${MAVEN_PATH}"mvn -q \
ALLOW_PROPERTY=${ALLOW_PROPERTY:-"docs.allowed.branches"}
ALLOWED_BRANCHES_VALUE=$("${MAVEN_PATH}"mvn -q \
-Dexec.executable="echo" \
-Dexec.args="\${${WHITELIST_PROPERTY}}" \
-Dexec.args="\${${ALLOW_PROPERTY}}" \
org.codehaus.mojo:exec-maven-plugin:1.3.1:exec \
-P docs \
-pl docs)
echo "Extracted '${WHITELIST_PROPERTY}' from Maven build [${WHITELISTED_BRANCHES_VALUE}]"
echo "Extracted '${ALLOW_PROPERTY}' from Maven build [${ALLOWED_BRANCHES_VALUE}]"
}
# Stash any outstanding changes
@@ -147,10 +147,10 @@ function copy_docs_for_current_version() {
COMMIT_CHANGES="yes"
else
echo -e "Current branch is [${CURRENT_BRANCH}]"
# http://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin
if [[ ",${WHITELISTED_BRANCHES_VALUE}," = *",${CURRENT_BRANCH},"* ]] ; then
# https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin
if [[ ",${ALLOWED_BRANCHES_VALUE}," = *",${CURRENT_BRANCH},"* ]] ; then
mkdir -p ${ROOT_FOLDER}/${CURRENT_BRANCH}
echo -e "Branch [${CURRENT_BRANCH}] is whitelisted! Will copy the current docs to the [${CURRENT_BRANCH}] folder"
echo -e "Branch [${CURRENT_BRANCH}] is allowed! Will copy the current docs to the [${CURRENT_BRANCH}] folder"
for f in docs/target/generated-docs/*; do
file=${f#docs/target/generated-docs/*}
if ! git ls-files -i -o --exclude-standard --directory | grep -q ^$file$; then
@@ -169,7 +169,7 @@ function copy_docs_for_current_version() {
done
COMMIT_CHANGES="yes"
else
echo -e "Branch [${CURRENT_BRANCH}] is not on the white list! Check out the Maven [${WHITELIST_PROPERTY}] property in
echo -e "Branch [${CURRENT_BRANCH}] is not on the allow list! Check out the Maven [${ALLOW_PROPERTY}] property in
[docs] module available under [docs] profile. Won't commit any changes to gh-pages for this branch."
fi
fi
@@ -250,10 +250,10 @@ the script will work in the following manner:
- if there's no gh-pages / target for docs module then the script ends
- for master branch the generated docs are copied to the root of gh-pages branch
- for any other branch (if that branch is whitelisted) a subfolder with branch name is created
- for any other branch (if that branch is allowed) a subfolder with branch name is created
and docs are copied there
- if the version switch is passed (-v) then a tag with (v) prefix will be retrieved and a folder
with that version number will be created in the gh-pages branch. WARNING! No whitelist verification will take place
with that version number will be created in the gh-pages branch. WARNING! No allow verification will take place
- if the destination switch is passed (-d) then the script will check if the provided dir is a git repo and then will
switch to gh-pages of that repo and copy the generated docs to `docs/<project-name>/<version>`
- if the destination switch is passed (-d) then the script will check if the provided dir is a git repo and then will
@@ -327,4 +327,4 @@ build_docs_if_applicable
retrieve_doc_properties
stash_changes
add_docs_from_target
checkout_previous_branch
checkout_previous_branch

Binary file not shown.

After

Width:  |  Height:  |  Size: 233 KiB

File diff suppressed because it is too large Load Diff

View File

@@ -19,7 +19,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
@@ -40,7 +40,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -49,7 +49,7 @@ Also, 0.11.x.x does not support the `autoAddPartitions` property.
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
For common configuration options and properties pertaining to the binder, see the https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#binding-properties[binding properties] in core documentation.
==== Kafka Binder Properties
@@ -106,13 +106,17 @@ spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
Default: `1`.
NOTE: If you are using Kafka broker versions prior to 2.4, then this value should be set to at least `1`.
Starting with version 3.0.8, the binder uses `-1` as the default value, which indicates that the broker 'default.replication.factor' property will be used to determine the number of replicas.
Check with your Kafka broker admins to see if there is a policy in place that requires a minimum replication factor. If that is the case, then, typically, the `default.replication.factor` will match that value, and `-1` should be used unless you need a replication factor greater than the minimum.
+
Default: `-1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.topic.create.enable` setting of the broker and does not influence it.
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
@@ -135,33 +139,41 @@ Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
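+
For example, a minimal sketch of such a bean (the trusted package `com.example.events` is an illustrative assumption):
+
[source, java]
----
@Bean("kafkaBinderHeaderMapper")
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
    BinderHeaderMapper mapper = new BinderHeaderMapper();
    // trust an additional package for JSON header deserialization (illustrative package name)
    mapper.addTrustedPackages("com.example.events");
    return mapper;
}
----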
spring.cloud.stream.kafka.binder.considerDownWhenAnyPartitionHasNoLeader::
Flag to set the binder health as `down` when any partition on the topic, regardless of which consumer is receiving data from it, is found without a leader.
+
Default: `false`.
spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
The file will be moved to the location specified as the value for this property, which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it is moved to the system's temp directory, as returned by `System.getProperty("java.io.tmpdir")`.
The same applies if this value is set but the directory cannot be found on the filesystem or is not writable.
+
Default: none.
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
@@ -176,6 +188,8 @@ By default, offsets are committed after all records in the batch of records retu
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
This property is deprecated as of 3.1 in favor of using `ackMode`.
If the `ackMode` is not set and batch mode is not enabled, `RECORD` ackMode will be used.
+
Default: `false`.
autoCommitOffset::
@@ -184,9 +198,14 @@ If set to `false`, a header with the key `kafka_acknowledgment` of the type `org
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
Also see `ackEachRecord`. This property is deprecated as of 3.1. See `ackMode` for more details.
+
Default: `true`.
ackMode::
Specify the container ack mode.
This is based on the AckMode enumeration defined in Spring Kafka.
If the `ackEachRecord` property is set to `true` and the consumer is not in batch mode, the `RECORD` ack mode is used; otherwise, the ack mode provided by this property is used.
autoCommitOnError::
Effective only if `autoCommitOffset` is set to `true`.
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
@@ -209,17 +228,29 @@ Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configurable by setting the `dlqName` property.
The DLQ topic name can be configurable by setting the `dlqName` property or by defining a `@Bean` of type `DlqDestinationResolver`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to having Kafka consumer properties, other configuration properties can be passed here.
For example, some properties needed by the application, such as `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
dlqName::
@@ -229,6 +260,8 @@ Default: null (If not specified, messages that result in errors are forwarded to
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (that is, `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
This must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
@@ -254,30 +287,62 @@ Note, the time taken to detect new topics that match the pattern is controlled b
This can be configured using the `configuration` property above.
+
Default: `false`
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.binding.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by the Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, and `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
Bear in mind that batch mode is not supported with `@StreamListener` - it only works with the newer functional programming model.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
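For example, a minimal sketch of registering such a batch error handler via a `ListenerContainerCustomizer` bean (the generic container type and the choice of `SeekToCurrentBatchErrorHandler` follow the discussion above; treat this as a sketch rather than the definitive configuration):

[source, java]
----
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> containerCustomizer() {
    return (container, destinationName, group) ->
            // re-seek the whole batch so that it is redelivered after a failure
            container.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
}
----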
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See `NewTopic` javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning new topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
@@ -287,6 +352,13 @@ sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
@@ -294,7 +366,13 @@ How long the producer waits to allow more messages to accumulate in the same bat
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
In the case of a regular processor (`Function<String, String>` or `Function<Message<?>, Message<?>>`), if the produced key needs to be the same as the incoming key from the topic, this property can be set as shown below.
`spring.cloud.stream.kafka.bindings.<output-binding-name>.producer.messageKeyExpression: headers['kafka_receivedMessageKey']`
There is an important caveat to keep in mind for reactive functions.
In that case, it is up to the application to manually copy the headers from the incoming messages to outbound messages.
You can set the header, e.g. `myKey` and use `headers['myKey']` as suggested above or, for convenience, simply set the `KafkaHeaders.MESSAGE_KEY` header, and you do not need to set this property at all.
+
Default: `none`.
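+
For example, a minimal sketch of a reactive function that copies the incoming record key to the outbound `KafkaHeaders.MESSAGE_KEY` header (the function name, binding types, and transformation are illustrative):
+
[source, java]
----
@Bean
public Function<Flux<Message<String>>, Flux<Message<String>>> process() {
    return flux -> flux.map(message -> MessageBuilder
            .withPayload(message.getPayload().toUpperCase())
            // copy the incoming record key so it is used as the outbound key
            .setHeader(KafkaHeaders.MESSAGE_KEY,
                    message.getHeaders().get(KafkaHeaders.RECEIVED_MESSAGE_KEY))
            .build());
}
----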
headerPatterns::
@@ -308,8 +386,38 @@ For example `!ask,as*` will pass `ash` but not `ask`.
Default: `*` (all headers - except the `id` and `timestamp`)
configuration::
Map with a key/value pair containing generic Kafka producer properties.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
Default: `false`.
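+
For example, a minimal sketch of setting the topic header on the outbound message (the function name and topic name are illustrative; `useTopicHeader` must be `true` on this output binding):
+
[source, java]
----
@Bean
public Function<String, Message<String>> dynamicTopic() {
    return payload -> MessageBuilder.withPayload(payload)
            // overrides the default binding destination for this record
            .setHeader(KafkaHeaders.TOPIC, "some-other-topic")
            .build();
}
----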
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
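+
For example, a minimal sketch of a consumer of that channel (the channel name `sendResults` is an illustrative assumption for the value of `recordMetadataChannel`):
+
[source, java]
----
@ServiceActivator(inputChannel = "sendResults")
public void handleResults(Message<?> sent) {
    // the RECORD_METADATA header carries the topic, partition and offset of the written record
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    System.out.println("Written to " + meta.topic() + "-" + meta.partition() + "@" + meta.offset());
}
----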
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -317,6 +425,31 @@ If a topic already exists with a smaller partition count and `autoAddPartitions`
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy`, `lz4` and `zstd`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, by using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
closeTimeout::
The timeout, in seconds, to wait when closing the producer.
+
Default: `30`
allowNonTransactional::
Normally, all output bindings associated with a transactional binder will publish in a new transaction, if one is not already in process.
This property allows you to override that behavior.
If set to true, records published to this output binding will not be run in a transaction, unless one is already in process.
+
Default: `false`
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
@@ -352,7 +485,7 @@ public class ManuallyAcknowdledgingConsumer {
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
@@ -364,7 +497,7 @@ spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the http://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
@@ -477,11 +610,65 @@ public class Application {
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions because the retries run in the original transaction, which may be rolled back, and any published records would then be rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for a producer-only transaction (for example, a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
@Value("${unique.tx.id.per.instance}") String txId) {
ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
MessageChannel.class)).getTransactionalProducerFactory();
KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
tm.setTransactionIdPrefix(txId);
return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then use normal Spring transaction support (for example, `TransactionTemplate` or `@Transactional`):
====
[source, java]
----
public static class Sender {
@Transactional
public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
}
}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
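A minimal sketch is shown below; `ChainedTransactionManager` comes from `spring-data-commons`, the Kafka transaction manager is assumed to be defined as in the earlier example, and the JDBC transaction manager stands in for whatever other transaction manager your application uses:
====
[source, java]
----
@Bean
public ChainedTransactionManager chainedTransactionManager(
        KafkaTransactionManager<byte[], byte[]> kafkaTransactionManager,
        DataSourceTransactionManager jdbcTransactionManager) {
    // Transactions are started in the given order and committed (or rolled back) in reverse order.
    return new ChainedTransactionManager(kafkaTransactionManager, jdbcTransactionManager);
}
----
====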
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling[this section on error handling] for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
@@ -497,9 +684,25 @@ You can consume these exceptions with your own Spring Integration flow.
The Kafka binder module exposes the following metrics:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Mircometer metrics library. The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
The metrics provided are based on the Micrometer library.
The binder creates the `KafkaBinderMetrics` bean if Micrometer is on the classpath and no other such bean is provided by the application.
The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
You can prevent `KafkaBinderMetrics` from creating the necessary infrastructure (such as consumers) and reporting the metrics by providing the following component in the application:
[source, java]
----
@Component
class NoOpBindingMeters {
NoOpBindingMeters(MeterRegistry registry) {
registry.config().meterFilter(
MeterFilter.denyNameStartsWith(KafkaBinderMetrics.OFFSET_LAG_METRIC_NAME));
}
}
----
More details on how to suppress meters selectively can be found https://micrometer.io/docs/concepts#_meter_filters[here].
[[kafka-tombstones]]
=== Tombstone Records (null record values)
@@ -568,3 +771,18 @@ public interface KafkaBindingRebalanceListener {
====
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration
If you want advanced customization of consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka,
you can implement the following customizers.
* `ConsumerConfigCustomizer`
* `ProducerConfigCustomizer`
Both of these interfaces provide a way to configure the config map used for consumer and producer properties.
For example, if you want to gain access to a bean that is defined at the application level, you can inject that in the implementation of the `configure` method.
When the binder discovers that these customizers are available as beans, it will invoke the `configure` method right before creating the consumer and producer factories.
Both of these interfaces also provide access to the binding and destination names, so those can be used while customizing the producer and consumer properties.
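A minimal sketch of a consumer-side customizer follows; the binding name and the exact `configure` signature shown here (the config map plus the binding and destination names, as described above) are assumptions for illustration:

[source, java]
----
@Component
public class MaxPollRecordsConsumerConfigCustomizer implements ConsumerConfigCustomizer {

    @Override
    public void configure(Map<String, Object> consumerProperties, String bindingName, String destination) {
        // Illustrative: tune a Kafka consumer property for one specific binding only.
        if ("process-in-0".equals(bindingName)) {
            consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 250);
        }
    }
}
----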

View File

@@ -49,7 +49,6 @@ spring:
output:
destination: partitioned.topic
producer:
partitioned: true
partition-key-expression: headers['partitionKey']
partition-count: 12
----

View File

@@ -10,7 +10,7 @@
[[spring-cloud-stream-binder-kafka-reference]]
= Spring Cloud Stream Kafka Binder Reference Guide
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek, Gary Russell
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek, Gary Russell, Arnaud Jardiné, Soby Chacko
:doctype: book
:toc:
:toclevels: 4
@@ -21,16 +21,20 @@ Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinat
:spring-cloud-stream-binder-kafka-repo: snapshot
:github-tag: master
:spring-cloud-stream-binder-kafka-docs-version: current
:spring-cloud-stream-binder-kafka-docs: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:spring-cloud-stream-binder-kafka-docs: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: http://raw.github.com/{github-repo}/{github-tag}
:github-code: http://github.com/{github-repo}/tree/{github-tag}
:github-wiki: http://github.com/{github-repo}/wiki
:github-master-code: http://github.com/{github-repo}/tree/master
:github-raw: https://raw.github.com/{github-repo}/{github-tag}
:github-code: https://github.com/{github-repo}/tree/{github-tag}
:github-wiki: https://github.com/{github-repo}/wiki
:github-master-code: https://github.com/{github-repo}/tree/master
:sc-ext: java
// ======================================================================================
*{project-version}*
= Reference Guide
include::overview.adoc[]

189
mvnw vendored
View File

@@ -19,7 +19,7 @@
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# Maven2 Start Up Batch script
# Maven Start Up Batch script
#
# Required ENV vars:
# ------------------
@@ -54,38 +54,16 @@ case "`uname`" in
CYGWIN*) cygwin=true ;;
MINGW*) mingw=true;;
Darwin*) darwin=true
#
# Look for the Apple JDKs first to preserve the existing behaviour, and then look
# for the new JDKs provided by Oracle.
#
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Java/JavaVirtualMachines/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L "/Library/Java/JavaVirtualMachines/CurrentJDK" ] ; then
#
# Oracle JDKs
#
export JAVA_HOME=/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
if [ -z "$JAVA_HOME" ] && [ -x "/usr/libexec/java_home" ]; then
#
# Apple JDKs
#
export JAVA_HOME=`/usr/libexec/java_home`
fi
;;
# Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home
# See https://developer.apple.com/library/mac/qa/qa1170/_index.html
if [ -z "$JAVA_HOME" ]; then
if [ -x "/usr/libexec/java_home" ]; then
export JAVA_HOME="`/usr/libexec/java_home`"
else
export JAVA_HOME="/Library/Java/Home"
fi
fi
;;
esac
if [ -z "$JAVA_HOME" ] ; then
@@ -130,13 +108,12 @@ if $cygwin ; then
CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi
# For Migwn, ensure paths are in UNIX format before anything is touched
# For Mingw, ensure paths are in UNIX format before anything is touched
if $mingw ; then
[ -n "$M2_HOME" ] &&
M2_HOME="`(cd "$M2_HOME"; pwd)`"
[ -n "$JAVA_HOME" ] &&
JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
# TODO classpath?
fi
if [ -z "$JAVA_HOME" ]; then
@@ -184,27 +161,28 @@ fi
CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --path --windows "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
fi
# traverses directory structure from process work directory to filesystem root
# first directory with .mvn subdirectory is considered project base directory
find_maven_basedir() {
local basedir=$(pwd)
local wdir=$(pwd)
if [ -z "$1" ]
then
echo "Path not specified to find_maven_basedir"
return 1
fi
basedir="$1"
wdir="$1"
while [ "$wdir" != '/' ] ; do
if [ -d "$wdir"/.mvn ] ; then
basedir=$wdir
break
fi
wdir=$(cd "$wdir/.."; pwd)
# workaround for JBEAP-8937 (on Solaris 10/Sparc)
if [ -d "${wdir}" ]; then
wdir=`cd "$wdir/.."; pwd`
fi
# end of workaround
done
echo "${basedir}"
}
@@ -216,9 +194,108 @@ concat_lines() {
fi
}
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-$(find_maven_basedir)}
BASE_DIR=`find_maven_basedir "$(pwd)"`
if [ -z "$BASE_DIR" ]; then
exit 1;
fi
##########################################################################################
# Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
# This allows using the maven wrapper in projects that prohibit checking in binary data.
##########################################################################################
if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found .mvn/wrapper/maven-wrapper.jar"
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
fi
if [ -n "$MVNW_REPOURL" ]; then
jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
else
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
fi
while IFS="=" read key value; do
case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
esac
done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties"
if [ "$MVNW_VERBOSE" = true ]; then
echo "Downloading from: $jarUrl"
fi
wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
if $cygwin; then
wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"`
fi
if command -v wget > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found wget ... using wget"
fi
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
wget "$jarUrl" -O "$wrapperJarPath"
else
wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath"
fi
elif command -v curl > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found curl ... using curl"
fi
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
curl -o "$wrapperJarPath" "$jarUrl" -f
else
curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Falling back to using Java to download"
fi
javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
# For Cygwin, switch paths to Windows format before running javac
if $cygwin; then
javaClass=`cygpath --path --windows "$javaClass"`
fi
if [ -e "$javaClass" ]; then
if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Compiling MavenWrapperDownloader.java ..."
fi
# Compiling the Java class
("$JAVA_HOME/bin/javac" "$javaClass")
fi
if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
# Running the downloader
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Running MavenWrapperDownloader.java ..."
fi
("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR")
fi
fi
fi
fi
##########################################################################################
# End of extension
##########################################################################################
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"}
if [ "$MVNW_VERBOSE" = true ]; then
echo $MAVEN_PROJECTBASEDIR
fi
MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --path --windows "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
[ -n "$MAVEN_PROJECTBASEDIR" ] &&
MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
fi
# Provide a "standardized" way to retrieve the CLI args that will
# work with both Windows and non-Windows executions.
MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
@@ -226,20 +303,8 @@ export MAVEN_CMD_LINE_ARGS
WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
echo "Running version check"
VERSION=$( sed '\!<parent!,\!</parent!d' `dirname $0`/pom.xml | grep '<version' | head -1 | sed -e 's/.*<version>//' -e 's!</version>.*$!!' )
echo "The found version is [${VERSION}]"
if echo $VERSION | egrep -q 'M|RC'; then
echo Activating \"milestone\" profile for version=\"$VERSION\"
echo $MAVEN_ARGS | grep -q milestone || MAVEN_ARGS="$MAVEN_ARGS -Pmilestone"
else
echo Deactivating \"milestone\" profile for version=\"$VERSION\"
echo $MAVEN_ARGS | grep -q milestone && MAVEN_ARGS=$(echo $MAVEN_ARGS | sed -e 's/-Pmilestone//')
fi
exec "$JAVACMD" \
$MAVEN_OPTS \
-classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
"-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
${WRAPPER_LAUNCHER} ${MAVEN_ARGS} "$@"
${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@"

53
mvnw.cmd vendored
View File

@@ -18,7 +18,7 @@
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven2 Start Up Batch script
@REM Maven Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@@ -26,7 +26,7 @@
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a key stroke before ending
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@@ -35,7 +35,9 @@
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM enable echoing my setting MAVEN_BATCH_ECHO to 'on'
@REM set title of command window
title %0
@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
@@ -80,8 +82,6 @@ goto error
:init
set MAVEN_CMD_LINE_ARGS=%*
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
@@ -117,11 +117,48 @@ for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do s
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="".\.mvn\wrapper\maven-wrapper.jar""
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CMD_LINE_ARGS%
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
FOR /F "tokens=1,2 delims==" %%A IN ("%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties") DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
if "%MVNW_VERBOSE%" == "true" (
echo Found %WRAPPER_JAR%
)
) else (
if not "%MVNW_REPOURL%" == "" (
SET DOWNLOAD_URL="%MVNW_REPOURL%/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
)
if "%MVNW_VERBOSE%" == "true" (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
)
powershell -Command "&{"^
"$webclient = new-object System.Net.WebClient;"^
"if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {"^
"$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');"^
"}"^
"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"^
"}"
if "%MVNW_VERBOSE%" == "true" (
echo Finished downloading %WRAPPER_JAR%
)
)
@REM End of extension
@REM Provide a "standardized" way to retrieve the CLI args that will
@REM work with both Windows and non-Windows executions.
set MAVEN_CMD_LINE_ARGS=%*
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end

110
pom.xml
View File

@@ -1,21 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC4</version>
<version>3.1.0</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>2.1.0.RC3</version>
<version>3.0.0</version>
<relativePath />
</parent>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.2.0.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.1.0.RELEASE</spring-integration-kafka.version>
<kafka.version>2.0.0</kafka.version>
<spring-cloud-stream.version>2.1.0.RC4</spring-cloud-stream.version>
<spring-kafka.version>2.6.3</spring-kafka.version>
<spring-integration-kafka.version>5.4.1</spring-integration-kafka.version>
<kafka.version>2.6.0</kafka.version>
<spring-cloud-schema-registry.version>1.1.0</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.1.0</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
</properties>
<modules>
<module>spring-cloud-stream-binder-kafka</module>
@@ -47,13 +51,7 @@
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
@@ -89,7 +87,13 @@
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.13</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
<scope>test</scope>
<version>${kafka.version}</version>
@@ -108,18 +112,37 @@
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<version>${spring-cloud-stream.version}</version>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<version>${spring-cloud-schema-registry.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
@@ -145,31 +168,6 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-tools</artifactId>
<version>${spring-cloud-stream.version}</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>checkstyle-validation</id>
<phase>validate</phase>
<configuration>
<configLocation>checkstyle.xml</configLocation>
<headerLocation>checkstyle-header.txt</headerLocation>
<suppressionsLocation>checkstyle-suppressions.xml</suppressionsLocation>
<encoding>UTF-8</encoding>
<consoleOutput>true</consoleOutput>
<failsOnError>true</failsOnError>
<includeTestSourceDirectory>true</includeTestSourceDirectory>
</configuration>
<goals>
<goal>check</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
@@ -181,7 +179,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -192,7 +190,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -200,17 +198,25 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/release</url>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -221,7 +227,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -229,7 +235,7 @@
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/libs-release-local</url>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -237,4 +243,12 @@
</pluginRepositories>
</profile>
</profiles>
<reporting>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
</plugin>
</plugins>
</reporting>
</project>

View File

@@ -1,17 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC4</version>
<version>3.1.0</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>
<url>http://projects.spring.io/spring-cloud</url>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
<url>https://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>

View File

@@ -1 +0,0 @@
provides: spring-cloud-starter-stream-kafka

View File

@@ -1,18 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC4</version>
<version>3.1.0</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>
<url>http://projects.spring.io/spring-cloud</url>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
<url>https://www.spring.io</url>
</organization>
<dependencies>

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -54,7 +54,8 @@ public class JaasLoginModuleConfiguration {
public void setControlFlag(String controlFlag) {
Assert.notNull(controlFlag, "cannot be null");
this.controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.valueOf(controlFlag.toUpperCase());
this.controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag
.valueOf(controlFlag.toUpperCase());
}
public Map<String, String> getOptions() {
@@ -64,4 +65,5 @@ public class JaasLoginModuleConfiguration {
public void setOptions(Map<String, String> options) {
this.options = options;
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,32 +16,42 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;
import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.core.io.Resource;
import org.springframework.expression.Expression;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Configuration properties for the Kafka binder.
* The properties in this class are prefixed with <b>spring.cloud.stream.kafka.binder</b>.
* Configuration properties for the Kafka binder. The properties in this class are
* prefixed with <b>spring.cloud.stream.kafka.binder</b>.
*
* @author David Turanski
* @author Ilayaperumal Gopinathan
@@ -49,18 +59,21 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
* @author Gary Russell
* @author Rafal Zukowski
* @author Aldo Sinanaj
* @author Lukasz Kaminski
* @author Chukwubuikem Ume-Ugwa
*/
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.binder")
public class KafkaBinderConfigurationProperties {
private static final String DEFAULT_KAFKA_CONNECTION_STRING = "localhost:9092";
private final Log logger = LogFactory.getLog(getClass());
private final Transaction transaction = new Transaction();
private final KafkaProperties kafkaProperties;
private String[] zkNodes = new String[] { "localhost" };
/**
* Arbitrary kafka properties that apply to both producers and consumers.
*/
@@ -76,48 +89,26 @@ public class KafkaBinderConfigurationProperties {
*/
private Map<String, String> producerProperties = new HashMap<>();
private String defaultZkPort = "2181";
private String[] brokers = new String[] { "localhost" };
private String defaultBrokerPort = "9092";
private String[] headers = new String[] {};
private int offsetUpdateTimeWindow = 10000;
private int offsetUpdateCount;
private int offsetUpdateShutdownTimeout = 2000;
private int maxWait = 100;
private boolean autoCreateTopics = true;
private boolean autoAlterTopics;
private boolean autoAddPartitions;
private int socketBufferSize = 2097152;
/**
* ZK session timeout in milliseconds.
*/
private int zkSessionTimeout = 10000;
/**
* ZK Connection timeout in milliseconds.
*/
private int zkConnectionTimeout = 10000;
private boolean considerDownWhenAnyPartitionHasNoLeader;
private String requiredAcks = "1";
private short replicationFactor = 1;
private int fetchSize = 1024 * 1024;
private short replicationFactor = -1;
private int minPartitionCount = 1;
private int queueSize = 8192;
/**
* Time to wait to get partition information in seconds; default 60.
*/
@@ -126,10 +117,25 @@ public class KafkaBinderConfigurationProperties {
private JaasLoginModuleConfiguration jaas;
/**
* The bean name of a custom header mapper to use instead of a {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
* The bean name of a custom header mapper to use instead of a
* {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
*/
private String headerMapperBeanName;
/**
* Time between retries after AuthorizationException is caught in
* the ListenerContainer; defalt is null which disables retries.
* For more info see: {@link org.springframework.kafka.listener.ConsumerProperties#setAuthorizationExceptionRetryInterval(java.time.Duration)}
*/
private Duration authorizationExceptionRetryInterval;
/**
* When a certificate store location is given as classpath URL (classpath:), then the binder
* moves the resource from the classpath location inside the JAR to a location on
* the filesystem. If this value is set, then this location is used, otherwise, the
* certificate file is copied to the directory returned by java.io.tmpdir.
*/
private String certificateStoreDirectory;
public KafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
Assert.notNull(kafkaProperties, "'kafkaProperties' cannot be null");
@@ -144,19 +150,61 @@ public class KafkaBinderConfigurationProperties {
return this.transaction;
}
/**
* No longer used.
* @return the connection String
* @deprecated connection to zookeeper is no longer necessary
*/
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@Deprecated
public String getZkConnectionString() {
return toConnectionString(this.zkNodes, this.defaultZkPort);
public String getKafkaConnectionString() {
// We need to do a check on certificate file locations to see if they are given as classpath resources.
// If that is the case, then we will move them to a file system location and use those as the certificate locations.
// This is due to a limitation in Kafka itself in which it doesn't allow reading certificate resources from the classpath.
// See this: https://issues.apache.org/jira/browse/KAFKA-7685
// and this: https://cwiki.apache.org/confluence/display/KAFKA/KIP-398%3A+Support+reading+trust+store+from+classpath
moveCertsToFileSystemIfNecessary();
return toConnectionString(this.brokers, this.defaultBrokerPort);
}
public String getKafkaConnectionString() {
return toConnectionString(this.brokers, this.defaultBrokerPort);
private void moveCertsToFileSystemIfNecessary() {
try {
final String trustStoreLocation = this.configuration.get("ssl.truststore.location");
if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.truststore.location", fileSystemLocation);
}
final String keyStoreLocation = this.configuration.get("ssl.keystore.location");
if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.keystore.location", fileSystemLocation);
}
}
catch (Exception e) {
throw new IllegalStateException(e);
}
}
private String moveCertToFileSystem(String classpathLocation, String fileSystemLocation) throws IOException {
File targetFile;
final String tempDir = System.getProperty("java.io.tmpdir");
Resource resource = new DefaultResourceLoader().getResource(classpathLocation);
if (StringUtils.hasText(fileSystemLocation)) {
final Path path = Paths.get(fileSystemLocation);
if (!Files.exists(path) || !Files.isDirectory(path) || !Files.isWritable(path)) {
logger.warn("The filesystem location to move the cert files (" + fileSystemLocation + ") " +
"is not found or a directory that is writable. The system temp folder (java.io.tmpdir) will be used instead.");
targetFile = new File(Paths.get(tempDir, resource.getFilename()).toString());
}
else {
// the given location is verified to be a writable directory.
targetFile = new File(Paths.get(fileSystemLocation, resource.getFilename()).toString());
}
}
else {
targetFile = new File(Paths.get(tempDir, resource.getFilename()).toString());
}
try (InputStream inputStream = resource.getInputStream()) {
Files.copy(inputStream, targetFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
}
return targetFile.getAbsolutePath();
}
public String getDefaultKafkaConnectionString() {
@@ -167,72 +215,6 @@ public class KafkaBinderConfigurationProperties {
return this.headers;
}
/**
* No longer used.
* @return the window.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateTimeWindow() {
return this.offsetUpdateTimeWindow;
}
/**
* No longer used.
* @return the count.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateCount() {
return this.offsetUpdateCount;
}
/**
* No longer used.
* @return the timeout.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateShutdownTimeout() {
return this.offsetUpdateShutdownTimeout;
}
/**
* Zookeeper nodes.
* @return the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public String[] getZkNodes() {
return this.zkNodes;
}
/**
* Zookeeper nodes.
* @param zkNodes the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkNodes(String... zkNodes) {
this.zkNodes = zkNodes;
}
/**
* Zookeeper port.
* @param defaultZkPort the port.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setDefaultZkPort(String defaultZkPort) {
this.defaultZkPort = defaultZkPort;
}
public String[] getBrokers() {
return this.brokers;
}
@@ -250,86 +232,8 @@ public class KafkaBinderConfigurationProperties {
}
/**
* No longer used.
* @param offsetUpdateTimeWindow the window.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateTimeWindow(int offsetUpdateTimeWindow) {
this.offsetUpdateTimeWindow = offsetUpdateTimeWindow;
}
/**
* No longer used.
* @param offsetUpdateCount the count.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateCount(int offsetUpdateCount) {
this.offsetUpdateCount = offsetUpdateCount;
}
/**
* No longer used.
* @param offsetUpdateShutdownTimeout the timeout.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateShutdownTimeout(int offsetUpdateShutdownTimeout) {
this.offsetUpdateShutdownTimeout = offsetUpdateShutdownTimeout;
}
/**
* Zookeeper session timeout.
* @return the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkSessionTimeout() {
return this.zkSessionTimeout;
}
/**
* Zookeeper session timeout.
* @param zkSessionTimeout the timout
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkSessionTimeout(int zkSessionTimeout) {
this.zkSessionTimeout = zkSessionTimeout;
}
/**
* Zookeeper connection timeout.
* @return the timout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkConnectionTimeout() {
return this.zkConnectionTimeout;
}
/**
* Zookeeper connection timeout.
* @param zkConnectionTimeout the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkConnectionTimeout(int zkConnectionTimeout) {
this.zkConnectionTimeout = zkConnectionTimeout;
}
/**
* Converts an array of host values to a comma-separated String.
* It will append the default port value, if not already specified.
*
* Converts an array of host values to a comma-separated String. It will append the
* default port value, if not already specified.
* @param hosts host string
* @param defaultPort port
* @return formatted connection string
@@ -347,36 +251,10 @@ public class KafkaBinderConfigurationProperties {
return StringUtils.arrayToCommaDelimitedString(fullyFormattedHosts);
}
/**
* No longer used.
* @return the wait.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getMaxWait() {
return this.maxWait;
}
/**
* No longer user.
* @param maxWait the wait.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setMaxWait(int maxWait) {
this.maxWait = maxWait;
}
public String getRequiredAcks() {
return this.requiredAcks;
}
public void setRequiredAcks(int requiredAcks) {
this.requiredAcks = String.valueOf(requiredAcks);
}
public void setRequiredAcks(String requiredAcks) {
this.requiredAcks = requiredAcks;
}
@@ -389,28 +267,6 @@ public class KafkaBinderConfigurationProperties {
this.replicationFactor = replicationFactor;
}
/**
* No longer used.
* @return the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getFetchSize() {
return this.fetchSize;
}
/**
* No longer used.
* @param fetchSize the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setFetchSize(int fetchSize) {
this.fetchSize = fetchSize;
}
public int getMinPartitionCount() {
return this.minPartitionCount;
}
@@ -427,28 +283,6 @@ public class KafkaBinderConfigurationProperties {
this.healthTimeout = healthTimeout;
}
/**
* No longer used.
* @return the queue size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getQueueSize() {
return this.queueSize;
}
/**
* No longer used.
* @param queueSize the queue size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setQueueSize(int queueSize) {
this.queueSize = queueSize;
}
public boolean isAutoCreateTopics() {
return this.autoCreateTopics;
}
@@ -457,6 +291,14 @@ public class KafkaBinderConfigurationProperties {
this.autoCreateTopics = autoCreateTopics;
}
public boolean isAutoAlterTopics() {
return autoAlterTopics;
}
public void setAutoAlterTopics(boolean autoAlterTopics) {
this.autoAlterTopics = autoAlterTopics;
}
public boolean isAutoAddPartitions() {
return this.autoAddPartitions;
}
@@ -465,30 +307,6 @@ public class KafkaBinderConfigurationProperties {
this.autoAddPartitions = autoAddPartitions;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @return the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public int getSocketBufferSize() {
return this.socketBufferSize;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @param socketBufferSize the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public void setSocketBufferSize(int socketBufferSize) {
this.socketBufferSize = socketBufferSize;
}
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -525,15 +343,19 @@ public class KafkaBinderConfigurationProperties {
Map<String, Object> consumerConfiguration = new HashMap<>();
consumerConfiguration.putAll(this.kafkaProperties.buildConsumerProperties());
// Copy configured binder properties that apply to consumers
for (Map.Entry<String, String> configurationEntry : this.configuration.entrySet()) {
for (Map.Entry<String, String> configurationEntry : this.configuration
.entrySet()) {
if (ConsumerConfig.configNames().contains(configurationEntry.getKey())) {
consumerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
consumerConfiguration.put(configurationEntry.getKey(),
configurationEntry.getValue());
}
}
consumerConfiguration.putAll(this.consumerProperties);
filterStreamManagedConfiguration(consumerConfiguration);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(consumerConfiguration, ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
return getConfigurationWithBootstrapServer(consumerConfiguration,
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
/**
@@ -546,31 +368,45 @@ public class KafkaBinderConfigurationProperties {
Map<String, Object> producerConfiguration = new HashMap<>();
producerConfiguration.putAll(this.kafkaProperties.buildProducerProperties());
// Copy configured binder properties that apply to producers
for (Map.Entry<String, String> configurationEntry : this.configuration.entrySet()) {
for (Map.Entry<String, String> configurationEntry : this.configuration
.entrySet()) {
if (ProducerConfig.configNames().contains(configurationEntry.getKey())) {
producerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
producerConfiguration.put(configurationEntry.getKey(),
configurationEntry.getValue());
}
}
producerConfiguration.putAll(this.producerProperties);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(producerConfiguration, ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
return getConfigurationWithBootstrapServer(producerConfiguration,
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
private Map<String, Object> getConfigurationWithBootstrapServer(Map<String, Object> configuration, String bootstrapServersConfig) {
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig))) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
private void filterStreamManagedConfiguration(Map<String, Object> configuration) {
if (configuration.containsKey(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG)
&& configuration.get(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG).equals(true)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG) +
ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG + "=true is not supported by the Kafka binder");
configuration.remove(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
else {
Object boostrapServersConfig = configuration.get(bootstrapServersConfig);
if (boostrapServersConfig instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) configuration
.get(bootstrapServersConfig);
if (bootStrapServers.size() == 1 && bootStrapServers.get(0).equals("localhost:9092")) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
}
if (configuration.containsKey(ConsumerConfig.GROUP_ID_CONFIG)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.GROUP_ID_CONFIG) +
"Use spring.cloud.stream.default.group or spring.cloud.stream.binding.<name>.group to specify " +
"the group instead of " + ConsumerConfig.GROUP_ID_CONFIG);
configuration.remove(ConsumerConfig.GROUP_ID_CONFIG);
}
}
private String constructIgnoredConfigMessage(String config) {
return String.format("Ignoring provided value(s) for '%s'. ", config);
}
private Map<String, Object> getConfigurationWithBootstrapServer(
Map<String, Object> configuration, String bootstrapServersConfig) {
final String kafkaConnectionString = getKafkaConnectionString();
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig)) ||
!kafkaConnectionString.equals("localhost:9092")) {
configuration.put(bootstrapServersConfig, kafkaConnectionString);
}
return Collections.unmodifiableMap(configuration);
}
@@ -591,6 +427,30 @@ public class KafkaBinderConfigurationProperties {
this.headerMapperBeanName = headerMapperBeanName;
}
public Duration getAuthorizationExceptionRetryInterval() {
return authorizationExceptionRetryInterval;
}
public void setAuthorizationExceptionRetryInterval(Duration authorizationExceptionRetryInterval) {
this.authorizationExceptionRetryInterval = authorizationExceptionRetryInterval;
}
public boolean isConsiderDownWhenAnyPartitionHasNoLeader() {
return this.considerDownWhenAnyPartitionHasNoLeader;
}
public void setConsiderDownWhenAnyPartitionHasNoLeader(boolean considerDownWhenAnyPartitionHasNoLeader) {
this.considerDownWhenAnyPartitionHasNoLeader = considerDownWhenAnyPartitionHasNoLeader;
}
public String getCertificateStoreDirectory() {
return this.certificateStoreDirectory;
}
public void setCertificateStoreDirectory(String certificateStoreDirectory) {
this.certificateStoreDirectory = certificateStoreDirectory;
}
/**
* Domain class that models transaction capabilities in Kafka.
*/
@@ -615,9 +475,10 @@ public class KafkaBinderConfigurationProperties {
}
/**
* An combination of {@link ProducerProperties} and {@link KafkaProducerProperties}
* so that common and kafka-specific properties can be set for the transactional
* An combination of {@link ProducerProperties} and {@link KafkaProducerProperties} so
* that common and kafka-specific properties can be set for the transactional
* producer.
*
* @since 2.1
*/
public static class CombinedProducerProperties {
@@ -642,8 +503,10 @@ public class KafkaBinderConfigurationProperties {
return this.producerProperties.getPartitionSelectorExpression();
}
public void setPartitionSelectorExpression(Expression partitionSelectorExpression) {
this.producerProperties.setPartitionSelectorExpression(partitionSelectorExpression);
public void setPartitionSelectorExpression(
Expression partitionSelectorExpression) {
this.producerProperties
.setPartitionSelectorExpression(partitionSelectorExpression);
}
public @Min(value = 1, message = "Partition count should be greater than zero.") int getPartitionCount() {
@@ -662,11 +525,13 @@ public class KafkaBinderConfigurationProperties {
this.producerProperties.setRequiredGroups(requiredGroups);
}
public @AssertTrue(message = "Partition key expression and partition key extractor class properties are mutually exclusive.") boolean isValidPartitionKeyProperty() {
public @AssertTrue(message = "Partition key expression and partition key extractor class properties "
+ "are mutually exclusive.") boolean isValidPartitionKeyProperty() {
return this.producerProperties.isValidPartitionKeyProperty();
}
public @AssertTrue(message = "Partition selector class and partition selector expression properties are mutually exclusive.") boolean isValidPartitionSelectorProperty() {
public @AssertTrue(message = "Partition selector class and partition selector expression "
+ "properties are mutually exclusive.") boolean isValidPartitionSelectorProperty() {
return this.producerProperties.isValidPartitionSelectorProperty();
}
@@ -699,7 +564,8 @@ public class KafkaBinderConfigurationProperties {
}
public void setPartitionKeyExtractorName(String partitionKeyExtractorName) {
this.producerProperties.setPartitionKeyExtractorName(partitionKeyExtractorName);
this.producerProperties
.setPartitionKeyExtractorName(partitionKeyExtractorName);
}
public String getPartitionSelectorName() {
@@ -766,17 +632,18 @@ public class KafkaBinderConfigurationProperties {
this.kafkaProducerProperties.setConfiguration(configuration);
}
public KafkaAdminProperties getAdmin() {
return this.kafkaProducerProperties.getAdmin();
public KafkaTopicProperties getTopic() {
return this.kafkaProducerProperties.getTopic();
}
public void setAdmin(KafkaAdminProperties admin) {
this.kafkaProducerProperties.setAdmin(admin);
public void setTopic(KafkaTopicProperties topic) {
this.kafkaProducerProperties.setTopic(topic);
}
public KafkaProducerProperties getExtension() {
return this.kafkaProducerProperties;
}
}
}
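For context, a rough sketch of the delegation pattern described above: CombinedProducerProperties keeps a ProducerProperties delegate for the common settings and a KafkaProducerProperties delegate for the Kafka-specific ones, with getExtension() returning the latter. The snippet below is illustrative only and is not part of this change; it simply mirrors the getters and setters shown in the hunks above.
// Sketch only (usual imports assumed; CombinedProducerProperties is the nested class shown above).
KafkaBinderConfigurationProperties.CombinedProducerProperties combined =
        new KafkaBinderConfigurationProperties.CombinedProducerProperties();
combined.setRequiredGroups("audit");                  // forwarded to the common ProducerProperties delegate
combined.getTopic().setReplicationFactor((short) 3);  // forwarded to the KafkaProducerProperties delegate
KafkaProducerProperties kafkaSpecific = combined.getExtension();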

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -26,10 +26,20 @@ import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
*/
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider {
/**
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
private KafkaConsumerProperties consumer = new KafkaConsumerProperties();
/**
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
private KafkaProducerProperties producer = new KafkaProducerProperties();
/**
* @return {@link KafkaConsumerProperties}
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
public KafkaConsumerProperties getConsumer() {
return this.consumer;
}
@@ -38,6 +48,10 @@ public class KafkaBindingProperties implements BinderSpecificPropertiesProvider
this.consumer = consumer;
}
/**
* @return {@link KafkaProducerProperties}
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
public KafkaProducerProperties getProducer() {
return this.producer;
}
@@ -45,4 +59,5 @@ public class KafkaBindingProperties implements BinderSpecificPropertiesProvider
public void setProducer(KafkaProducerProperties producer) {
this.producer = producer;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016-2018 the original author or authors.
* Copyright 2016-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,6 +19,7 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.kafka.listener.ContainerProperties;
/**
* Extended consumer properties for Kafka binder.
@@ -27,6 +28,7 @@ import java.util.Map;
* @author Ilayaperumal Gopinathan
* @author Soby Chacko
* @author Gary Russell
* @author Aldo Sinanaj
*
* <p>
* Thanks to Laszlo Szabo for providing the initial patch for generic property support.
@@ -38,6 +40,7 @@ public class KafkaConsumerProperties {
* Enumeration for starting consumer offset.
*/
public enum StartOffset {
/**
* Starting from earliest offset.
*/
@@ -56,12 +59,14 @@ public class KafkaConsumerProperties {
public long getReferencePoint() {
return this.referencePoint;
}
}
/**
* Standard headers for the message.
*/
public enum StandardHeaders {
/**
* No headers.
*/
@@ -78,58 +83,197 @@ public class KafkaConsumerProperties {
* Indicating both ID and timestamp headers.
*/
both
}
/**
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*/
@Deprecated
private boolean ackEachRecord;
/**
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
private boolean autoRebalanceEnabled = true;
/**
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment, of type org.springframework.kafka.support.Acknowledgment,
* is present in the inbound message. Applications may use this header for acknowledging messages.
*/
@Deprecated
private boolean autoCommitOffset = true;
/**
* Controls the container acknowledgement mode. This is the preferred way to set the ack mode on the
* container, rather than the deprecated autoCommitOffset property.
*/
private ContainerProperties.AckMode ackMode;
/**
* Effective only if autoCommitOffset is set to true.
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
* If set to true, it always auto-commits (if auto-commit is enabled).
* If not set (the default), it effectively has the same value as enableDlq,
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
*/
@Deprecated
private Boolean autoCommitOnError;
/**
* The starting offset for new groups. Allowed values: earliest and latest.
*/
private StartOffset startOffset;
/**
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
private boolean resetOffsets;
/**
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configured by setting the dlqName property.
*/
private boolean enableDlq;
/**
* The name of the DLQ topic to receive the error messages.
*/
private String dlqName;
/**
* Number of partitions to use on the DLQ.
*/
private Integer dlqPartitions;
/**
* Using this, DLQ-specific producer properties can be set.
* All the properties available through kafka producer properties can be set through this property.
*/
private KafkaProducerProperties dlqProducerProperties = new KafkaProducerProperties();
/**
* @deprecated No longer used by the binder.
*/
@Deprecated
private int recoveryInterval = 5000;
/**
* List of trusted packages to provide the header mapper.
*/
private String[] trustedPackages;
/**
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
private StandardHeaders standardHeaders = StandardHeaders.none;
/**
* The name of a bean that implements RecordMessageConverter.
*/
private String converterBeanName;
/**
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
private long idleEventInterval = 30_000;
/**
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
private boolean destinationIsPattern;
/**
* Map of key/value pairs containing generic Kafka consumer properties.
* In addition to Kafka consumer properties, other configuration properties can be passed here.
*/
private Map<String, String> configuration = new HashMap<>();
private KafkaAdminProperties admin = new KafkaAdminProperties();
/**
* Various topic level properties. @see {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Timeout used for polling in pollable consumers.
*/
private long pollTimeout = org.springframework.kafka.listener.ConsumerProperties.DEFAULT_POLL_TIMEOUT;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/**
* @return if each record needs to be acknowledged.
*
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*
* @deprecated since 3.1 in favor of using {@link #ackMode}
*/
@Deprecated
public boolean isAckEachRecord() {
return this.ackEachRecord;
}
/**
* @param ackEachRecord
*
* @deprecated in favor of using {@link #ackMode}
*/
@Deprecated
public void setAckEachRecord(boolean ackEachRecord) {
this.ackEachRecord = ackEachRecord;
}
/**
* @return is autocommit offset enabled
*
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment, of type org.springframework.kafka.support.Acknowledgment,
* is present in the inbound message. Applications may use this header for acknowledging messages.
*
* @deprecated since 3.1 in favor of using {@link #ackMode}
*/
@Deprecated
public boolean isAutoCommitOffset() {
return this.autoCommitOffset;
}
/**
* @param autoCommitOffset
*
* @deprecated in favor of using {@link #ackMode}
*/
@Deprecated
public void setAutoCommitOffset(boolean autoCommitOffset) {
this.autoCommitOffset = autoCommitOffset;
}
/**
* @return Container's ack mode.
*/
public ContainerProperties.AckMode getAckMode() {
return this.ackMode;
}
public void setAckMode(ContainerProperties.AckMode ackMode) {
this.ackMode = ackMode;
}
/**
* @return start offset
*
* The starting offset for new groups. Allowed values: earliest and latest.
*/
public StartOffset getStartOffset() {
return this.startOffset;
}
@@ -138,6 +282,12 @@ public class KafkaConsumerProperties {
this.startOffset = startOffset;
}
/**
* @return if resetting offset is enabled
*
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
public boolean isResetOffsets() {
return this.resetOffsets;
}
@@ -146,6 +296,13 @@ public class KafkaConsumerProperties {
this.resetOffsets = resetOffsets;
}
/**
* @return is DLQ enabled.
*
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configured by setting the dlqName property.
*/
public boolean isEnableDlq() {
return this.enableDlq;
}
@@ -154,10 +311,30 @@ public class KafkaConsumerProperties {
this.enableDlq = enableDlq;
}
/**
* @return is autocommit on error
*
* Effective only if autoCommitOffset is set to true.
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
* If set to true, it always auto-commits (if auto-commit is enabled).
* If not set (the default), it effectively has the same value as enableDlq,
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
*
* @deprecated in favor of using an error handler and customizing the container with that error handler.
*/
@Deprecated
public Boolean getAutoCommitOnError() {
return this.autoCommitOnError;
}
/**
*
* @param autoCommitOnError commit on error
*
* @deprecated in favor of using an error handler and customizing the container with that error handler.
*/
@Deprecated
public void setAutoCommitOnError(Boolean autoCommitOnError) {
this.autoCommitOnError = autoCommitOnError;
}
@@ -182,6 +359,12 @@ public class KafkaConsumerProperties {
this.recoveryInterval = recoveryInterval;
}
/**
* @return is auto rebalance enabled
*
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
public boolean isAutoRebalanceEnabled() {
return this.autoRebalanceEnabled;
}
@@ -190,6 +373,12 @@ public class KafkaConsumerProperties {
this.autoRebalanceEnabled = autoRebalanceEnabled;
}
/**
* @return a map of configuration
*
* Map of key/value pairs containing generic Kafka consumer properties.
* In addition to Kafka consumer properties, other configuration properties can be passed here.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -198,6 +387,11 @@ public class KafkaConsumerProperties {
this.configuration = configuration;
}
/**
* @return dlq name
*
* The name of the DLQ topic to receive the error messages.
*/
public String getDlqName() {
return this.dlqName;
}
@@ -206,6 +400,24 @@ public class KafkaConsumerProperties {
this.dlqName = dlqName;
}
/**
* @return number of partitions on the DLQ topic
*
* Number of partitions to use on the DLQ.
*/
public Integer getDlqPartitions() {
return this.dlqPartitions;
}
public void setDlqPartitions(Integer dlqPartitions) {
this.dlqPartitions = dlqPartitions;
}
/**
* @return trusted packages
*
* List of trusted packages to provide the header mapper.
*/
public String[] getTrustedPackages() {
return this.trustedPackages;
}
@@ -214,6 +426,12 @@ public class KafkaConsumerProperties {
this.trustedPackages = trustedPackages;
}
/**
* @return dlq producer properties
*
* Using this, DLQ-specific producer properties can be set.
* All the properties available through kafka producer properties can be set through this property.
*/
public KafkaProducerProperties getDlqProducerProperties() {
return this.dlqProducerProperties;
}
@@ -221,6 +439,13 @@ public class KafkaConsumerProperties {
public void setDlqProducerProperties(KafkaProducerProperties dlqProducerProperties) {
this.dlqProducerProperties = dlqProducerProperties;
}
/**
* @return standard headers
*
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
public StandardHeaders getStandardHeaders() {
return this.standardHeaders;
}
@@ -229,6 +454,11 @@ public class KafkaConsumerProperties {
this.standardHeaders = standardHeaders;
}
/**
* @return converter bean name
*
* The name of a bean that implements RecordMessageConverter.
*/
public String getConverterBeanName() {
return this.converterBeanName;
}
@@ -237,6 +467,11 @@ public class KafkaConsumerProperties {
this.converterBeanName = converterBeanName;
}
/**
* @return idle event interval
*
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
public long getIdleEventInterval() {
return this.idleEventInterval;
}
@@ -245,6 +480,11 @@ public class KafkaConsumerProperties {
this.idleEventInterval = idleEventInterval;
}
/**
* @return is destination given through a pattern
*
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
public boolean isDestinationIsPattern() {
return this.destinationIsPattern;
}
@@ -253,12 +493,43 @@ public class KafkaConsumerProperties {
this.destinationIsPattern = destinationIsPattern;
}
public KafkaAdminProperties getAdmin() {
return this.admin;
/**
* @return topic properties
*
* Various topic level properties. @see {@link KafkaTopicProperties} for more details.
*/
public KafkaTopicProperties getTopic() {
return this.topic;
}
public void setAdmin(KafkaAdminProperties admin) {
this.admin = admin;
public void setTopic(KafkaTopicProperties topic) {
this.topic = topic;
}
/**
* @return timeout in pollable consumers
*
* Timeout used for polling in pollable consumers.
*/
public long getPollTimeout() {
return this.pollTimeout;
}
public void setPollTimeout(long pollTimeout) {
this.pollTimeout = pollTimeout;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
}
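To make the relationship between the consumer properties above concrete, here is a hedged sketch of setting a few of them programmatically; values are examples only and the usual imports are assumed. In an application these objects are normally populated by the binder from configuration, not built by hand.
// Illustrative sketch only.
KafkaConsumerProperties consumerProps = new KafkaConsumerProperties();
consumerProps.setAckMode(ContainerProperties.AckMode.RECORD);    // preferred over the deprecated ackEachRecord/autoCommitOffset
consumerProps.setStartOffset(KafkaConsumerProperties.StartOffset.earliest);
consumerProps.setEnableDlq(true);                                // route failed records to a DLQ
consumerProps.setDlqName("orders.dlq");                          // example topic name
consumerProps.setDlqPartitions(1);
consumerProps.setPollTimeout(10_000L);                           // only relevant for pollable consumers
consumerProps.getConfiguration().put("max.poll.records", "250"); // arbitrary native consumer property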

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,12 +16,15 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.AbstractExtendedBindingProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Kafka specific extended binding properties class that extends from {@link AbstractExtendedBindingProperties}.
* Kafka specific extended binding properties class that extends from
* {@link AbstractExtendedBindingProperties}.
*
* @author Marius Bogoevici
* @author Gary Russell
@@ -29,8 +32,8 @@ import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
* @author Oleg Zhurakousky
*/
@ConfigurationProperties("spring.cloud.stream.kafka")
public class KafkaExtendedBindingProperties
extends AbstractExtendedBindingProperties<KafkaConsumerProperties, KafkaProducerProperties, KafkaBindingProperties> {
public class KafkaExtendedBindingProperties extends
AbstractExtendedBindingProperties<KafkaConsumerProperties, KafkaProducerProperties, KafkaBindingProperties> {
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.default";
@@ -39,8 +42,14 @@ public class KafkaExtendedBindingProperties
return DEFAULTS_PREFIX;
}
@Override
public Map<String, KafkaBindingProperties> getBindings() {
return this.doGetBindings();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaBindingProperties.class;
}
}
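As a hedged usage sketch, the Kafka-specific properties for a given binding can be looked up through the methods inherited from AbstractExtendedBindingProperties; the binding names below are examples and, in an application, the object is created and bound by Spring Boot rather than instantiated directly.
// Sketch only.
KafkaExtendedBindingProperties extendedProps = new KafkaExtendedBindingProperties();
KafkaConsumerProperties inbound = extendedProps.getExtendedConsumerProperties("process-in-0");
KafkaProducerProperties outbound = extendedProps.getExtendedProducerProperties("process-out-0");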

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -29,25 +29,92 @@ import org.springframework.expression.Expression;
* @author Marius Bogoevici
* @author Henryk Konsek
* @author Gary Russell
* @author Aldo Sinanaj
*/
public class KafkaProducerProperties {
/**
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
private int bufferSize = 16384;
/**
* Set the compression.type producer property. Supported values are none, gzip, snappy, lz4 and zstd.
* See {@link CompressionType} for more details.
*/
private CompressionType compressionType = CompressionType.none;
/**
* Whether the producer is synchronous.
*/
private boolean sync;
/**
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
private Expression sendTimeoutExpression;
/**
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
private int batchTimeout;
/**
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
private Expression messageKeyExpression;
/**
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
private String[] headerPatterns;
/**
* Map of key/value pairs containing generic Kafka producer properties.
*/
private Map<String, String> configuration = new HashMap<>();
private KafkaAdminProperties admin = new KafkaAdminProperties();
/**
* Various topic level properties. @see {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
private boolean useTopicHeader;
/**
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
private String recordMetadataChannel;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/*
* Timeout value in seconds for the duration to wait when closing the producer.
* If not set this defaults to 30 seconds.
*/
private int closeTimeout;
/**
* Set to true to allow publishing records on this binding without a transaction, even when the binder is transactional.
*/
private boolean allowNonTransactional;
/**
* @return buffer size
*
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
public int getBufferSize() {
return this.bufferSize;
}
@@ -56,6 +123,12 @@ public class KafkaProducerProperties {
this.bufferSize = bufferSize;
}
/**
* @return compression type {@link CompressionType}
*
* Set the compression.type producer property. Supported values are none, gzip, snappy, lz4 and zstd.
* See {@link CompressionType} for more details.
*/
@NotNull
public CompressionType getCompressionType() {
return this.compressionType;
@@ -65,6 +138,11 @@ public class KafkaProducerProperties {
this.compressionType = compressionType;
}
/**
* @return if synchronous sending is enabled
*
* Whether the producer is synchronous.
*/
public boolean isSync() {
return this.sync;
}
@@ -73,6 +151,25 @@ public class KafkaProducerProperties {
this.sync = sync;
}
/**
* @return timeout expression for send
*
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
public Expression getSendTimeoutExpression() {
return this.sendTimeoutExpression;
}
public void setSendTimeoutExpression(Expression sendTimeoutExpression) {
this.sendTimeoutExpression = sendTimeoutExpression;
}
/**
* @return batch timeout
*
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
public int getBatchTimeout() {
return this.batchTimeout;
}
@@ -81,6 +178,11 @@ public class KafkaProducerProperties {
this.batchTimeout = batchTimeout;
}
/**
* @return message key expression
*
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
public Expression getMessageKeyExpression() {
return this.messageKeyExpression;
}
@@ -89,6 +191,12 @@ public class KafkaProducerProperties {
this.messageKeyExpression = messageKeyExpression;
}
/**
* @return header patterns
*
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
public String[] getHeaderPatterns() {
return this.headerPatterns;
}
@@ -97,6 +205,11 @@ public class KafkaProducerProperties {
this.headerPatterns = headerPatterns;
}
/**
* @return map of configuration
*
* Map of key/value pairs containing generic Kafka producer properties.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -105,29 +218,110 @@ public class KafkaProducerProperties {
this.configuration = configuration;
}
public KafkaAdminProperties getAdmin() {
return this.admin;
/**
* @return topic properties
*
* Various topic level properties. @see {@link KafkaTopicProperties} for more details.
*/
public KafkaTopicProperties getTopic() {
return this.topic;
}
public void setAdmin(KafkaAdminProperties admin) {
this.admin = admin;
public void setTopic(KafkaTopicProperties topic) {
this.topic = topic;
}
/**
* @return if using topic header
*
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
public boolean isUseTopicHeader() {
return this.useTopicHeader;
}
public void setUseTopicHeader(boolean useTopicHeader) {
this.useTopicHeader = useTopicHeader;
}
/**
* @return record metadata channel
*
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
public String getRecordMetadataChannel() {
return this.recordMetadataChannel;
}
public void setRecordMetadataChannel(String recordMetadataChannel) {
this.recordMetadataChannel = recordMetadataChannel;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
/*
* @return timeout in seconds for closing the producer
*/
public int getCloseTimeout() {
return this.closeTimeout;
}
public void setCloseTimeout(int closeTimeout) {
this.closeTimeout = closeTimeout;
}
public boolean isAllowNonTransactional() {
return this.allowNonTransactional;
}
public void setAllowNonTransactional(boolean allowNonTransactional) {
this.allowNonTransactional = allowNonTransactional;
}
/**
* Enumeration for compression types.
*/
public enum CompressionType {
/**
* No compression.
*/
none,
/**
* gzip based compression.
*/
gzip,
/**
* snappy based compression.
*/
snappy
snappy,
/**
* lz4 compression.
*/
lz4,
/**
* zstd compression.
*/
zstd,
}
}
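A hedged sketch of how a handful of the producer settings above fit together; the values are illustrative and SpelExpressionParser plus the other usual imports are assumed. As with the consumer properties, the binder normally populates these from configuration.
// Illustrative sketch only.
KafkaProducerProperties producerProps = new KafkaProducerProperties();
producerProps.setCompressionType(KafkaProducerProperties.CompressionType.lz4);
producerProps.setBatchTimeout(50);                       // wait up to 50 ms for more records per batch
producerProps.setSync(true);                             // block until the send is acknowledged
producerProps.setSendTimeoutExpression(new SpelExpressionParser().parseExpression("10000"));
producerProps.setMessageKeyExpression(new SpelExpressionParser().parseExpression("headers['partitionKey']"));
producerProps.setUseTopicHeader(true);                   // honor KafkaHeaders.TOPIC when present
producerProps.setAllowNonTransactional(true);            // publish outside the binder transaction on this binding
producerProps.getConfiguration().put("acks", "all");     // arbitrary native producer property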

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -23,20 +23,20 @@ import java.util.Map;
/**
* Properties for configuring topics.
*
* @author Gary Russell
* @since 2.0
* @author Aldo Sinanaj
* @since 2.2
*
*/
public class KafkaAdminProperties {
public class KafkaTopicProperties {
private Short replicationFactor;
private Map<Integer, List<Integer>> replicasAssignments = new HashMap<>();
private Map<String, String> configuration = new HashMap<>();
private Map<String, String> properties = new HashMap<>();
public Short getReplicationFactor() {
return this.replicationFactor;
return replicationFactor;
}
public void setReplicationFactor(Short replicationFactor) {
@@ -44,19 +44,19 @@ public class KafkaAdminProperties {
}
public Map<Integer, List<Integer>> getReplicasAssignments() {
return this.replicasAssignments;
return replicasAssignments;
}
public void setReplicasAssignments(Map<Integer, List<Integer>> replicasAssignments) {
this.replicasAssignments = replicasAssignments;
}
public Map<String, String> getConfiguration() {
return this.configuration;
public Map<String, String> getProperties() {
return properties;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
public void setProperties(Map<String, String> properties) {
this.properties = properties;
}
}
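A hedged sketch of using the class above: replicationFactor and replicasAssignments influence topic creation, while the properties map carries broker-side topic configs. Keys and values here are examples; java.util.Arrays is assumed to be imported.
// Sketch only.
KafkaTopicProperties topicProps = new KafkaTopicProperties();
topicProps.setReplicationFactor((short) 3);
// When non-empty, replicasAssignments takes precedence over replicationFactor (see KafkaTopicProvisioner below).
topicProps.getReplicasAssignments().put(0, Arrays.asList(0, 1));   // partition 0 on brokers 0 and 1
topicProps.getProperties().put("retention.ms", "604800000");       // 7 days
topicProps.getProperties().put("cleanup.policy", "compact");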

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -18,20 +18,27 @@ package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.AlterConfigsResult;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.clients.admin.CreatePartitionsResult;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.DescribeConfigsResult;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.clients.admin.NewPartitions;
@@ -39,6 +46,7 @@ import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
@@ -47,10 +55,10 @@ import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.BinderException;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaAdminProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaTopicProperties;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
@@ -73,14 +81,18 @@ import org.springframework.util.StringUtils;
* @author Ilayaperumal Gopinathan
* @author Simon Flandergan
* @author Oleg Zhurakousky
* @author Aldo Sinanaj
*/
public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsumerProperties<KafkaConsumerProperties>,
ExtendedProducerProperties<KafkaProducerProperties>>, InitializingBean {
public class KafkaTopicProvisioner implements
// @checkstyle:off
ProvisioningProvider<ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>>,
// @checkstyle:on
InitializingBean {
private static final Log logger = LogFactory.getLog(KafkaTopicProvisioner.class);
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
private final Log logger = LogFactory.getLog(getClass());
private final KafkaBinderConfigurationProperties configurationProperties;
private final int operationTimeout = DEFAULT_OPERATION_TIMEOUT;
@@ -89,17 +101,24 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
private RetryOperations metadataRetryOperations;
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
/**
* Create an instance.
* @param kafkaBinderConfigurationProperties the binder configuration properties.
* @param kafkaProperties the boot Kafka properties used to build the
* {@link AdminClient}.
*/
public KafkaTopicProvisioner(
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
this.adminClientProperties = kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaBinderConfigurationProperties;
normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties, kafkaBinderConfigurationProperties);
normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
kafkaBinderConfigurationProperties);
}
/**
* Mutator for metadata retry operations.
*
* @param metadataRetryOperations the retry configuration
*/
public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
@@ -107,7 +126,7 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
@Override
public void afterPropertiesSet() throws Exception {
public void afterPropertiesSet() {
if (this.metadataRetryOperations == null) {
RetryTemplate retryTemplate = new RetryTemplate();
@@ -128,25 +147,35 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
public ProducerDestination provisionProducerDestination(final String name,
ExtendedProducerProperties<KafkaProducerProperties> properties) {
if (this.logger.isInfoEnabled()) {
this.logger.info("Using kafka topic for outbound: " + name);
if (logger.isInfoEnabled()) {
logger.info("Using kafka topic for outbound: " + name);
}
KafkaTopicUtils.validateTopicName(name);
try (AdminClient adminClient = AdminClient.create(this.adminClientProperties)) {
createTopic(adminClient, name, properties.getPartitionCount(), false, properties.getExtension().getAdmin());
createTopic(adminClient, name, properties.getPartitionCount(), false,
properties.getExtension().getTopic());
int partitions = 0;
Map<String, TopicDescription> topicDescriptions = new HashMap<>();
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult.all();
Map<String, TopicDescription> topicDescriptions = null;
try {
topicDescriptions = all.get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception ex) {
throw new ProvisioningException("Problems encountered with partitions finding", ex);
}
TopicDescription topicDescription = topicDescriptions.get(name);
this.metadataRetryOperations.execute(context -> {
try {
if (logger.isDebugEnabled()) {
logger.debug("Attempting to retrieve the description for the topic: " + name);
}
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
topicDescriptions.putAll(all.get(this.operationTimeout, TimeUnit.SECONDS));
}
catch (Exception ex) {
throw new ProvisioningException("Problems encountered with partitions finding for: " + name, ex);
}
return null;
});
}
TopicDescription topicDescription = topicDescriptions.get(name);
if (topicDescription != null) {
partitions = topicDescription.partitions().size();
}
return new KafkaProducerDestination(name, partitions);
@@ -154,7 +183,8 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
@Override
public ConsumerDestination provisionConsumerDestination(final String name, final String group,
public ConsumerDestination provisionConsumerDestination(final String name,
final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (!properties.isMultiplex()) {
return doProvisionConsumerDestination(name, group, properties);
@@ -168,14 +198,15 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
}
private ConsumerDestination doProvisionConsumerDestination(final String name, final String group,
private ConsumerDestination doProvisionConsumerDestination(final String name,
final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (properties.getExtension().isDestinationIsPattern()) {
Assert.isTrue(!properties.getExtension().isEnableDlq(),
"enableDLQ is not allowed when listening to topic patterns");
if (this.logger.isDebugEnabled()) {
this.logger.debug("Listening to a topic pattern - " + name
if (logger.isDebugEnabled()) {
logger.debug("Listening to a topic pattern - " + name
+ " - no provisioning performed");
}
return new KafkaConsumerDestination(name);
@@ -190,22 +221,28 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
int partitionCount = properties.getInstanceCount() * properties.getConcurrency();
ConsumerDestination consumerDestination = new KafkaConsumerDestination(name);
try (AdminClient adminClient = createAdminClient()) {
createTopic(adminClient, name, partitionCount, properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getAdmin());
createTopic(adminClient, name, partitionCount,
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult.all();
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
try {
Map<String, TopicDescription> topicDescriptions = all.get(this.operationTimeout, TimeUnit.SECONDS);
Map<String, TopicDescription> topicDescriptions = all
.get(this.operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(name);
int partitions = topicDescription.partitions().size();
consumerDestination = createDlqIfNeedBe(adminClient, name, group, properties, anonymous, partitions);
consumerDestination = createDlqIfNeedBe(adminClient, name, group,
properties, anonymous, partitions);
if (consumerDestination == null) {
consumerDestination = new KafkaConsumerDestination(name, partitions);
consumerDestination = new KafkaConsumerDestination(name,
partitions);
}
}
catch (Exception ex) {
throw new ProvisioningException("provisioning exception", ex);
throw new ProvisioningException("Provisioning exception encountered for " + name, ex);
}
}
}
@@ -217,22 +254,24 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
/**
* In general, binder properties supersede boot kafka properties.
* The one exception is the bootstrap servers. In that case, we should only override
* the boot properties if (there is a binder property AND it is a non-default value)
* OR (if there is no boot property); this is needed because the binder property
* never returns a null value.
* In general, binder properties supersede boot kafka properties. The one exception is
* the bootstrap servers. In that case, we should only override the boot properties if
* (there is a binder property AND it is a non-default value) OR (if there is no boot
* property); this is needed because the binder property never returns a null value.
* @param adminProps the admin properties to normalize.
* @param bootProps the boot kafka properties.
* @param binderProps the binder kafka properties.
*/
private void normalalizeBootPropsWithBinder(Map<String, Object> adminProps, KafkaProperties bootProps,
KafkaBinderConfigurationProperties binderProps) {
public static void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
KafkaProperties bootProps, KafkaBinderConfigurationProperties binderProps) {
// First deal with the outlier
String kafkaConnectionString = binderProps.getKafkaConnectionString();
if (ObjectUtils.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
|| !kafkaConnectionString.equals(binderProps.getDefaultKafkaConnectionString())) {
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConnectionString);
if (ObjectUtils
.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
|| !kafkaConnectionString
.equals(binderProps.getDefaultKafkaConnectionString())) {
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
// Now override any boot values with binder values
Map<String, String> binderProperties = binderProps.getConfiguration();
@@ -244,29 +283,36 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
if (adminConfigNames.contains(key)) {
Object replaced = adminProps.put(key, value);
if (replaced != null && this.logger.isDebugEnabled()) {
this.logger.debug("Overrode boot property: [" + key + "], from: [" + replaced + "] to: [" + value + "]");
if (replaced != null && KafkaTopicProvisioner.logger.isDebugEnabled()) {
KafkaTopicProvisioner.logger.debug("Overrode boot property: [" + key + "], from: ["
+ replaced + "] to: [" + value + "]");
}
}
});
}
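A small, hedged illustration of the bootstrap-servers rule above; the broker addresses and the "localhost:9092" default are example values, and the snippet only mirrors the condition in the method, it is not part of the change.
// Sketch only.
Map<String, Object> adminProps = new HashMap<>();
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "boot-broker:9092"); // value from boot properties
String binderBrokers = "binder-broker:9092";   // binder-level value (example)
String binderDefault = "localhost:9092";       // assumed binder default
if (ObjectUtils.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
        || !binderBrokers.equals(binderDefault)) {
    // the binder value is non-default, so it overrides the boot value
    adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, binderBrokers);
}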
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name, String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name,
String group, ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
if (properties.getExtension().isEnableDlq() && !anonymous) {
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName()) ?
properties.getExtension().getDlqName() : "error." + name + "." + group;
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName())
? properties.getExtension().getDlqName()
: "error." + name + "." + group;
int dlqPartitions = properties.getExtension().getDlqPartitions() == null
? partitions
: properties.getExtension().getDlqPartitions();
try {
createTopicAndPartitions(adminClient, dlqTopic, partitions,
properties.getExtension().isAutoRebalanceEnabled(), properties.getExtension().getAdmin());
createTopicAndPartitions(adminClient, dlqTopic, dlqPartitions,
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
}
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
throw new ProvisioningException("provisioning exception", throwable);
throw new ProvisioningException("Provisioning exception encountered for " + name, throwable);
}
}
return new KafkaConsumerDestination(name, partitions, dlqTopic);
@@ -274,32 +320,37 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
return null;
}
private void createTopic(AdminClient adminClient, String name, int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaAdminProperties properties) {
private void createTopic(AdminClient adminClient, String name, int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaTopicProperties properties) {
try {
createTopicIfNecessary(adminClient, name, partitionCount, tolerateLowerPartitionsOnBroker, properties);
createTopicIfNecessary(adminClient, name, partitionCount,
tolerateLowerPartitionsOnBroker, properties);
}
//TODO: Remove catching Throwable. See this thread: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
// TODO: Remove catching Throwable. See this thread:
// TODO:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
//TODO: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
throw new ProvisioningException("Provisioning exception", throwable);
// TODO:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
throw new ProvisioningException("Provisioning exception encountered for " + name, throwable);
}
}
}
private void createTopicIfNecessary(AdminClient adminClient, final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaAdminProperties properties) throws Throwable {
private void createTopicIfNecessary(AdminClient adminClient, final String topicName,
final int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaTopicProperties properties) throws Throwable {
if (this.configurationProperties.isAutoCreateTopics()) {
createTopicAndPartitions(adminClient, topicName, partitionCount, tolerateLowerPartitionsOnBroker,
properties);
createTopicAndPartitions(adminClient, topicName, partitionCount,
tolerateLowerPartitionsOnBroker, properties);
}
else {
this.logger.info("Auto creation of topics is disabled.");
logger.info("Auto creation of topics is disabled.");
}
}
@@ -309,84 +360,108 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
* @param adminClient kafka admin client
* @param topicName topic name
* @param partitionCount partition count
* @param tolerateLowerPartitionsOnBroker whether a lower partition count on the broker is tolerated or not
* @param adminProperties kafka admin properties
* @param tolerateLowerPartitionsOnBroker whether a lower partition count on the broker is
* tolerated or not
* @param topicProperties kafka topic properties
* @throws Throwable from topic creation
*/
private void createTopicAndPartitions(AdminClient adminClient, final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaAdminProperties adminProperties) throws Throwable {
private void createTopicAndPartitions(AdminClient adminClient, final String topicName,
final int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaTopicProperties topicProperties) throws Throwable {
ListTopicsResult listTopicsResult = adminClient.listTopics();
KafkaFuture<Set<String>> namesFutures = listTopicsResult.names();
Set<String> names = namesFutures.get(this.operationTimeout, TimeUnit.SECONDS);
if (names.contains(topicName)) {
//check if topic.properties are different from Topic Configuration in Kafka
if (this.configurationProperties.isAutoAlterTopics()) {
alterTopicConfigsIfNecessary(adminClient, topicName, topicProperties);
}
// only consider minPartitionCount for resizing if autoAddPartitions is true
int effectivePartitionCount = this.configurationProperties.isAutoAddPartitions()
? Math.max(this.configurationProperties.getMinPartitionCount(), partitionCount)
int effectivePartitionCount = this.configurationProperties
.isAutoAddPartitions()
? Math.max(
this.configurationProperties.getMinPartitionCount(),
partitionCount)
: partitionCount;
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(topicName));
KafkaFuture<Map<String, TopicDescription>> topicDescriptionsFuture = describeTopicsResult.all();
Map<String, TopicDescription> topicDescriptions = topicDescriptionsFuture.get(this.operationTimeout, TimeUnit.SECONDS);
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
KafkaFuture<Map<String, TopicDescription>> topicDescriptionsFuture = describeTopicsResult
.all();
Map<String, TopicDescription> topicDescriptions = topicDescriptionsFuture
.get(this.operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(topicName);
int partitionSize = topicDescription.partitions().size();
if (partitionSize < effectivePartitionCount) {
if (this.configurationProperties.isAutoAddPartitions()) {
CreatePartitionsResult partitions = adminClient.createPartitions(
Collections.singletonMap(topicName, NewPartitions.increaseTo(effectivePartitionCount)));
CreatePartitionsResult partitions = adminClient
.createPartitions(Collections.singletonMap(topicName,
NewPartitions.increaseTo(effectivePartitionCount)));
partitions.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
else if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead."
+ "There will be " + (effectivePartitionCount - partitionSize) + " idle consumers");
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead." + "There will be "
+ (effectivePartitionCount - partitionSize)
+ " idle consumers");
}
else {
throw new ProvisioningException("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead."
+ "Consider either increasing the partition count of the topic or enabling " +
"`autoAddPartitions`");
throw new ProvisioningException(
"The number of expected partitions was: " + partitionCount
+ ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead."
+ "Consider either increasing the partition count of the topic or enabling "
+ "`autoAddPartitions`");
}
}
}
else {
// always consider minPartitionCount for topic creation
final int effectivePartitionCount = Math.max(this.configurationProperties.getMinPartitionCount(),
partitionCount);
final int effectivePartitionCount = Math.max(
this.configurationProperties.getMinPartitionCount(), partitionCount);
this.metadataRetryOperations.execute((context) -> {
NewTopic newTopic;
Map<Integer, List<Integer>> replicasAssignments = adminProperties.getReplicasAssignments();
if (replicasAssignments != null && replicasAssignments.size() > 0) {
newTopic = new NewTopic(topicName, adminProperties.getReplicasAssignments());
Map<Integer, List<Integer>> replicasAssignments = topicProperties
.getReplicasAssignments();
if (replicasAssignments != null && replicasAssignments.size() > 0) {
newTopic = new NewTopic(topicName,
topicProperties.getReplicasAssignments());
}
else {
newTopic = new NewTopic(topicName, effectivePartitionCount,
adminProperties.getReplicationFactor() != null
? adminProperties.getReplicationFactor()
: this.configurationProperties.getReplicationFactor());
topicProperties.getReplicationFactor() != null
? topicProperties.getReplicationFactor()
: this.configurationProperties
.getReplicationFactor());
}
if (adminProperties.getConfiguration().size() > 0) {
newTopic.configs(adminProperties.getConfiguration());
if (topicProperties.getProperties().size() > 0) {
newTopic.configs(topicProperties.getProperties());
}
CreateTopicsResult createTopicsResult = adminClient.createTopics(Collections.singletonList(newTopic));
CreateTopicsResult createTopicsResult = adminClient
.createTopics(Collections.singletonList(newTopic));
try {
createTopicsResult.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception ex) {
if (ex instanceof ExecutionException) {
if (ex.getCause() instanceof TopicExistsException) {
if (this.logger.isWarnEnabled()) {
this.logger.warn("Attempt to create topic: " + topicName + ". Topic already exists.");
if (logger.isWarnEnabled()) {
logger.warn("Attempt to create topic: " + topicName
+ ". Topic already exists.");
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
@@ -395,59 +470,116 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
}
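For the resize branch in createTopicAndPartitions above, a worked example of how the effective partition count is derived; the numbers are illustrative only.
// minPartitionCount = 4 (binder property), requested partitionCount = 2, autoAddPartitions = true
int effectivePartitionCountExample = Math.max(4, 2);   // -> 4
// If the existing topic has fewer than 4 partitions they are increased via createPartitions();
// with autoAddPartitions = false the requested count (2) is used and, unless lower partition
// counts are tolerated, a ProvisioningException is thrown when the topic has fewer partitions.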
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable,
final String topicName) {
try {
return this.metadataRetryOperations
.execute((context) -> {
Collection<PartitionInfo> partitions = Collections.emptyList();
private void alterTopicConfigsIfNecessary(AdminClient adminClient,
String topicName,
KafkaTopicProperties topicProperties)
throws InterruptedException, ExecutionException, java.util.concurrent.TimeoutException {
ConfigResource topicConfigResource = new ConfigResource(ConfigResource.Type.TOPIC, topicName);
DescribeConfigsResult describeConfigsResult = adminClient
.describeConfigs(Collections.singletonList(topicConfigResource));
KafkaFuture<Map<ConfigResource, Config>> topicConfigurationFuture = describeConfigsResult.all();
Map<ConfigResource, Config> topicConfigMap = topicConfigurationFuture
.get(this.operationTimeout, TimeUnit.SECONDS);
Config config = topicConfigMap.get(topicConfigResource);
final List<AlterConfigOp> updatedConfigEntries = topicProperties.getProperties().entrySet().stream()
.filter(propertiesEntry -> {
// Property is new and should be added
if (config.get(propertiesEntry.getKey()) == null) {
return true;
}
else {
// Property changed and should be updated
return !config.get(propertiesEntry.getKey()).value().equals(propertiesEntry.getValue());
}
try {
//This call may return null or throw an exception.
partitions = callable.call();
})
.map(propertyEntry -> new ConfigEntry(propertyEntry.getKey(), propertyEntry.getValue()))
.map(configEntry -> new AlterConfigOp(configEntry, AlterConfigOp.OpType.SET))
.collect(Collectors.toList());
if (!updatedConfigEntries.isEmpty()) {
if (logger.isDebugEnabled()) {
logger.debug("Attempting to alter configs " + updatedConfigEntries + " for the topic:" + topicName);
}
Map<ConfigResource, Collection<AlterConfigOp>> alterConfigForTopics = new HashMap<>();
alterConfigForTopics.put(topicConfigResource, updatedConfigEntries);
AlterConfigsResult alterConfigsResult = adminClient.incrementalAlterConfigs(alterConfigForTopics);
alterConfigsResult.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
}
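As a hedged illustration of what alterTopicConfigsIfNecessary produces when a declared topic property differs from the broker value: the topic name, property, and value below are examples, and "adminClient" stands for an AdminClient created as in the methods above.
// Sketch only; matches the incremental-alter call used above.
ConfigResource exampleResource = new ConfigResource(ConfigResource.Type.TOPIC, "orders");
AlterConfigOp exampleOp = new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"),
        AlterConfigOp.OpType.SET);
adminClient.incrementalAlterConfigs(
        Collections.singletonMap(exampleResource, Collections.singletonList(exampleOp)));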
/**
* Check that the topic has the expected number of partitions and return the partition information.
* @param partitionCount the expected count.
* @param tolerateLowerPartitionsOnBroker if false, throw an exception if there are not enough partitions.
* @param callable a Callable that will provide the partition information.
* @param topicName the topic.
* @return the partition information.
*/
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable, final String topicName) {
try {
return this.metadataRetryOperations.execute((context) -> {
Collection<PartitionInfo> partitions = Collections.emptyList();
try {
// This call may return null or throw an exception.
partitions = callable.call();
}
catch (Exception ex) {
// The above call can potentially throw exceptions such as timeout. If
// we can determine
// that the exception was due to an unknown topic on the broker, just
// simply rethrow that.
if (ex instanceof UnknownTopicOrPartitionException) {
throw ex;
}
logger.error("Failed to obtain partition information", ex);
}
// In some cases, the above partition query may not throw an UnknownTopic..Exception for various reasons.
// For that, we are forcing another query to ensure that the topic is present on the server.
if (CollectionUtils.isEmpty(partitions)) {
try (AdminClient adminClient = AdminClient
.create(this.adminClientProperties)) {
final DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
describeTopicsResult.all().get();
}
catch (ExecutionException ex) {
if (ex.getCause() instanceof UnknownTopicOrPartitionException) {
throw (UnknownTopicOrPartitionException) ex.getCause();
}
catch (Exception ex) {
//The above call can potentially throw exceptions such as timeout. If we can determine
//that the exception was due to an unknown topic on the broker, just simply rethrow that.
if (ex instanceof UnknownTopicOrPartitionException) {
throw ex;
}
else {
logger.warn("No partitions have been retrieved for the topic "
+ "(" + topicName
+ "). This will affect the health check.");
}
if (CollectionUtils.isEmpty(partitions)) {
final AdminClient adminClient = AdminClient.create(this.adminClientProperties);
final DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(topicName));
try {
describeTopicsResult.all().get();
}
catch (ExecutionException ex) {
if (ex.getCause() instanceof UnknownTopicOrPartitionException) {
throw (UnknownTopicOrPartitionException)ex.getCause();
} else {
logger.warn("No partitions have been retrieved for the topic (" + topicName + "). This will affect the health check.");
}
}
}
// do a sanity check on the partition set
int partitionSize = partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead."
+ "There will be " + (partitionCount - partitionSize) + " idle consumers");
}
else {
throw new IllegalStateException("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ") + "been found instead");
}
}
return partitions;
});
}
}
// do a sanity check on the partition set
int partitionSize = CollectionUtils.isEmpty(partitions) ? 0 : partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead." + "There will be "
+ (partitionCount - partitionSize) + " idle consumers");
}
else {
throw new IllegalStateException(
"The number of expected partitions was: " + partitionCount
+ ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead");
}
}
return partitions;
});
}
catch (Exception ex) {
logger.error("Cannot initialize Binder", ex);
throw new BinderException("Cannot initialize binder:", ex);
}
}
@@ -475,11 +607,10 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
@Override
public String toString() {
return "KafkaProducerDestination{" +
"producerDestinationName='" + producerDestinationName + '\'' +
", partitions=" + partitions +
'}';
return "KafkaProducerDestination{" + "producerDestinationName='"
+ producerDestinationName + '\'' + ", partitions=" + partitions + '}';
}
}
private static final class KafkaConsumerDestination implements ConsumerDestination {
@@ -498,7 +629,8 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
this(consumerDestinationName, partitions, null);
}
KafkaConsumerDestination(String consumerDestinationName, Integer partitions, String dlqName) {
KafkaConsumerDestination(String consumerDestinationName, Integer partitions,
String dlqName) {
this.consumerDestinationName = consumerDestinationName;
this.partitions = partitions;
this.dlqName = dlqName;
@@ -511,11 +643,11 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
@Override
public String toString() {
return "KafkaConsumerDestination{" +
"consumerDestinationName='" + consumerDestinationName + '\'' +
", partitions=" + partitions +
", dlqName='" + dlqName + '\'' +
'}';
return "KafkaConsumerDestination{" + "consumerDestinationName='"
+ consumerDestinationName + '\'' + ", partitions=" + partitions
+ ", dlqName='" + dlqName + '\'' + '}';
}
}
}

View File

@@ -0,0 +1,35 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.utils;
import java.util.function.BiFunction;
import org.apache.kafka.clients.consumer.ConsumerRecord;
/**
* A {@link BiFunction} extension for defining DLQ destination resolvers.
*
* The BiFunction takes the {@link ConsumerRecord} and the exception as inputs
* and returns a topic name as the DLQ.
*
* @author Soby Chacko
* @since 3.0.9
*/
@FunctionalInterface
public interface DlqDestinationResolver extends BiFunction<ConsumerRecord<?, ?>, Exception, String> {
}
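As a quick illustration of the contract above, a resolver can be declared as a bean; the topic names and the exception-based routing below are hypothetical, and how the binder discovers and applies such a bean is outside this sketch.

import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqRoutingConfiguration {

	// Hypothetical resolver: route records that failed validation to a dedicated
	// DLQ topic and everything else to a generic per-topic DLQ.
	@Bean
	public DlqDestinationResolver dlqDestinationResolver() {
		return (consumerRecord, exception) ->
				exception instanceof IllegalArgumentException
						? consumerRecord.topic() + ".validation.dlq"
						: consumerRecord.topic() + ".dlq";
	}
}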

View File

@@ -0,0 +1,76 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.utils;
import org.apache.commons.logging.Log;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.lang.Nullable;
/**
* A TriFunction that takes a consumer group, consumer record, and throwable and returns
* which partition to publish to the dead letter topic. Returning {@code null} means Kafka
* will choose the partition.
*
* @author Gary Russell
* @since 3.0
*
*/
@FunctionalInterface
public interface DlqPartitionFunction {
/**
* Returns the same partition as the original record.
*/
DlqPartitionFunction ORIGINAL_PARTITION = (group, rec, ex) -> rec.partition();
/**
* Returns 0.
*/
DlqPartitionFunction PARTITION_ZERO = (group, rec, ex) -> 0;
/**
* Apply the function.
* @param group the consumer group.
* @param record the consumer record.
* @param throwable the exception.
* @return the DLQ partition, or null.
*/
@Nullable
Integer apply(String group, ConsumerRecord<?, ?> record, Throwable throwable);
/**
* Determine the fallback function to use based on the dlq partition count if no
* {@link DlqPartitionFunction} bean is provided.
* @param dlqPartitions the partition count.
* @param logger the logger.
* @return the fallback.
*/
static DlqPartitionFunction determineFallbackFunction(@Nullable Integer dlqPartitions, Log logger) {
if (dlqPartitions == null) {
return ORIGINAL_PARTITION;
}
else if (dlqPartitions > 1) {
logger.error("'dlqPartitions' is > 1 but a custom DlqPartitionFunction bean is not provided");
return ORIGINAL_PARTITION;
}
else {
return PARTITION_ZERO;
}
}
}
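For completeness, a hedged sketch of supplying a custom DlqPartitionFunction as a bean; the group name and partitioning rule are made up for illustration, and returning null defers the partition choice to Kafka as documented above.

import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqPartitionConfiguration {

	// Hypothetical rule: keep the original partition for the "payments" group,
	// let Kafka choose the partition (null) for every other group.
	@Bean
	public DlqPartitionFunction dlqPartitionFunction() {
		return (group, consumerRecord, throwable) ->
				"payments".equals(group) ? consumerRecord.partition() : null;
	}
}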

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -30,20 +30,19 @@ public final class KafkaTopicUtils {
}
/**
* Validate topic name.
* Allowed chars are ASCII alphanumerics, '.', '_' and '-'.
*
* Validate topic name. Allowed chars are ASCII alphanumerics, '.', '_' and '-'.
* @param topicName name of the topic
*/
public static void validateTopicName(String topicName) {
try {
byte[] utf8 = topicName.getBytes("UTF-8");
for (byte b : utf8) {
if (!((b >= 'a') && (b <= 'z') || (b >= 'A') && (b <= 'Z') || (b >= '0') && (b <= '9') || (b == '.')
|| (b == '-') || (b == '_'))) {
if (!((b >= 'a') && (b <= 'z') || (b >= 'A') && (b <= 'Z')
|| (b >= '0') && (b <= '9') || (b == '.') || (b == '-')
|| (b == '_'))) {
throw new IllegalArgumentException(
"Topic name can only have ASCII alphanumerics, '.', '_' and '-', but was: '" + topicName
+ "'");
"Topic name can only have ASCII alphanumerics, '.', '_' and '-', but was: '"
+ topicName + "'");
}
}
}
@@ -51,4 +50,5 @@ public final class KafkaTopicUtils {
throw new AssertionError(ex); // Can't happen
}
}
}
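A minimal sketch of the validation contract (the utils package is assumed from the other files in this change set): valid names return silently, invalid ones raise IllegalArgumentException.

import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;

public class TopicNameCheck {

	public static void main(String[] args) {
		// Passes: only ASCII alphanumerics, '.', '_' and '-'.
		KafkaTopicUtils.validateTopicName("orders.v1-events");
		try {
			// '/' is not in the allowed character set, so this throws.
			KafkaTopicUtils.validateTopicName("orders/v1");
		}
		catch (IllegalArgumentException ex) {
			System.out.println(ex.getMessage());
		}
	}
}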

View File

@@ -0,0 +1,145 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.assertj.core.util.Files;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaBinderConfigurationPropertiesTest {
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setGroupId("group1");
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setEnableAutoCommit(true);
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConsumerProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConsumerProps() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void testCertificateFilesAreConvertedToAbsolutePathsFromClassPathResources() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
final Map<String, String> configuration = kafkaBinderConfigurationProperties.getConfiguration();
configuration.put("ssl.truststore.location", "classpath:testclient.truststore");
configuration.put("ssl.keystore.location", "classpath:testclient.keystore");
kafkaBinderConfigurationProperties.getKafkaConnectionString();
assertThat(configuration.get("ssl.truststore.location"))
.isEqualTo(Paths.get(System.getProperty("java.io.tmpdir"), "testclient.truststore").toString());
assertThat(configuration.get("ssl.keystore.location"))
.isEqualTo(Paths.get(System.getProperty("java.io.tmpdir"), "testclient.keystore").toString());
}
@Test
public void testCertificateFilesAreConvertedToGivenAbsolutePathsFromClassPathResources() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
final Map<String, String> configuration = kafkaBinderConfigurationProperties.getConfiguration();
configuration.put("ssl.truststore.location", "classpath:testclient.truststore");
configuration.put("ssl.keystore.location", "classpath:testclient.keystore");
kafkaBinderConfigurationProperties.setCertificateStoreDirectory("target");
kafkaBinderConfigurationProperties.getKafkaConnectionString();
assertThat(configuration.get("ssl.truststore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.truststore").toString());
assertThat(configuration.get("ssl.keystore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.keystore").toString());
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -35,7 +35,6 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.fail;
/**
* @author Gary Russell
* @since 2.0
@@ -47,18 +46,27 @@ public class KafkaTopicProvisionerTests {
@Test
public void bootPropertiesOverriddenExceptServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
"PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:1234"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SSL");
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG,
"SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, ts.getFile().getAbsolutePath());
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG,
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:9092");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig, bootConfig);
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder.configs", Map.class);
assertThat(((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0)).isEqualTo("localhost:1234");
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder.configs", Map.class);
assertThat(
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
.isEqualTo("localhost:1234");
adminClient.close();
}
@@ -66,33 +74,44 @@ public class KafkaTopicProvisionerTests {
@Test
public void bootPropertiesOverriddenIncludingServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
"PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:9092"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SSL");
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG,
"SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, ts.getFile().getAbsolutePath());
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG,
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:1234");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig, bootConfig);
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder.configs", Map.class);
assertThat(((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0)).isEqualTo("localhost:1234");
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder.configs", Map.class);
assertThat(
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
.isEqualTo("localhost:1234");
adminClient.close();
}
@Test
public void brokersInvalid() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(bootConfig);
binderConfig.getConfiguration().put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "localhost:1234");
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
"localhost:1234");
try {
new KafkaTopicProvisioner(binderConfig, bootConfig);
fail("Expected illegal state");
}
catch (IllegalStateException e) {
assertThat(e.getMessage())
.isEqualTo("Set binder bootstrap servers via the 'brokers' property, not 'configuration'");
assertThat(e.getMessage()).isEqualTo(
"Set binder bootstrap servers via the 'brokers' property, not 'configuration'");
}
}

View File

@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC4</version>
<version>3.1.0</version>
</parent>
<properties>
@@ -22,6 +22,11 @@
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
@@ -49,13 +54,6 @@
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
</dependency>
<!-- Added back since Kafka still depends on it, but it has been removed by Boot due to EOL -->
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
@@ -66,24 +64,19 @@
<artifactId>spring-boot-autoconfigure-processor</artifactId>
<optional>true</optional>
</dependency>
<!-- Following dependencies are needed to support Kafka 1.1.0 client-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
<artifactId>kafka_2.13</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<!-- Following dependencies are only provided for testing and won't be packaged with the binder apps-->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<scope>test</scope>
</dependency>
<dependency>
@@ -116,4 +109,4 @@
</plugin>
</plugins>
</build>
</project>
</project>

View File

@@ -0,0 +1,579 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.regex.Pattern;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.boot.context.properties.bind.BindContext;
import org.springframework.boot.context.properties.bind.BindHandler;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.PropertySourcesPlaceholdersResolver;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.integration.support.utils.IntegrationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
* @since 3.0.0
*/
public abstract class AbstractKafkaStreamsBinderProcessor implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(AbstractKafkaStreamsBinderProcessor.class);
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final CleanupConfig cleanupConfig;
private final KeyValueSerdeResolver keyValueSerdeResolver;
protected ConfigurableApplicationContext applicationContext;
public AbstractKafkaStreamsBinderProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver, CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.cleanupConfig = cleanupConfig;
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
protected Topology.AutoOffsetReset getAutoOffsetReset(String inboundName, KafkaStreamsConsumerProperties extendedConsumerProperties) {
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties
.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
AbstractKafkaStreamsBinderProcessor.LOG.warn("Detected resetOffsets configured on binding "
+ inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
return autoOffsetReset;
}
@SuppressWarnings("unchecked")
protected void handleKTableGlobalKTableInputs(Object[] arguments, int index, String input, Class<?> parameterType, Object targetBean,
StreamsBuilderFactoryBean streamsBuilderFactoryBean, StreamsBuilder streamsBuilder,
KafkaStreamsConsumerProperties extendedConsumerProperties,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (firstBuild) {
addStateStoreBeans(streamsBuilder);
}
if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
KTable<?, ?> table = getKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper =
(KTableBoundElementFactory.KTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
GlobalKTable<?, ?> table = getGlobalKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (GlobalKTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[index] = table;
}
}
@SuppressWarnings({ "unchecked" })
protected StreamsBuilderFactoryBean buildStreamsBuilderAndRetrieveConfig(String beanNamePostPrefix,
ApplicationContext applicationContext, String inboundName,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanCustomizer customizer,
ConfigurableEnvironment environment, BindingProperties bindingProperties) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
// Use a copy because the global configuration will be shared by multiple processors.
Map<String, Object> streamConfiguration = new HashMap<>(streamConfigGlobalProperties);
if (kafkaStreamsBinderConfigurationProperties != null) {
final Map<String, KafkaStreamsBinderConfigurationProperties.Functions> functionConfigMap = kafkaStreamsBinderConfigurationProperties.getFunctions();
if (!CollectionUtils.isEmpty(functionConfigMap)) {
final KafkaStreamsBinderConfigurationProperties.Functions functionConfig = functionConfigMap.get(beanNamePostPrefix);
if (functionConfig != null) {
final Map<String, String> functionSpecificConfig = functionConfig.getConfiguration();
if (!CollectionUtils.isEmpty(functionSpecificConfig)) {
streamConfiguration.putAll(functionSpecificConfig);
}
String applicationId = functionConfig.getApplicationId();
if (!StringUtils.isEmpty(applicationId)) {
streamConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
}
}
}
final MutablePropertySources propertySources = environment.getPropertySources();
if (!StringUtils.isEmpty(bindingProperties.getBinder())) {
final KafkaStreamsBinderConfigurationProperties multiBinderKafkaStreamsBinderConfigurationProperties =
applicationContext.getBean(bindingProperties.getBinder() + "-KafkaStreamsBinderConfigurationProperties", KafkaStreamsBinderConfigurationProperties.class);
String connectionString = multiBinderKafkaStreamsBinderConfigurationProperties.getKafkaConnectionString();
if (StringUtils.isEmpty(connectionString)) {
connectionString = (String) propertySources.get(bindingProperties.getBinder() + "-kafkaStreamsBinderEnv").getProperty("spring.cloud.stream.kafka.binder.brokers");
}
streamConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, connectionString);
String binderProvidedApplicationId = multiBinderKafkaStreamsBinderConfigurationProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
streamConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG,
binderProvidedApplicationId);
}
if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndContinue) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndFail) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
SendToDlqAndContinue sendToDlqAndContinue = applicationContext.getBean(SendToDlqAndContinue.class);
streamConfiguration.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
}
if (!ObjectUtils.isEmpty(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration())) {
streamConfiguration.putAll(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration());
}
if (!streamConfiguration.containsKey(StreamsConfig.REPLICATION_FACTOR_CONFIG)) {
streamConfiguration.put(StreamsConfig.REPLICATION_FACTOR_CONFIG,
(int) multiBinderKafkaStreamsBinderConfigurationProperties.getReplicationFactor());
}
}
//this is primarily intended for StreamListener based processors. Although functions can use it in theory,
//functions should ideally use the approach taken in the if statement above, i.e. a property like
//spring.cloud.stream.kafka.streams.binder.functions.process.configuration.num.threads (assuming that process is the function name).
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
Map<String, String> bindingConfig = extendedConsumerProperties.getConfiguration();
Assert.state(!bindingConfig.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG),
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG + " cannot be overridden at the binding level; "
+ "use multiple binders instead");
streamConfigGlobalProperties.putAll(bindingConfig);
streamConfiguration
.putAll(extendedConsumerProperties.getConfiguration());
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
// We provide this for backward compatibility with StreamListener based processors.
// For function based processors see the approach used above
// (i.e. use a property like spring.cloud.stream.kafka.streams.binder.functions.process.applicationId).
if (StringUtils.hasText(bindingLevelApplicationId)) {
streamConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG,
bindingLevelApplicationId);
}
//If the application id is not set by any mechanism, then generate it.
streamConfiguration.computeIfAbsent(StreamsConfig.APPLICATION_ID_CONFIG,
k -> {
String generatedApplicationID = beanNamePostPrefix + "-applicationId";
LOG.info("Binder Generated Kafka Streams Application ID: " + generatedApplicationID);
LOG.info("Use the binder generated application ID only for development and testing. ");
LOG.info("For production deployments, please consider explicitly setting an application ID using a configuration property.");
LOG.info("The generated applicationID is static and will be preserved over application restarts.");
return generatedApplicationID;
});
handleConcurrency(applicationContext, inboundName, streamConfiguration);
// Override deserialization exception handlers per binding
final DeserializationExceptionHandler deserializationExceptionHandler =
extendedConsumerProperties.getDeserializationExceptionHandler();
if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndFail) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndContinue) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.sendToDlq) {
streamConfiguration.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
streamConfiguration.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER,
applicationContext.getBean(SendToDlqAndContinue.class));
}
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfiguration);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilderFactoryBean.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilderFactoryBean.getClass(),
() -> streamsBuilderFactoryBean)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + beanNamePostPrefix, streamsBuilderBeanDefinition);
extendedConsumerProperties.setApplicationId((String) streamConfiguration.get(StreamsConfig.APPLICATION_ID_CONFIG));
final StreamsBuilderFactoryBean streamsBuilderFactoryBeanFromContext = applicationContext.getBean(
"&stream-builder-" + beanNamePostPrefix, StreamsBuilderFactoryBean.class);
//At this point, the StreamsBuilderFactoryBean is created. If the user calls getObject()
//in the customizer, that should grant access to the StreamsBuilder.
if (customizer != null) {
customizer.configure(streamsBuilderFactoryBean);
}
return streamsBuilderFactoryBeanFromContext;
}
private void handleConcurrency(ApplicationContext applicationContext, String inboundName,
Map<String, Object> streamConfiguration) {
// This rebinding is necessary to capture the concurrency explicitly set by the application.
// This is added to fix this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/899
org.springframework.boot.context.properties.bind.Binder explicitConcurrencyResolver =
new org.springframework.boot.context.properties.bind.Binder(ConfigurationPropertySources.get(applicationContext.getEnvironment()),
new PropertySourcesPlaceholdersResolver(applicationContext.getEnvironment()),
IntegrationUtils.getConversionService(this.applicationContext.getBeanFactory()), null);
boolean[] concurrencyExplicitlyProvided = new boolean[] {false};
BindHandler handler = new BindHandler() {
@Override
public Object onSuccess(ConfigurationPropertyName name, Bindable<?> target,
BindContext context, Object result) {
if (!concurrencyExplicitlyProvided[0]) {
concurrencyExplicitlyProvided[0] = name.getLastElement(ConfigurationPropertyName.Form.UNIFORM)
.equals("concurrency") &&
ConfigurationPropertyName.of("spring.cloud.stream.bindings." + inboundName + ".consumer").isAncestorOf(name);
}
return result;
}
};
//Re-bind spring.cloud.stream properties to check if the application explicitly provided concurrency.
try {
explicitConcurrencyResolver.bind("spring.cloud.stream",
Bindable.ofInstance(new BindingServiceProperties()), handler);
}
catch (Exception e) {
// Ignore this exception
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
// Concurrency will be mapped to num.stream.threads.
// This conditional also takes into account explicit concurrency settings left at the default value of 1
// by the application to address concurrency behavior in applications with multiple processors.
// See this GH issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844
if (concurrency >= 1 && concurrencyExplicitlyProvided[0]) {
streamConfiguration.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
}
protected Serde<?> getValueSerde(String inboundName, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, ResolvableType resolvableType) {
if (bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding()) {
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
return this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), kafkaStreamsConsumerProperties, resolvableType);
}
else {
return Serdes.ByteArray();
}
}
@SuppressWarnings({"rawtypes", "unchecked"})
protected KStream<?, ?> getKStream(String inboundName, BindingProperties bindingProperties, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (firstBuild) {
addStateStoreBeans(streamsBuilder);
}
final boolean nativeDecoding = this.bindingServiceProperties
.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName
+ ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
KStream<?, ?> stream;
if (this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName).isDestinationIsPattern()) {
final Pattern pattern = Pattern.compile(this.bindingServiceProperties.getBindingDestination(inboundName));
stream = streamsBuilder.stream(pattern);
}
else {
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
this.bindingServiceProperties.getBindingDestination(inboundName));
final Serde<?> valueSerdeToUse = StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes()) ?
new Serdes.BytesSerde() : valueSerde;
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerdeToUse, autoOffsetReset);
stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
consumed);
}
//Check to see if event type based routing is enabled.
//See this issue for more context: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes())) {
AtomicBoolean matched = new AtomicBoolean();
// Processor to retrieve the header value.
stream.process(() -> new Processor() {
ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public void process(Object key, Object value) {
final Headers headers = this.context.headers();
final Iterable<Header> eventTypeHeader = headers.headers(kafkaStreamsConsumerProperties.getEventTypeHeaderKey());
if (eventTypeHeader != null && eventTypeHeader.iterator().hasNext()) {
String eventTypeFromHeader = new String(eventTypeHeader.iterator().next().value());
final String[] eventTypesFromBinding = StringUtils.commaDelimitedListToStringArray(kafkaStreamsConsumerProperties.getEventTypes());
for (String eventTypeFromBinding : eventTypesFromBinding) {
if (eventTypeFromHeader.equals(eventTypeFromBinding)) {
matched.set(true);
break;
}
}
}
}
@Override
public void close() {
}
});
// Branching based on event type match.
final KStream<?, ?>[] branch = stream.branch((key, value) -> matched.getAndSet(false));
// Deserialize if we have a branch from above.
final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(null, ((Bytes) value).get()));
return getkStream(bindingProperties, deserializedKStream, nativeDecoding);
}
return getkStream(bindingProperties, stream, nativeDecoding);
}
private KStream<?, ?> getkStream(BindingProperties bindingProperties, KStream<?, ?> stream, boolean nativeDecoding) {
if (!nativeDecoding) {
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType)) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
}
return stream;
}
@SuppressWarnings("rawtypes")
private void addStateStoreBeans(StreamsBuilder streamsBuilder) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Pass through.
}
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
consumed, getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
consumed,
getMaterialized(storeName, k, v));
}
private GlobalKTable<?, ?> getGlobalKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.globalTable(bindingDestination,
consumed);
}
private KTable<?, ?> getKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.table(bindingDestination,
consumed);
}
private <K, V> Consumed<K, V> getConsumed(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
Serde<K> keySerde, Serde<V> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
TimestampExtractor timestampExtractor = null;
if (!StringUtils.isEmpty(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName())) {
timestampExtractor = applicationContext.getBean(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName(),
TimestampExtractor.class);
}
final Consumed<K, V> consumed = Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset);
if (timestampExtractor != null) {
consumed.withTimestampExtractor(timestampExtractor);
}
return consumed;
}
}
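To connect the event-type branch in getKStream above to application code, here is a hedged sketch of a processor that would only see matching records; the binding name and the property spelling in the comment are assumptions inferred from the getEventTypes()/getEventTypeHeaderKey() accessors used in this class.

import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventTypeRoutingExample {

	// Assumed configuration (property names inferred from the consumer properties above):
	// spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypes=created,updated
	// Records whose event-type header does not match one of those values are filtered
	// out by the binder before this function runs.
	@Bean
	public Function<KStream<String, String>, KStream<String, String>> process() {
		return input -> input.peek((key, value) -> System.out.println("matched event: " + value));
	}
}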

View File

@@ -0,0 +1,43 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
/**
* Enumeration for various {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} types.
*
* @author Soby Chacko
* @since 3.0.0
*/
public enum DeserializationExceptionHandler {
/**
* Deserialization error handler with log and continue.
* See {@link org.apache.kafka.streams.errors.LogAndContinueExceptionHandler}
*/
logAndContinue,
/**
* Deserialization error handler with log and fail.
* See {@link org.apache.kafka.streams.errors.LogAndFailExceptionHandler}
*/
logAndFail,
/**
* Deserialization error handler with DLQ send.
* See {@link org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler}
*/
sendToDlq
}
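As a pointer to how these values are consumed, the following sketch mirrors the mapping performed in AbstractKafkaStreamsBinderProcessor above (sendToDlq is omitted because it additionally needs the SendToDlqAndContinue bean wired in); it is assumed to live in the same package as the enum.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;

public class DeserializationHandlerMapping {

	// Each enum value ends up as a DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG
	// entry in the Kafka Streams configuration built by the binder processor.
	static Map<String, Object> configFor(DeserializationExceptionHandler handler) {
		Map<String, Object> streamConfiguration = new HashMap<>();
		if (handler == DeserializationExceptionHandler.logAndContinue) {
			streamConfiguration.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
					LogAndContinueExceptionHandler.class);
		}
		else if (handler == DeserializationExceptionHandler.logAndFail) {
			streamConfiguration.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
					LogAndFailExceptionHandler.class);
		}
		return streamConfiguration;
	}
}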

View File

@@ -0,0 +1,72 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.context.properties.ConfigurationPropertiesBindHandlerAdvisor;
import org.springframework.boot.context.properties.bind.AbstractBindHandler;
import org.springframework.boot.context.properties.bind.BindContext;
import org.springframework.boot.context.properties.bind.BindHandler;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
/**
* {@link ConfigurationPropertiesBindHandlerAdvisor} to detect nativeEncoding/Decoding settings
* provided by the application explicitly.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class EncodingDecodingBindAdviceHandler implements ConfigurationPropertiesBindHandlerAdvisor {
private boolean encodingSettingProvided;
private boolean decodingSettingProvided;
public boolean isDecodingSettingProvided() {
return decodingSettingProvided;
}
public boolean isEncodingSettingProvided() {
return this.encodingSettingProvided;
}
@Override
public BindHandler apply(BindHandler bindHandler) {
BindHandler handler = new AbstractBindHandler(bindHandler) {
@Override
public <T> Bindable<T> onStart(ConfigurationPropertyName name,
Bindable<T> target, BindContext context) {
final String configName = name.toString();
if (configName.contains("use") && configName.contains("native") &&
(configName.contains("encoding") || configName.contains("decoding"))) {
BindResult<T> result = context.getBinder().bind(name, target);
if (result.isBound()) {
if (configName.contains("encoding")) {
EncodingDecodingBindAdviceHandler.this.encodingSettingProvided = true;
}
else {
EncodingDecodingBindAdviceHandler.this.decodingSettingProvided = true;
}
return target.withExistingValue(result.get());
}
}
return bindHandler.onStart(name, target, context);
}
};
return handler;
}
}
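A short, hedged sketch of registering the advisor; the property names in the comment are inferred from the substring checks in onStart above, not quoted from reference documentation.

import org.springframework.cloud.stream.binder.kafka.streams.EncodingDecodingBindAdviceHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EncodingDecodingAdviceConfiguration {

	// Registered as a plain bean so Boot applies it during configuration-property binding.
	// After binding, isEncodingSettingProvided()/isDecodingSettingProvided() report whether
	// the application explicitly set properties such as (names assumed)
	// spring.cloud.stream.bindings.<binding>.producer.useNativeEncoding or
	// spring.cloud.stream.bindings.<binding>.consumer.useNativeDecoding.
	@Bean
	public EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler() {
		return new EncodingDecodingBindAdviceHandler();
	}
}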

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,8 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -32,63 +30,82 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
* An {@link AbstractBinder} implementation for {@link GlobalKTable}.
*
* Provides only consumer binding for the bound {@link GlobalKTable}.
* Output bindings are not allowed on this binder.
* <p>
* Provides only consumer binding for the bound {@link GlobalKTable}. Output bindings are
* not allowed on this binder.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBinder extends
// @checkstyle:off
AbstractBinder<GlobalKTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<GlobalKTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
implements
ExtendedPropertiesBinder<GlobalKTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
public GlobalKTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties, KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
// @checkstyle:on
public GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<GlobalKTable<Object, Object>> doBindConsumer(String name, String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
protected Binding<GlobalKTable<Object, Object>> doBindConsumer(String name,
String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, getApplicationContext(),
this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties, this.kafkaStreamsDlqDispatchers);
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(),
this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget));
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<GlobalKTable<Object, Object>> doBindProducer(String name, GlobalKTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException("No producer level binding is allowed for GlobalKTable");
protected Binding<GlobalKTable<Object, Object>> doBindProducer(String name,
GlobalKTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException(
"No producer level binding is allowed for GlobalKTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
throw new UnsupportedOperationException("No producer binding is allowed and therefore no properties");
public KafkaStreamsProducerProperties getExtendedProducerProperties(
String channelName) {
throw new UnsupportedOperationException(
"No producer binding is allowed and therefore no properties");
}
@Override
@@ -98,7 +115,13 @@ public class GlobalKTableBinder extends
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
return this.kafkaStreamsExtendedBindingProperties
.getExtendedPropertiesEntryClass();
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
}
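
Because this binder only creates consumer bindings, a GlobalKTable can appear only on the input side of a function. A minimal sketch of what that looks like with the functional programming model; the bean name, key/value types, and join logic are illustrative assumptions and not part of this changeset:

import java.util.function.BiConsumer;

import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GlobalKTableUsageSketch {

    // Hypothetical function bean: the first argument is bound as a KStream input,
    // the second as a GlobalKTable input. Binding a GlobalKTable as an output is
    // not possible; doBindProducer above always throws.
    @Bean
    public BiConsumer<KStream<String, String>, GlobalKTable<String, String>> enrich() {
        return (stream, table) -> stream
                .join(table,
                        (key, value) -> key, // map each stream record to a lookup key in the global table
                        (streamValue, tableValue) -> streamValue + " / " + tableValue)
                .foreach((key, joined) -> System.out.println(key + " -> " + joined));
    }
}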


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -21,13 +21,15 @@ import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for GlobalKTable binder.
@@ -36,32 +38,50 @@ import org.springframework.context.annotation.Configuration;
* @since 2.1.0
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
MultiBinderPropertiesConfiguration.class})
public class GlobalKTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is registered as first
// and as independent from the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory.getBean("outerContext");
beanFactory.registerSingleton(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
public KafkaTopicProvisioner provisioningProvider(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public GlobalKTableBinder GlobalKTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
return new GlobalKTableBinder(binderConfigurationProperties, kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
public GlobalKTableBinder GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
GlobalKTableBinder globalKTableBinder = new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue);
globalKTableBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return globalKTableBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered as first
// and as independent from the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -23,74 +23,112 @@ import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link GlobalKTable}
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for
* {@link GlobalKTable}
*
* Only input bindings are created, since output bindings are not allowed on GlobalKTable.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBoundElementFactory extends AbstractBindingTargetFactory<GlobalKTable> {
public class GlobalKTableBoundElementFactory
extends AbstractBindingTargetFactory<GlobalKTable> {
private final BindingServiceProperties bindingServiceProperties;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
super(GlobalKTable.class);
this.bindingServiceProperties = bindingServiceProperties;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
public GlobalKTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
// @checkstyle:off
GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler wrapper = new GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class, GlobalKTable.class);
// @checkstyle:on
ProxyFactory proxyFactory = new ProxyFactory(
GlobalKTableBoundElementFactory.GlobalKTableWrapper.class,
GlobalKTable.class);
proxyFactory.addAdvice(wrapper);
return (GlobalKTable) proxyFactory.getProxy();
final GlobalKTable proxy = (GlobalKTable) proxyFactory.getProxy();
this.kafkaStreamsBindingInformationCatalogue.addBindingNamePerTarget(proxy, name);
return proxy;
}
@Override
public GlobalKTable createOutput(String name) {
throw new UnsupportedOperationException("Outbound operations are not allowed on target type GlobalKTable");
throw new UnsupportedOperationException(
"Outbound operations are not allowed on target type GlobalKTable");
}
/**
* Wrapper for GlobalKTable proxy.
*/
public interface GlobalKTableWrapper {
void wrap(GlobalKTable<Object, Object> delegate);
}
private static class GlobalKTableWrapperHandler implements GlobalKTableBoundElementFactory.GlobalKTableWrapper, MethodInterceptor {
private static class GlobalKTableWrapperHandler implements
GlobalKTableBoundElementFactory.GlobalKTableWrapper, MethodInterceptor {
private GlobalKTable<Object, Object> delegate;
public void wrap(GlobalKTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(GlobalKTable.class)) {
Assert.notNull(this.delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate, methodInvocation.getArguments());
if (methodInvocation.getMethod().getDeclaringClass()
.equals(GlobalKTable.class)) {
Assert.notNull(this.delegate,
"Trying to prepareConsumerBinding " + methodInvocation.getMethod()
+ " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate,
methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
else if (methodInvocation.getMethod().getDeclaringClass()
.equals(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this,
methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only GlobalKTable method invocations are permitted");
throw new IllegalStateException(
"Only GlobalKTable method invocations are permitted");
}
}
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,79 +16,115 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.StreamsMetadata;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
* Services pertinent to the interactive query capabilities of Kafka Streams. This class provides
* services such as querying for a particular store, which instance is hosting a particular store etc.
* This is part of the public API of the kafka streams binder and the users can inject this service in their
* applications to make use of it.
* Services pertinent to the interactive query capabilities of Kafka Streams. This class
* provides services such as querying for a particular store, which instance is hosting a
* particular store, etc. This is part of the public API of the Kafka Streams binder, and
* users can inject this service into their applications to make use of it.
*
* @author Soby Chacko
* @author Renwei Han
* @author Serhii Siryi
* @since 2.1.0
*/
public class InteractiveQueryService {
private static final Log LOG = LogFactory.getLog(InteractiveQueryService.class);
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
/**
* Constructor for InteractiveQueryService.
*
* @param kafkaStreamsRegistry holding {@link KafkaStreamsRegistry}
* @param binderConfigurationProperties kafka Streams binder configuration properties
*/
public InteractiveQueryService(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.binderConfigurationProperties = binderConfigurationProperties;
}
/**
* Retrieve and return a queryable store by name created in the application.
*
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
*/
public <T> T getQueryableStore(String storeName, QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
RetryTemplate retryTemplate = new RetryTemplate();
KafkaStreamsBinderConfigurationProperties.StateStoreRetry stateStoreRetry = this.binderConfigurationProperties.getStateStoreRetry();
RetryPolicy retryPolicy = new SimpleRetryPolicy(stateStoreRetry.getMaxAttempts());
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(stateStoreRetry.getBackoffPeriod());
retryTemplate.setBackOffPolicy(backOffPolicy);
retryTemplate.setRetryPolicy(retryPolicy);
return retryTemplate.execute(context -> {
T store = null;
final Set<KafkaStreams> kafkaStreams = InteractiveQueryService.this.kafkaStreamsRegistry.getKafkaStreams();
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
Throwable throwable = null;
while (iterator.hasNext()) {
try {
store = iterator.next().store(storeName, storeType);
}
catch (InvalidStateStoreException e) {
// pass through..
throwable = e;
}
}
catch (InvalidStateStoreException ignored) {
//pass through
if (store != null) {
return store;
}
}
return null;
throw new IllegalStateException("Error when retrieving state store: j " + storeName, throwable);
});
}
/**
* Gets the current {@link HostInfo} that the calling kafka streams application is running on.
*
* Note that the end user applications must provide `application.server` as a configuration property
* when calling this method. If this is not available, then null is returned.
* Gets the current {@link HostInfo} that the calling kafka streams application is
* running on.
*
* Note that the end user applications must provide `application.server` as a
* configuration property when calling this method. If this is not available, then
* null is returned.
* @return the current {@link HostInfo}
*/
public HostInfo getCurrentHostInfo() {
Map<String, String> configuration = this.binderConfigurationProperties.getConfiguration();
Map<String, String> configuration = this.binderConfigurationProperties
.getConfiguration();
if (configuration.containsKey("application.server")) {
String applicationServer = configuration.get("application.server");
@@ -100,13 +136,14 @@ public class InteractiveQueryService {
}
/**
* Gets the {@link HostInfo} where the provided store and key are hosted on. This may not be the
* current host that is running the application. Kafka Streams will look through all the consumer instances
* under the same application id and retrieves the proper host.
*
* Note that the end user applications must provide `application.server` as a configuration property
* for all the application instances when calling this method. If this is not available, then null may be returned.
* Gets the {@link HostInfo} on which the provided store and key are hosted. This may
* not be the current host that is running the application. Kafka Streams will look
* through all the consumer instances under the same application id and retrieve the
* proper host.
*
* Note that the end user applications must provide `application.server` as a
* configuration property for all the application instances when calling this method.
* If this is not available, then null may be returned.
* @param <K> generic type for key
* @param store store name
* @param key key to look for
@@ -114,13 +151,71 @@ public class InteractiveQueryService {
* @return the {@link HostInfo} where the key for the provided store is hosted currently
*/
public <K> HostInfo getHostInfo(String store, K key, Serializer<K> serializer) {
StreamsMetadata streamsMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
final KeyQueryMetadata keyQueryMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
.stream()
.map((k) -> Optional.ofNullable(k.metadataForKey(store, key, serializer)))
.filter(Optional::isPresent)
.map(Optional::get)
.findFirst()
.orElse(null);
return streamsMetadata != null ? streamsMetadata.hostInfo() : null;
.map((k) -> Optional.ofNullable(k.queryMetadataForKey(store, key, serializer)))
.filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
return keyQueryMetadata != null ? keyQueryMetadata.getActiveHost() : null;
}
/**
* Retrieves and returns the {@link KeyQueryMetadata} associated with the given combination of
* key and state store. If none found, it will return null.
*
* @param <K> generic type for key
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return the {@link KeyQueryMetadata} if available, null otherwise.
*/
public <K> KeyQueryMetadata getKeyQueryMetadata(String store, K key, Serializer<K> serializer) {
return this.kafkaStreamsRegistry.getKafkaStreams()
.stream()
.map((k) -> Optional.ofNullable(k.queryMetadataForKey(store, key, serializer)))
.filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
}
/**
* Retrieves and returns the {@link KafkaStreams} object that is associated with the given combination of
* key and state store. If none found, it will return null.
*
* @param <K> generic type for key
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return {@link KafkaStreams} object associated with this combination of store and key
*/
public <K> KafkaStreams getKafkaStreams(String store, K key, Serializer<K> serializer) {
final AtomicReference<KafkaStreams> kafkaStreamsAtomicReference = new AtomicReference<>();
this.kafkaStreamsRegistry.getKafkaStreams()
.forEach(k -> {
final KeyQueryMetadata keyQueryMetadata = k.queryMetadataForKey(store, key, serializer);
if (keyQueryMetadata != null) {
kafkaStreamsAtomicReference.set(k);
}
});
return kafkaStreamsAtomicReference.get();
}
/**
* Gets the list of {@link HostInfo} on which the provided store is hosted.
* The list can also include the current host info.
* Kafka Streams will look through all the consumer instances under the same application id
* and retrieve the host info for each.
*
* Note that the end-user applications must provide `application.server` as a configuration property
* for all the application instances when calling this method. If this is not available, then an empty list will be returned.
*
* @param store store name
* @return the list of {@link HostInfo} where provided store is hosted on
*/
public List<HostInfo> getAllHostsInfo(String store) {
return kafkaStreamsRegistry.getKafkaStreams()
.stream()
.flatMap(k -> k.allMetadataForStore(store).stream())
.filter(Objects::nonNull)
.map(StreamsMetadata::hostInfo)
.collect(Collectors.toList());
}
}
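
As a point of reference for the retry-aware getQueryableStore and the host lookup methods above, here is a hedged usage sketch; the REST controller, the store name ("word-counts"), and the key type are assumptions for illustration only:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CountsController {

    private final InteractiveQueryService interactiveQueryService;

    public CountsController(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    // "word-counts" is a hypothetical store name; getQueryableStore now retries with
    // the binder's configured state-store retry settings before giving up.
    @GetMapping("/counts/{word}")
    public Long count(@PathVariable String word) {
        ReadOnlyKeyValueStore<String, Long> store = interactiveQueryService
                .getQueryableStore("word-counts", QueryableStoreTypes.keyValueStore());
        HostInfo host = interactiveQueryService
                .getHostInfo("word-counts", word, Serdes.String().serializer());
        // In a multi-instance deployment the key may live on another host; a real
        // application would compare 'host' with getCurrentHostInfo() and forward the request.
        return store.get(word);
    }
}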


@@ -1,11 +1,11 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,14 +16,15 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.springframework.aop.framework.Advised;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
@@ -37,11 +38,12 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
* {@link org.springframework.cloud.stream.binder.Binder} implementation for {@link KStream}.
* This implementation extends from the {@link AbstractBinder} directly.
* {@link org.springframework.cloud.stream.binder.Binder} implementation for
* {@link KStream}. This implementation extends from the {@link AbstractBinder} directly.
* <p>
* Provides both producer and consumer bindings for the bound KStream.
*
@@ -49,15 +51,22 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
*/
class KStreamBinder extends
// @checkstyle:off
AbstractBinder<KStream<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<KStream<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
implements
ExtendedPropertiesBinder<KStream<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private static final Log LOG = LogFactory.getLog(KStreamBinder.class);
private final KafkaTopicProvisioner kafkaTopicProvisioner;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
@@ -66,75 +75,115 @@ class KStreamBinder extends
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
}
@Override
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(inputTarget, properties.getExtension());
KStream<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
KStream<Object, Object> delegate = ((KStreamBoundElementFactory.KStreamWrapperHandler)
((Advised) inputTarget).getAdvisors()[0].getAdvice()).getDelegate();
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(delegate, properties.getExtension());
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, getApplicationContext(),
this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties, this.kafkaStreamsDlqDispatchers);
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(), this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget));
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindProducer(String name, KStream<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
protected Binding<KStream<Object, Object>> doBindProducer(String name,
KStream<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties =
(ExtendedProducerProperties) properties;
this.kafkaTopicProvisioner.provisionProducerDestination(name, extendedProducerProperties);
Serde<?> keySerde = this.keyValueSerdeResolver.getOuboundKeySerde(properties.getExtension());
Serde<?> valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties, properties.getExtension());
to(properties.isUseNativeEncoding(), name, outboundBindTarget, (Serde<Object>) keySerde, (Serde<Object>) valueSerde);
Serde<?> keySerde = this.keyValueSerdeResolver
.getOuboundKeySerde(properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable(outboundBindTarget));
LOG.info("Key Serde used for (outbound) " + name + ": " + keySerde.getClass().getName());
Serde<?> valueSerde;
if (properties.isUseNativeEncoding()) {
valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties,
properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable(outboundBindTarget));
}
else {
valueSerde = Serdes.ByteArray();
}
LOG.info("Value Serde used for (outbound) " + name + ": " + valueSerde.getClass().getName());
to(properties.isUseNativeEncoding(), name, outboundBindTarget,
(Serde<Object>) keySerde, (Serde<Object>) valueSerde, properties.getExtension());
return new DefaultBinding<>(name, null, outboundBindTarget, null);
}
@SuppressWarnings("unchecked")
private void to(boolean isNativeEncoding, String name, KStream<Object, Object> outboundBindTarget,
Serde<Object> keySerde, Serde<Object> valueSerde) {
private void to(boolean isNativeEncoding, String name,
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
Serde<Object> valueSerde, KafkaStreamsProducerProperties properties) {
final Produced<Object, Object> produced = Produced.with(keySerde, valueSerde);
StreamPartitioner streamPartitioner = null;
if (!StringUtils.isEmpty(properties.getStreamPartitionerBeanName())) {
streamPartitioner = getApplicationContext().getBean(properties.getStreamPartitionerBeanName(),
StreamPartitioner.class);
}
if (streamPartitioner != null) {
produced.withStreamPartitioner(streamPartitioner);
}
if (!isNativeEncoding) {
LOG.info("Native encoding is disabled for " + name + ". Outbound message conversion done by Spring Cloud Stream.");
this.kafkaStreamsMessageConversionDelegate.serializeOnOutbound(outboundBindTarget)
.to(name, Produced.with(keySerde, valueSerde));
LOG.info("Native encoding is disabled for " + name
+ ". Outbound message conversion done by Spring Cloud Stream.");
outboundBindTarget.filter((k, v) -> v == null)
.to(name, produced);
this.kafkaStreamsMessageConversionDelegate
.serializeOnOutbound(outboundBindTarget)
.to(name, produced);
}
else {
LOG.info("Native encoding is enabled for " + name + ". Outbound serialization done at the broker.");
outboundBindTarget.to(name, Produced.with(keySerde, valueSerde));
LOG.info("Native encoding is enabled for " + name
+ ". Outbound serialization done at the broker.");
outboundBindTarget.to(name, produced);
}
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties(channelName);
public KafkaStreamsProducerProperties getExtendedProducerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedProducerProperties(channelName);
}
public void setKafkaStreamsExtendedBindingProperties(KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
@@ -145,6 +194,8 @@ class KStreamBinder extends
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
return this.kafkaStreamsExtendedBindingProperties
.getExtendedPropertiesEntryClass();
}
}
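
The new streamPartitionerBeanName producer property resolved in the to(...) method above expects a StreamPartitioner bean in the application context. A minimal sketch, assuming a binding named process-out-0 and a partitioning rule chosen purely for illustration:

import org.apache.kafka.streams.processor.StreamPartitioner;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StreamPartitionerSketch {

    // Referenced from configuration by bean name, e.g. (assumed binding name):
    // spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName=lengthPartitioner
    // KStreamBinder looks the bean up and applies it via Produced.withStreamPartitioner(...).
    @Bean
    public StreamPartitioner<String, String> lengthPartitioner() {
        return (topic, key, value, numPartitions) ->
                value == null ? 0 : value.length() % numPartitions;
    }
}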


@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,9 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
@@ -39,28 +36,32 @@ import org.springframework.context.annotation.Import;
* @author Soby Chacko
*/
@Configuration
@Import({KafkaAutoConfiguration.class})
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
MultiBinderPropertiesConfiguration.class})
public class KStreamBinderConfiguration {
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(kafkaStreamsBinderConfigurationProperties, kafkaProperties);
public KafkaTopicProvisioner provisioningProvider(
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(kafkaStreamsBinderConfigurationProperties,
kafkaProperties);
}
@Bean
public KStreamBinder kStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties, kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue,
keyValueSerdeResolver, kafkaStreamsDlqDispatchers);
kStreamBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
public KStreamBinder kStreamBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties,
kafkaTopicProvisioner, KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver);
kStreamBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return kStreamBinder;
}
@@ -69,19 +70,22 @@ public class KStreamBinderConfiguration {
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is registered as first
// It is safe to call getBean("outerContext") here, because this bean is
// registered as first
// and as independent from the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory.getBean("outerContext");
beanFactory.registerSingleton(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(KafkaStreamsMessageConversionDelegate.class.getSimpleName(), outerContext
.getBean(KafkaStreamsMessageConversionDelegate.class));
beanFactory.registerSingleton(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBindingInformationCatalogue.class));
beanFactory.registerSingleton(KeyValueSerdeResolver.class.getSimpleName(), outerContext
.getBean(KeyValueSerdeResolver.class));
beanFactory.registerSingleton(KafkaStreamsExtendedBindingProperties.class.getSimpleName(), outerContext
.getBean(KafkaStreamsExtendedBindingProperties.class));
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsMessageConversionDelegate.class.getSimpleName(),
outerContext.getBean(KafkaStreamsMessageConversionDelegate.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
beanFactory.registerSingleton(KeyValueSerdeResolver.class.getSimpleName(),
outerContext.getBean(KeyValueSerdeResolver.class));
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
};
}


@@ -1,11 +1,11 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -22,16 +22,18 @@ import org.apache.kafka.streams.kstream.KStream;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link KStream}.
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory}
* for {@link KStream}.
*
* The implementation creates proxies for both input and output binding.
* The actual target will be created downstream through further binding process.
* The implementation creates proxies for both input and output bindings. The actual target
* will be created downstream through the subsequent binding process.
*
* @author Marius Bogoevici
* @author Soby Chacko
@@ -41,18 +43,31 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
KStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
super(KStream.class);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
}
@Override
public KStream createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
return createProxyForKStream(name);
}
@@ -60,6 +75,18 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
@Override
@SuppressWarnings("unchecked")
public KStream createOutput(final String name) {
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ProducerProperties producerProperties = bindingProperties.getProducer();
if (producerProperties == null) {
producerProperties = this.bindingServiceProperties.getProducerProperties(name);
producerProperties.setUseNativeEncoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isEncodingSettingProvided()) {
producerProperties.setUseNativeEncoding(true);
}
}
return createProxyForKStream(name);
}
@@ -70,9 +97,13 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
KStream proxy = (KStream) proxyFactory.getProxy();
//Add the binding properties to the catalogue for later retrieval during further binding steps downstream.
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(proxy, bindingProperties);
// Add the binding properties to the catalogue for later retrieval during further
// binding steps downstream.
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(name);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(proxy,
bindingProperties);
this.kafkaStreamsBindingInformationCatalogue.addBindingNamePerTarget(proxy, name);
return proxy;
}
@@ -85,29 +116,41 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
private static class KStreamWrapperHandler implements KStreamWrapper, MethodInterceptor {
static class KStreamWrapperHandler
implements KStreamWrapper, MethodInterceptor {
private KStream<Object, Object> delegate;
public void wrap(KStream<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KStream.class)) {
Assert.notNull(this.delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate, methodInvocation.getArguments());
Assert.notNull(this.delegate,
"Trying to prepareConsumerBinding " + methodInvocation.getMethod()
+ " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate,
methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(KStreamWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
else if (methodInvocation.getMethod().getDeclaringClass()
.equals(KStreamWrapper.class)) {
return methodInvocation.getMethod().invoke(this,
methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KStream method invocations are permitted");
throw new IllegalStateException(
"Only KStream method invocations are permitted");
}
}
KStream<Object, Object> getDelegate() {
return delegate;
}
}
}


@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -28,21 +28,23 @@ import org.springframework.core.ResolvableType;
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerParameterAdapter implements StreamListenerParameterAdapter<KStream<?, ?>, KStream<?, ?>> {
class KStreamStreamListenerParameterAdapter
implements StreamListenerParameterAdapter<KStream<?, ?>, KStream<?, ?>> {
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue;
KStreamStreamListenerParameterAdapter(KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KStreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.KafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
}
@Override
public boolean supports(Class bindingTargetType, MethodParameter methodParameter) {
return KStream.class.isAssignableFrom(bindingTargetType)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
return KafkaStreamsBinderUtils.supportsKStream(methodParameter, bindingTargetType);
}
@Override
@@ -51,11 +53,14 @@ class KStreamStreamListenerParameterAdapter implements StreamListenerParameterAd
ResolvableType resolvableType = ResolvableType.forMethodParameter(parameter);
final Class<?> valueClass = (resolvableType.getGeneric(1).getRawClass() != null)
? (resolvableType.getGeneric(1).getRawClass()) : Object.class;
if (this.KafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(bindingTarget)) {
if (this.KafkaStreamsBindingInformationCatalogue
.isUseNativeDecoding(bindingTarget)) {
return bindingTarget;
}
else {
return this.kafkaStreamsMessageConversionDelegate.deserializeOnInbound(valueClass, bindingTarget);
return this.kafkaStreamsMessageConversionDelegate
.deserializeOnInbound(valueClass, bindingTarget);
}
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -29,16 +29,19 @@ import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerResultAdapter implements StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
class KStreamStreamListenerResultAdapter implements
StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
@Override
public boolean supports(Class<?> resultType, Class<?> boundElement) {
return KStream.class.isAssignableFrom(resultType) && KStream.class.isAssignableFrom(boundElement);
return KStream.class.isAssignableFrom(resultType)
&& KStream.class.isAssignableFrom(boundElement);
}
@Override
@SuppressWarnings("unchecked")
public Closeable adapt(KStream streamListenerResult, KStreamBoundElementFactory.KStreamWrapper boundElement) {
public Closeable adapt(KStream streamListenerResult,
KStreamBoundElementFactory.KStreamWrapper boundElement) {
boundElement.wrap(streamListenerResult);
return new NoOpCloseable();
}
@@ -51,4 +54,5 @@ class KStreamStreamListenerResultAdapter implements StreamListenerResultAdapter<
}
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,8 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -32,62 +30,86 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
* {@link org.springframework.cloud.stream.binder.Binder} implementation for {@link KTable}.
* This implementation extends from the {@link AbstractBinder} directly.
* {@link org.springframework.cloud.stream.binder.Binder} implementation for
* {@link KTable}. This implementation extends from the {@link AbstractBinder} directly.
*
* Provides only consumer binding for the bound KTable as output bindings are not allowed on it.
* Provides only consumer binding for the bound KTable as output bindings are not allowed
* on it.
*
* @author Soby Chacko
*/
class KTableBinder extends
// @checkstyle:off
AbstractBinder<KTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<KTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
implements
ExtendedPropertiesBinder<KTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties, KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
// @checkstyle:on
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KTable<Object, Object>> doBindConsumer(String name, String group, KTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
protected Binding<KTable<Object, Object>> doBindConsumer(String name, String group,
KTable<Object, Object> inputTarget,
// @checkstyle:off
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, getApplicationContext(),
this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties, this.kafkaStreamsDlqDispatchers);
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(), this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget));
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<KTable<Object, Object>> doBindProducer(String name, KTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException("No producer level binding is allowed for KTable");
protected Binding<KTable<Object, Object>> doBindProducer(String name,
KTable<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
throw new UnsupportedOperationException(
"No producer level binding is allowed for KTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties(channelName);
public KafkaStreamsProducerProperties getExtendedProducerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedProducerProperties(channelName);
}
@Override
@@ -97,6 +119,12 @@ class KTableBinder extends
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
return this.kafkaStreamsExtendedBindingProperties
.getExtendedPropertiesEntryClass();
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -21,13 +21,15 @@ import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for KTable binder.
@@ -36,33 +38,49 @@ import org.springframework.context.annotation.Configuration;
*/
@SuppressWarnings("ALL")
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
MultiBinderPropertiesConfiguration.class})
public class KTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is registered first
// and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory.getBean("outerContext");
beanFactory.registerSingleton(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
public KafkaTopicProvisioner provisioningProvider(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public KTableBinder kTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KTableBinder kStreamBinder = new KTableBinder(binderConfigurationProperties, kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
return kStreamBinder;
public KTableBinder kTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
KTableBinder kTableBinder = new KTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue);
kTableBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
return kTableBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -23,11 +23,13 @@ import org.apache.kafka.streams.kstream.KTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link KTable}
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for
* {@link KTable}
*
* Only input bindings are created, since output bindings are not allowed on KTable.
*
@@ -36,61 +38,92 @@ import org.springframework.util.Assert;
class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
private final BindingServiceProperties bindingServiceProperties;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
super(KTable.class);
this.bindingServiceProperties = bindingServiceProperties;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
public KTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
KTableBoundElementFactory.KTableWrapperHandler wrapper = new KTableBoundElementFactory.KTableWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(KTableBoundElementFactory.KTableWrapper.class, KTable.class);
ProxyFactory proxyFactory = new ProxyFactory(
KTableBoundElementFactory.KTableWrapper.class, KTable.class);
proxyFactory.addAdvice(wrapper);
return (KTable) proxyFactory.getProxy();
final KTable proxy = (KTable) proxyFactory.getProxy();
this.kafkaStreamsBindingInformationCatalogue.addBindingNamePerTarget(proxy, name);
return proxy;
}
@Override
@SuppressWarnings("unchecked")
public KTable createOutput(final String name) {
throw new UnsupportedOperationException("Outbound operations are not allowed on target type KTable");
throw new UnsupportedOperationException(
"Outbound operations are not allowed on target type KTable");
}
/**
* Wrapper for KTable proxy.
*/
public interface KTableWrapper {
void wrap(KTable<Object, Object> delegate);
}
private static class KTableWrapperHandler implements KTableBoundElementFactory.KTableWrapper, MethodInterceptor {
private static class KTableWrapperHandler
implements KTableBoundElementFactory.KTableWrapper, MethodInterceptor {
private KTable<Object, Object> delegate;
public void wrap(KTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KTable.class)) {
Assert.notNull(this.delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate, methodInvocation.getArguments());
Assert.notNull(this.delegate,
"Trying to prepareConsumerBinding " + methodInvocation.getMethod()
+ " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate,
methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(KTableBoundElementFactory.KTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
else if (methodInvocation.getMethod().getDeclaringClass()
.equals(KTableBoundElementFactory.KTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this,
methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KTable method invocations are permitted");
throw new IllegalStateException(
"Only KTable method invocations are permitted");
}
}
}
}
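
A hedged sketch of how an application consumes such an input binding (bean name and type parameters are illustrative, assuming the Kafka Streams binder and the functional programming model): only the inbound side can be a KTable, so any result is emitted as a KStream.

import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UppercaseTableConfig {
    // The KTable argument is the proxy created by KTableBoundElementFactory.createInput();
    // the binder supplies the real KTable later through KTableWrapper.wrap(..).
    @Bean
    public Function<KTable<String, String>, KStream<String, String>> uppercase() {
        return table -> table.toStream().mapValues(value -> value.toUpperCase());
    }
}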

View File

@@ -1,43 +0,0 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Application support configuration for Kafka Streams binder.
*
* @author Soby Chacko
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public class KafkaStreamsApplicationSupportAutoConfiguration {
@Bean
@ConditionalOnProperty("spring.cloud.stream.kafka.streams.timeWindow.length")
public TimeWindows configuredTimeWindow(KafkaStreamsApplicationSupportProperties processorProperties) {
return processorProperties.getTimeWindow().getAdvanceBy() > 0
? TimeWindows.of(processorProperties.getTimeWindow().getLength()).advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
: TimeWindows.of(processorProperties.getTimeWindow().getLength());
}
}

View File

@@ -0,0 +1,186 @@
/*
* Copyright 2019-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Health indicator for Kafka Streams.
*
* @author Arnaud Jardiné
* @author Soby Chacko
*/
public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator implements DisposableBean {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties configurationProperties;
private final Map<String, Object> adminClientProperties;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private static final ThreadLocal<Status> healthStatusThreadLocal = new ThreadLocal<>();
private AdminClient adminClient;
private final Lock lock = new ReentrantLock();
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
super("Kafka-streams health check failed");
kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaStreamsBinderConfigurationProperties;
this.adminClientProperties = kafkaProperties.buildAdminProperties();
KafkaTopicProvisioner.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
kafkaStreamsBinderConfigurationProperties);
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
protected void doHealthCheck(Health.Builder builder) throws Exception {
try {
this.lock.lock();
if (this.adminClient == null) {
this.adminClient = AdminClient.create(this.adminClientProperties);
}
final Status status = healthStatusThreadLocal.get();
//If one of the Kafka Streams binders (kstream, ktable, globalktable) was already found to be DOWN
//during the same request, retrieve that status from the thread-local storage where it was saved.
//Each Kafka Streams binder runs its own health check, so reusing the known DOWN status avoids
//repeating the broker check and inflating the total health-check duration.
if (status == Status.DOWN) {
builder.withDetail("No topic information available", "Kafka broker is not reachable");
builder.status(Status.DOWN);
}
else {
final ListTopicsResult listTopicsResult = this.adminClient.listTopics();
listTopicsResult.listings().get(this.configurationProperties.getHealthTimeout(), TimeUnit.SECONDS);
if (this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans().isEmpty()) {
builder.withDetail("No Kafka Streams bindings have been established", "Kafka Streams binder did not detect any processors");
builder.status(Status.UNKNOWN);
}
else {
boolean up = true;
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
up &= kStream.state().isRunningOrRebalancing();
builder.withDetails(buildDetails(kStream));
}
builder.status(up ? Status.UP : Status.DOWN);
}
}
}
catch (Exception e) {
builder.withDetail("No topic information available", "Kafka broker is not reachable");
builder.status(Status.DOWN);
builder.withException(e);
//Store the binder's DOWN status in thread-local storage.
healthStatusThreadLocal.set(Status.DOWN);
}
finally {
this.lock.unlock();
}
}
private Map<String, Object> buildDetails(KafkaStreams kafkaStreams) {
final Map<String, Object> details = new HashMap<>();
final Map<String, Object> perAppdIdDetails = new HashMap<>();
if (kafkaStreams.state().isRunningOrRebalancing()) {
for (ThreadMetadata metadata : kafkaStreams.localThreadsMetadata()) {
perAppdIdDetails.put("threadName", metadata.threadName());
perAppdIdDetails.put("threadState", metadata.threadState());
perAppdIdDetails.put("adminClientId", metadata.adminClientId());
perAppdIdDetails.put("consumerClientId", metadata.consumerClientId());
perAppdIdDetails.put("restoreConsumerClientId", metadata.restoreConsumerClientId());
perAppdIdDetails.put("producerClientIds", metadata.producerClientIds());
perAppdIdDetails.put("activeTasks", taskDetails(metadata.activeTasks()));
perAppdIdDetails.put("standbyTasks", taskDetails(metadata.standbyTasks()));
}
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
details.put(applicationId, perAppdIdDetails);
}
else {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
details.put(applicationId, String.format("The processor with application.id %s is down", applicationId));
}
return details;
}
private static Map<String, Object> taskDetails(Set<TaskMetadata> taskMetadata) {
final Map<String, Object> details = new HashMap<>();
for (TaskMetadata metadata : taskMetadata) {
details.put("taskId", metadata.taskId());
if (details.containsKey("partitions")) {
@SuppressWarnings("unchecked")
List<String> partitionsInfo = (List<String>) details.get("partitions");
partitionsInfo.addAll(addPartitionsInfo(metadata));
}
else {
details.put("partitions",
addPartitionsInfo(metadata));
}
}
return details;
}
private static List<String> addPartitionsInfo(TaskMetadata metadata) {
return metadata.topicPartitions().stream().map(
p -> "partition=" + p.partition() + ", topic=" + p.topic())
.collect(Collectors.toList());
}
@Override
public void destroy() throws Exception {
if (adminClient != null) {
adminClient.close(Duration.ofSeconds(0));
}
}
}

View File

@@ -0,0 +1,48 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.actuate.autoconfigure.health.ConditionalOnEnabledHealthIndicator;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Configuration class for Kafka-streams binder health indicator beans.
*
* @author Arnaud Jardiné
*/
@Configuration
@ConditionalOnClass(name = "org.springframework.boot.actuate.health.HealthIndicator")
@ConditionalOnEnabledHealthIndicator("binders")
class KafkaStreamsBinderHealthIndicatorConfiguration {
@Bean
@ConditionalOnBean(KafkaStreamsRegistry.class)
KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
KafkaStreamsRegistry kafkaStreamsRegistry, @Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties, KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry, kafkaStreamsBinderConfigurationProperties,
kafkaProperties, kafkaStreamsBindingInformationCatalogue);
}
}

View File

@@ -0,0 +1,238 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.function.ToDoubleFunction;
import io.micrometer.core.instrument.FunctionCounter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Kafka Streams binder metrics implementation that exports the metrics available
* through {@link KafkaStreams#metrics()} into a micrometer {@link io.micrometer.core.instrument.MeterRegistry}.
*
* Boot 2.2 users need to rely on this class for Kafka Streams metrics instead of direct support from Micrometer.
* Micrometer added Kafka Streams metrics support in 1.4.0, which Boot 2.3 includes; for users on Boot 2.3,
* this class is not activated (see the various conditionals used in the configuration).
*
* For the most part, this class is a copy of the Micrometer Kafka Streams support that was added in version 1.4.0.
* We will keep this class as long as we support Boot 2.2.x.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class KafkaStreamsBinderMetrics {
static final String DEFAULT_VALUE = "unknown";
static final String CLIENT_ID_TAG_NAME = "client-id";
static final String METRIC_GROUP_APP_INFO = "app-info";
static final String VERSION_METRIC_NAME = "version";
static final String START_TIME_METRIC_NAME = "start-time-ms";
static final String KAFKA_VERSION_TAG_NAME = "kafka-version";
static final String METRIC_NAME_PREFIX = "kafka.";
static final String METRIC_GROUP_METRICS_COUNT = "kafka-metrics-count";
private String kafkaVersion = DEFAULT_VALUE;
private String clientId = DEFAULT_VALUE;
private final MeterRegistry meterRegistry;
private MeterBinder meterBinder;
private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
private volatile Set<MetricName> currentMeters = new HashSet<>();
public KafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
}
public void bindTo(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans) {
if (this.meterBinder == null) {
this.meterBinder = registry -> {
if (streamsBuilderFactoryBeans != null) {
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
final Map<MetricName, ? extends Metric> metrics = kafkaStreams.metrics();
prepareToBindMetrics(registry, metrics);
checkAndBindMetrics(registry, metrics);
}
}
};
}
this.meterBinder.bindTo(this.meterRegistry);
}
public void addMetrics(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans) {
synchronized (KafkaStreamsBinderMetrics.this) {
this.bindTo(streamsBuilderFactoryBeans);
}
}
void prepareToBindMetrics(MeterRegistry registry, Map<MetricName, ? extends Metric> metrics) {
Metric startTime = null;
for (Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
MetricName name = entry.getKey();
if (clientId.equals(DEFAULT_VALUE) && name.tags().get(CLIENT_ID_TAG_NAME) != null) {
clientId = name.tags().get(CLIENT_ID_TAG_NAME);
}
if (METRIC_GROUP_APP_INFO.equals(name.group())) {
if (VERSION_METRIC_NAME.equals(name.name())) {
kafkaVersion = (String) entry.getValue().metricValue();
}
else if (START_TIME_METRIC_NAME.equals(name.name())) {
startTime = entry.getValue();
}
}
}
if (startTime != null) {
bindMeter(registry, startTime, meterName(startTime), meterTags(startTime));
}
}
private void bindMeter(MeterRegistry registry, Metric metric, String name, Iterable<Tag> tags) {
if (name.endsWith("total") || name.endsWith("count")) {
registerCounter(registry, metric, name, tags);
}
else {
registerGauge(registry, metric, name, tags);
}
}
private void registerCounter(MeterRegistry registry, Metric metric, String name, Iterable<Tag> tags) {
FunctionCounter.builder(name, metric, toMetricValue())
.tags(tags)
.description(metric.metricName().description())
.register(registry);
}
private ToDoubleFunction<Metric> toMetricValue() {
return metric -> ((Number) metric.metricValue()).doubleValue();
}
private void registerGauge(MeterRegistry registry, Metric metric, String name, Iterable<Tag> tags) {
Gauge.builder(name, metric, toMetricValue())
.tags(tags)
.description(metric.metricName().description())
.register(registry);
}
private List<Tag> meterTags(Metric metric) {
return meterTags(metric, false);
}
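//For illustration (derived from meterName below): a Kafka metric in group "stream-thread-metrics"
//named "commit-latency-avg" becomes the Micrometer meter "kafka.stream.thread.commit.latency.avg".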
private String meterName(Metric metric) {
String name = METRIC_NAME_PREFIX + metric.metricName().group() + "." + metric.metricName().name();
return name.replaceAll("-metrics", "").replaceAll("-", ".");
}
private List<Tag> meterTags(Metric metric, boolean includeCommonTags) {
List<Tag> tags = new ArrayList<>();
metric.metricName().tags().forEach((key, value) -> tags.add(Tag.of(key, value)));
tags.add(Tag.of(KAFKA_VERSION_TAG_NAME, kafkaVersion));
return tags;
}
private boolean differentClient(List<Tag> tags) {
for (Tag tag : tags) {
if (tag.getKey().equals(CLIENT_ID_TAG_NAME)) {
if (!clientId.equals(tag.getValue())) {
return true;
}
}
}
return false;
}
void checkAndBindMetrics(MeterRegistry registry, Map<MetricName, ? extends Metric> metrics) {
if (!currentMeters.equals(metrics.keySet())) {
currentMeters = new HashSet<>(metrics.keySet());
metrics.forEach((name, metric) -> {
//Filter out non-numeric values
if (!(metric.metricValue() instanceof Number)) {
return;
}
//Filter out metrics from groups that include metadata
if (METRIC_GROUP_APP_INFO.equals(name.group())) {
return;
}
if (METRIC_GROUP_METRICS_COUNT.equals(name.group())) {
return;
}
String meterName = meterName(metric);
List<Tag> meterTagsWithCommonTags = meterTags(metric, true);
//Kafka exposes some metrics both with and without certain tags (e.g. with/without a topic or partition tag).
//Remove meters that carry fewer tags than the one being registered.
boolean hasLessTags = false;
for (Meter other : registry.find(meterName).meters()) {
List<Tag> tags = other.getId().getTags();
// Only consider meters from the same client before filtering
if (differentClient(tags)) {
break;
}
if (tags.size() < meterTagsWithCommonTags.size()) {
registry.remove(other);
}
// Check if already exists
else if (tags.size() == meterTagsWithCommonTags.size()) {
if (tags.equals(meterTagsWithCommonTags)) {
return;
}
else {
break;
}
}
else {
hasLessTags = true;
}
}
if (hasLessTags) {
return;
}
bindMeter(registry, metric, meterName, meterTags(metric));
});
}
}
}
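
A hedged usage sketch of the public API above (the manual wiring is an assumption made for illustration; within the binder this binding is driven through StreamsBuilderFactoryManager, as shown elsewhere in this changeset): binding the metrics of a started StreamsBuilderFactoryBean to a Micrometer registry.

import java.util.Collections;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderMetrics;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;

public class BinderMetricsSketch {
    // Registers gauges and counters for every KafkaStreams instance behind the given factory bean.
    public static void bind(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
        SimpleMeterRegistry meterRegistry = new SimpleMeterRegistry();
        KafkaStreamsBinderMetrics binderMetrics = new KafkaStreamsBinderMetrics(meterRegistry);
        binderMetrics.addMetrics(Collections.singleton(streamsBuilderFactoryBean));
    }
}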

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,41 +16,71 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Constructor;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import io.micrometer.core.instrument.ImmutableTag;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.binder.kafka.KafkaStreamsMetrics;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingClass;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.boot.context.properties.bind.PropertySourcesPlaceholdersResolver;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.cloud.stream.binder.BinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.function.FunctionDetectorCondition;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.serde.CompositeNonNativeSerde;
import org.springframework.cloud.stream.binder.kafka.streams.serde.MessageConverterDelegateSerde;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BinderProperties;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.integration.context.IntegrationContextUtils;
import org.springframework.integration.support.utils.IntegrationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.lang.Nullable;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.util.ObjectUtils;
import org.springframework.util.ReflectionUtils;
import org.springframework.util.StringUtils;
/**
@@ -60,56 +90,80 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
* @author Gary Russell
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsExtendedBindingProperties.class)
@ConditionalOnBean(BindingService.class)
@AutoConfigureAfter(BindingServiceConfiguration.class)
public class KafkaStreamsBinderSupportAutoConfiguration {
private static final String KSTREAM_BINDER_TYPE = "kstream";
private static final String KTABLE_BINDER_TYPE = "ktable";
private static final String GLOBALKTABLE_BINDER_TYPE = "globalktable";
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(KafkaProperties kafkaProperties,
ConfigurableEnvironment environment,
BindingServiceProperties bindingServiceProperties) {
final Map<String, BinderConfiguration> binderConfigurations = getBinderConfigurations(bindingServiceProperties);
for (Map.Entry<String, BinderConfiguration> entry : binderConfigurations.entrySet()) {
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(
KafkaProperties kafkaProperties, ConfigurableEnvironment environment,
BindingServiceProperties properties, ConfigurableApplicationContext context) throws Exception {
final Map<String, BinderConfiguration> binderConfigurations = getBinderConfigurations(
properties);
for (Map.Entry<String, BinderConfiguration> entry : binderConfigurations
.entrySet()) {
final BinderConfiguration binderConfiguration = entry.getValue();
final String binderType = binderConfiguration.getBinderType();
if (binderType.equals(KSTREAM_BINDER_TYPE) ||
binderType.equals(KTABLE_BINDER_TYPE) ||
binderType.equals(GLOBALKTABLE_BINDER_TYPE)) {
if (binderType != null && (binderType.equals(KSTREAM_BINDER_TYPE)
|| binderType.equals(KTABLE_BINDER_TYPE)
|| binderType.equals(GLOBALKTABLE_BINDER_TYPE))) {
Map<String, Object> binderProperties = new HashMap<>();
this.flatten(null, binderConfiguration.getProperties(), binderProperties);
environment.getPropertySources().addFirst(new MapPropertySource("kafkaStreamsBinderEnv", binderProperties));
environment.getPropertySources().addFirst(
new MapPropertySource(entry.getKey() + "-kafkaStreamsBinderEnv", binderProperties));
Binder binder = new Binder(ConfigurationPropertySources.get(environment),
new PropertySourcesPlaceholdersResolver(environment),
IntegrationUtils.getConversionService(context.getBeanFactory()), null);
final Constructor<KafkaStreamsBinderConfigurationProperties> kafkaStreamsBinderConfigurationPropertiesConstructor =
ReflectionUtils.accessibleConstructor(KafkaStreamsBinderConfigurationProperties.class, KafkaProperties.class);
final KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties =
BeanUtils.instantiateClass(kafkaStreamsBinderConfigurationPropertiesConstructor, kafkaProperties);
final BindResult<KafkaStreamsBinderConfigurationProperties> bind = binder.bind("spring.cloud.stream.kafka.streams.binder", Bindable.ofInstance(kafkaStreamsBinderConfigurationProperties));
context.getBeanFactory().registerSingleton(
entry.getKey() + "-KafkaStreamsBinderConfigurationProperties",
bind.get());
}
}
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
//TODO: Lifted from core - good candidate for exposing as a utility method in core.
private static Map<String, BinderConfiguration> getBinderConfigurations(BindingServiceProperties bindingServiceProperties) {
// TODO: Lifted from core - good candidate for exposing as a utility method in core.
private static Map<String, BinderConfiguration> getBinderConfigurations(
BindingServiceProperties properties) {
Map<String, BinderConfiguration> binderConfigurations = new HashMap<>();
Map<String, BinderProperties> declaredBinders = bindingServiceProperties.getBinders();
Map<String, BinderProperties> declaredBinders = properties.getBinders();
for (Map.Entry<String, BinderProperties> binderEntry : declaredBinders.entrySet()) {
for (Map.Entry<String, BinderProperties> binderEntry : declaredBinders
.entrySet()) {
BinderProperties binderProperties = binderEntry.getValue();
binderConfigurations.put(binderEntry.getKey(),
new BinderConfiguration(binderProperties.getType(), binderProperties.getEnvironment(),
binderProperties.isInheritEnvironment(), binderProperties.isDefaultCandidate()));
new BinderConfiguration(binderProperties.getType(),
binderProperties.getEnvironment(),
binderProperties.isInheritEnvironment(),
binderProperties.isDefaultCandidate()));
}
return binderConfigurations;
}
//TODO: Lifted from core - good candidate for exposing as a utility method in core.
// TODO: Lifted from core - good candidate for exposing as a utility method in core.
@SuppressWarnings("unchecked")
private void flatten(String propertyName, Object value, Map<String, Object> flattenedProperties) {
private void flatten(String propertyName, Object value,
Map<String, Object> flattenedProperties) {
if (value instanceof Map) {
((Map<Object, Object>) value)
.forEach((k, v) -> flatten((propertyName != null ? propertyName + "." : "") + k, v, flattenedProperties));
((Map<Object, Object>) value).forEach((k, v) -> flatten(
(propertyName != null ? propertyName + "." : "") + k, v,
flattenedProperties));
}
else {
flattenedProperties.put(propertyName, value.toString());
@@ -117,64 +171,116 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
}
@Bean
public KafkaStreamsConfiguration kafkaStreamsConfiguration(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
Environment environment) {
KafkaProperties kafkaProperties = binderConfigurationProperties.getKafkaProperties();
public KafkaStreamsConfiguration kafkaStreamsConfiguration(
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties properties,
Environment environment) {
KafkaProperties kafkaProperties = properties.getKafkaProperties();
Map<String, Object> streamsProperties = kafkaProperties.buildStreamsProperties();
if (kafkaProperties.getStreams().getApplicationId() == null) {
String applicationName = environment.getProperty("spring.application.name");
if (applicationName != null) {
streamsProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationName);
streamsProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
applicationName);
}
}
return new KafkaStreamsConfiguration(streamsProperties);
}
@Bean("streamConfigGlobalProperties")
public Map<String, Object> streamConfigGlobalProperties(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
public Map<String, Object> streamConfigGlobalProperties(
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties configProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration, ConfigurableEnvironment environment,
SendToDlqAndContinue sendToDlqAndContinue) {
Properties properties = kafkaStreamsConfiguration.asProperties();
// If the Spring Boot bootstrap servers setting was left at its default, override it with the
// value configured in the binder
String kafkaConnectionString = configProperties.getKafkaConnectionString();
if (kafkaConnectionString != null && kafkaConnectionString.equals("localhost:9092")) {
//Make sure the application actually set the property explicitly.
String kafkaStreamsBinderBroker = environment.getProperty("spring.cloud.stream.kafka.streams.binder.brokers");
if (StringUtils.isEmpty(kafkaStreamsBinderBroker)) {
//The Kafka Streams binder-specific brokers property is not set by the application.
//See if one is configured at the Kafka binder level.
String kafkaBinderBroker = environment.getProperty("spring.cloud.stream.kafka.binder.brokers");
if (!StringUtils.isEmpty(kafkaBinderBroker)) {
kafkaConnectionString = kafkaBinderBroker;
configProperties.setBrokers(kafkaConnectionString);
}
}
}
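// Net effect (for illustration): an explicitly set spring.cloud.stream.kafka.streams.binder.brokers wins,
// otherwise spring.cloud.stream.kafka.binder.brokers is used, otherwise the default localhost:9092 stands.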
if (ObjectUtils.isEmpty(properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG))) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
else {
Object bootstrapServerConfig = properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
Object bootstrapServerConfig = properties
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootstrapServerConfig instanceof String) {
@SuppressWarnings("unchecked")
String bootStrapServers = (String) properties
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
}
else if (bootstrapServerConfig instanceof List) {
List bootStrapCollection = (List) bootstrapServerConfig;
if (bootStrapCollection.size() == 1 && bootStrapCollection.get(0).equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
}
}
String binderProvidedApplicationId = binderConfigurationProperties.getApplicationId();
String binderProvidedApplicationId = configProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
properties.put(StreamsConfig.APPLICATION_ID_CONFIG, binderProvidedApplicationId);
properties.put(StreamsConfig.APPLICATION_ID_CONFIG,
binderProvidedApplicationId);
}
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
Serdes.ByteArraySerde.class.getName());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class.getName());
if (configProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndContinue) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class.getName());
else if (configProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndFail) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
SendToDlqAndContinue.class.getName());
else if (configProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
properties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
}
if (!ObjectUtils.isEmpty(binderConfigurationProperties.getConfiguration())) {
properties.putAll(binderConfigurationProperties.getConfiguration());
if (!ObjectUtils.isEmpty(configProperties.getConfiguration())) {
properties.putAll(configProperties.getConfiguration());
}
Map<String, Object> mergedConsumerConfig = configProperties.mergedConsumerConfiguration();
if (!ObjectUtils.isEmpty(mergedConsumerConfig)) {
properties.putAll(mergedConsumerConfig);
}
Map<String, Object> mergedProducerConfig = configProperties.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedProducerConfig)) {
properties.putAll(mergedProducerConfig);
}
if (!properties.containsKey(StreamsConfig.REPLICATION_FACTOR_CONFIG)) {
properties.put(StreamsConfig.REPLICATION_FACTOR_CONFIG,
(int) configProperties.getReplicationFactor());
}
return properties.entrySet().stream().collect(
Collectors.toMap((e) -> String.valueOf(e.getKey()), Map.Entry::getValue));
@@ -187,8 +293,11 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
public KStreamStreamListenerParameterAdapter kstreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kstreamBoundMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamStreamListenerParameterAdapter(kstreamBoundMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue);
KafkaStreamsMessageConversionDelegate kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamStreamListenerParameterAdapter(
kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue);
}
@Bean
@@ -199,42 +308,61 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
ObjectProvider<CleanupConfig> cleanupConfig) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(bindingServiceProperties,
kafkaStreamsExtendedBindingProperties, keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
ObjectProvider<CleanupConfig> cleanupConfig,
ObjectProvider<StreamsBuilderFactoryBeanCustomizer> customizerProvider, ConfigurableEnvironment environment) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(
bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters,
cleanupConfig.getIfUnique());
cleanupConfig.getIfUnique(), customizerProvider.getIfUnique(), environment);
}
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverterFactory, sendToDlqAndContinue,
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverter, sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue, binderConfigurationProperties);
}
@Bean
public CompositeNonNativeSerde compositeNonNativeSerde(CompositeMessageConverterFactory compositeMessageConverterFactory) {
public MessageConverterDelegateSerde messageConverterDelegateSerde(
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverterFactory) {
return new MessageConverterDelegateSerde(compositeMessageConverterFactory);
}
@Bean
public CompositeNonNativeSerde compositeNonNativeSerde(
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverterFactory) {
return new CompositeNonNativeSerde(compositeMessageConverterFactory);
}
@Bean
public KStreamBoundElementFactory kStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
public KStreamBoundElementFactory kStreamBoundElementFactory(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
return new KStreamBoundElementFactory(bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue);
KafkaStreamsBindingInformationCatalogue, encodingDecodingBindAdviceHandler);
}
@Bean
public KTableBoundElementFactory kTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
return new KTableBoundElementFactory(bindingServiceProperties);
public KTableBoundElementFactory kTableBoundElementFactory(
BindingServiceProperties bindingServiceProperties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KTableBoundElementFactory(bindingServiceProperties, encodingDecodingBindAdviceHandler, KafkaStreamsBindingInformationCatalogue);
}
@Bean
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
return new GlobalKTableBoundElementFactory(bindingServiceProperties);
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(
BindingServiceProperties properties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new GlobalKTableBoundElementFactory(properties, encodingDecodingBindAdviceHandler, KafkaStreamsBindingInformationCatalogue);
}
@Bean
@@ -249,20 +377,19 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
@SuppressWarnings("unchecked")
public KeyValueSerdeResolver keyValueSerdeResolver(@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties) {
return new KeyValueSerdeResolver((Map<String, Object>) streamConfigGlobalProperties, kafkaStreamsBinderConfigurationProperties);
@ConditionalOnMissingBean
public KeyValueSerdeResolver keyValueSerdeResolver(
@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
@Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties properties) {
return new KeyValueSerdeResolver(
(Map<String, Object>) streamConfigGlobalProperties, properties);
}
@Bean
public QueryableStoreRegistry queryableStoreTypeRegistry(KafkaStreamsRegistry kafkaStreamsRegistry) {
return new QueryableStoreRegistry(kafkaStreamsRegistry);
}
@Bean
public InteractiveQueryService interactiveQueryServices(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new InteractiveQueryService(kafkaStreamsRegistry, binderConfigurationProperties);
public InteractiveQueryService interactiveQueryServices(
KafkaStreamsRegistry kafkaStreamsRegistry,
@Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties properties) {
return new InteractiveQueryService(kafkaStreamsRegistry, properties);
}
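// Usage note (hedged; the store name "counts" is illustrative): applications can look up state stores
// managed by the binder through this service, e.g.
//   ReadOnlyKeyValueStore<String, Long> store =
//       interactiveQueryService.getQueryableStore("counts", QueryableStoreTypes.keyValueStore());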
@Bean
@@ -271,14 +398,134 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
}
@Bean
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new StreamsBuilderFactoryManager(kafkaStreamsBindingInformationCatalogue, kafkaStreamsRegistry);
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue catalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
@Nullable StreamsListener listener) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener);
}
@Bean("kafkaStreamsDlqDispatchers")
public Map<String, KafkaStreamsDlqDispatch> dlqDispatchers() {
return new HashMap<>();
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
ObjectProvider<CleanupConfig> cleanupConfig,
StreamFunctionProperties streamFunctionProperties,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
ObjectProvider<StreamsBuilderFactoryBeanCustomizer> customizerProvider, ConfigurableEnvironment environment) {
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
cleanupConfig.getIfUnique(), streamFunctionProperties, kafkaStreamsBinderConfigurationProperties,
customizerProvider.getIfUnique(), environment);
}
@Bean
public EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler() {
return new EncodingDecodingBindAdviceHandler();
}
@Configuration
@ConditionalOnMissingBean(value = KafkaStreamsBinderMetrics.class, name = "outerContext")
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
protected class KafkaStreamsBinderMetricsConfiguration {
@Bean
@ConditionalOnBean(MeterRegistry.class)
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
@ConditionalOnMissingClass("org.springframework.kafka.core.MicrometerConsumerListener")
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
return new KafkaStreamsBinderMetrics(meterRegistry);
}
@ConditionalOnClass(name = "org.springframework.kafka.core.MicrometerConsumerListener")
@ConditionalOnBean(MeterRegistry.class)
protected class KafkaMicrometer {
@Bean
@ConditionalOnMissingBean(name = "binderStreamsListener")
public StreamsListener binderStreamsListener(MeterRegistry meterRegistry) {
return new StreamsListener() {
private final Map<String, KafkaStreamsMetrics> metrics = new HashMap<>();
@Override
public synchronized void streamsAdded(String id, KafkaStreams kafkaStreams) {
if (!this.metrics.containsKey(id)) {
List<Tag> streamsTags = new ArrayList<>();
streamsTags.add(new ImmutableTag("spring.id", id));
this.metrics.put(id, new KafkaStreamsMetrics(kafkaStreams, streamsTags));
this.metrics.get(id).bindTo(meterRegistry);
}
}
@Override
public synchronized void streamsRemoved(String id, KafkaStreams streams) {
KafkaStreamsMetrics removed = this.metrics.remove(id);
if (removed != null) {
removed.close();
}
}
};
}
}
}
@Configuration
@ConditionalOnBean(name = "outerContext")
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
protected class KafkaStreamsBinderMetricsConfigurationWithMultiBinder {
@Bean
@ConditionalOnMissingClass("org.springframework.kafka.core.MicrometerConsumerListener")
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(ConfigurableApplicationContext context) {
MeterRegistry meterRegistry = context.getBean("outerContext", ApplicationContext.class)
.getBean(MeterRegistry.class);
return new KafkaStreamsBinderMetrics(meterRegistry);
}
@ConditionalOnClass(name = "org.springframework.kafka.core.MicrometerConsumerListener")
@ConditionalOnBean(MeterRegistry.class)
protected class KafkaMicrometer {
@Bean
@ConditionalOnMissingBean(name = "binderStreamsListener")
public StreamsListener binderStreamsListener(ConfigurableApplicationContext context) {
MeterRegistry meterRegistry = context.getBean("outerContext", ApplicationContext.class)
.getBean(MeterRegistry.class);
return new StreamsListener() {
private final Map<String, KafkaStreamsMetrics> metrics = new HashMap<>();
@Override
public synchronized void streamsAdded(String id, KafkaStreams kafkaStreams) {
if (!this.metrics.containsKey(id)) {
List<Tag> streamsTags = new ArrayList<>();
streamsTags.add(new ImmutableTag("spring.id", id));
this.metrics.put(id, new KafkaStreamsMetrics(kafkaStreams, streamsTags));
this.metrics.get(id).bindTo(meterRegistry);
}
}
@Override
public synchronized void streamsRemoved(String id, KafkaStreams streams) {
KafkaStreamsMetrics removed = this.metrics.remove(id);
if (removed != null) {
removed.close();
}
}
};
}
}
}
}
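The two metrics configurations above register a StreamsListener that binds Micrometer's KafkaStreamsMetrics for every KafkaStreams instance the binder manages, tagging the meters with the factory bean id, and close the metrics when the instance is removed. Below is a minimal, self-contained sketch of that same wiring outside the binder, assuming micrometer-core is on the classpath; the topic names, application id and tag value are placeholders, not values the binder itself uses.
import java.util.List;
import java.util.Properties;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.binder.kafka.KafkaStreamsMetrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
public class KafkaStreamsMetricsSketch {
    public static void main(String[] args) {
        // Placeholder topology and configuration; no broker connection is made until start().
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("words").to("words-copy");
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "metrics-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        KafkaStreams kafkaStreams = new KafkaStreams(builder.build(), props);
        // Mirrors streamsAdded() above: tag the meters and bind them to the registry.
        MeterRegistry meterRegistry = new SimpleMeterRegistry();
        List<Tag> tags = List.of(Tag.of("spring.id", "metrics-sketch"));
        KafkaStreamsMetrics metrics = new KafkaStreamsMetrics(kafkaStreams, tags);
        metrics.bindTo(meterRegistry);
        // Mirrors streamsRemoved(): close the metrics binder when the streams instance goes away.
        metrics.close();
        kafkaStreams.close();
    }
}
Inside the binder this registration happens automatically once a MeterRegistry bean is present, per the conditions on the configurations above.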


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,60 +16,186 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.ApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Common methods used by various Kafka Streams types across the binders.
*
* @author Soby Chacko
* @author Gary Russell
*/
final class KafkaStreamsBinderUtils {
private static final Log LOGGER = LogFactory.getLog(KafkaStreamsBinderUtils.class);
private KafkaStreamsBinderUtils() {
}
static void prepareConsumerBinding(String name, String group, ApplicationContext context,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
static void prepareConsumerBinding(String name, String group,
ApplicationContext context, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
RetryTemplate retryTemplate,
ConfigurableListableBeanFactory beanFactory, String bindingName) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties =
(ExtendedConsumerProperties) properties;
if (binderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
// check for deserialization handler at the consumer binding, as that takes precedence.
final DeserializationExceptionHandler deserializationExceptionHandler =
properties.getExtension().getDeserializationExceptionHandler();
if (deserializationExceptionHandler == DeserializationExceptionHandler.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
String[] inputTopics = StringUtils.commaDelimitedListToStringArray(name);
for (String inputTopic : inputTopics) {
kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group, extendedConsumerProperties);
kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group,
extendedConsumerProperties);
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName()) ?
new KafkaStreamsDlqDispatch(extendedConsumerProperties.getExtension().getDlqName(), binderConfigurationProperties,
extendedConsumerProperties.getExtension()) : null;
Map<String, DlqPartitionFunction> partitionFunctions =
context.getBeansOfType(DlqPartitionFunction.class, false, false);
boolean oneFunctionPresent = partitionFunctions.size() == 1;
Integer dlqPartitions = extendedConsumerProperties.getExtension().getDlqPartitions();
DlqPartitionFunction partitionFunction = oneFunctionPresent
? partitionFunctions.values().iterator().next()
: DlqPartitionFunction.determineFallbackFunction(dlqPartitions, LOGGER);
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
extendedConsumerProperties.getExtension().getDlqProducerProperties()),
binderConfigurationProperties);
KafkaOperations<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFactory);
Map<String, DlqDestinationResolver> dlqDestinationResolvers =
context.getBeansOfType(DlqDestinationResolver.class, false, false);
BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver =
dlqDestinationResolvers.isEmpty() ? (cr, e) -> new TopicPartition(extendedConsumerProperties.getExtension().getDlqName(),
partitionFunction.apply(group, cr, e)) :
(cr, e) -> new TopicPartition(dlqDestinationResolvers.values().iterator().next().apply(cr, e),
partitionFunction.apply(group, cr, e));
DeadLetterPublishingRecoverer kafkaStreamsBinderDlqRecoverer = !dlqDestinationResolvers.isEmpty() || !StringUtils
.isEmpty(extendedConsumerProperties.getExtension().getDlqName())
? new DeadLetterPublishingRecoverer(kafkaTemplate, destinationResolver)
: null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName, binderConfigurationProperties,
extendedConsumerProperties.getExtension());
if (StringUtils.isEmpty(
extendedConsumerProperties.getExtension().getDlqName()) && dlqDestinationResolvers.isEmpty()) {
destinationResolver = (cr, e) -> new TopicPartition("error." + inputTopic + "." + group,
partitionFunction.apply(group, cr, e));
kafkaStreamsBinderDlqRecoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
destinationResolver);
}
SendToDlqAndContinue sendToDlqAndContinue = context.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
kafkaStreamsDlqDispatchers.put(inputTopic, kafkaStreamsDlqDispatch);
SendToDlqAndContinue sendToDlqAndContinue = context
.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic,
kafkaStreamsBinderDlqRecoverer);
}
}
if (!StringUtils.hasText(properties.getRetryTemplateName())) {
@SuppressWarnings("unchecked")
BeanDefinition retryTemplateBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<RetryTemplate>) retryTemplate.getClass(),
() -> retryTemplate)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(bindingName + "-RetryTemplate", retryTemplateBeanDefinition);
}
}
private static DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
Map<String, String> configs = producerProperties.getExtension().getConfiguration();
Assert.state(!configs.containsKey(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG),
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG + " cannot be overridden at the binding level; "
+ "use multiple binders instead");
if (!ObjectUtils.isEmpty(configs)) {
props.putAll(configs);
}
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
static boolean supportsKStream(MethodParameter methodParameter, Class<?> targetBeanClass) {
return KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
}
}
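prepareConsumerBinding() above enables the DLQ when the binder-level deserializationExceptionHandler is sendToDlq or when the consumer binding sets it (the binding-level value takes precedence), and it then consults single DlqPartitionFunction and DlqDestinationResolver beans to decide where failed records land. The sketch below shows an application providing those beans; the lambda shapes are inferred from how the binder applies them above ((group, record, exception) -> partition and (record, exception) -> topic), and the topic suffix is purely illustrative.
import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class DlqCustomizationSketch {
    // Send every failed record for a group to partition 0 of the DLQ topic.
    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        return (group, record, exception) -> 0;
    }
    // Derive the DLQ topic from the original topic instead of the default
    // "error.<topic>.<group>" name used when no resolver bean is present.
    @Bean
    public DlqDestinationResolver dlqDestinationResolver() {
        return (record, exception) -> record.topic() + ".failures";
    }
}
Note that the binder falls back to its defaults unless exactly one partition-function bean is present, so keep a single bean per type.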


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,24 +16,28 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* A catalogue that provides binding information for Kafka Streams target types such as KStream.
* It also keeps a catalogue for the underlying {@link StreamsBuilderFactoryBean} and
* {@link StreamsConfig} associated with various {@link org.springframework.cloud.stream.annotation.StreamListener}
* methods in the {@link org.springframework.context.ApplicationContext}.
* A catalogue that provides binding information for Kafka Streams target types such as
* KStream. It also keeps a catalogue for the underlying {@link StreamsBuilderFactoryBean}
* and {@link StreamsConfig} associated with various
* {@link org.springframework.cloud.stream.annotation.StreamListener} methods in the
* {@link org.springframework.context.ApplicationContext}.
*
* @author Soby Chacko
*/
@@ -45,10 +49,15 @@ class KafkaStreamsBindingInformationCatalogue {
private final Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = new HashSet<>();
private final Map<Object, ResolvableType> outboundKStreamResolvables = new HashMap<>();
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
private final Map<Object, String> bindingNamesPerTarget = new HashMap<>();
/**
* For a given bounded {@link KStream}, retrieve its corresponding destination
* on the broker.
*
* For a given bounded {@link KStream}, retrieve its corresponding destination on the
* broker.
* @param bindingTarget binding target for KStream
* @return destination topic on Kafka
*/
@@ -59,7 +68,6 @@ class KafkaStreamsBindingInformationCatalogue {
/**
* Is native decoding enabled on this {@link KStream}.
*
* @param bindingTarget binding target for KStream
* @return true if native decoding is enabled, false otherwise.
*/
@@ -73,7 +81,6 @@ class KafkaStreamsBindingInformationCatalogue {
/**
* Is DLQ enabled for this {@link KStream}.
*
* @param bindingTarget binding target for KStream
* @return true if DLQ is enabled, false otherwise.
*/
@@ -83,7 +90,6 @@ class KafkaStreamsBindingInformationCatalogue {
/**
* Retrieve the content type associated with a given {@link KStream}.
*
* @param bindingTarget binding target for KStream
* @return content Type associated.
*/
@@ -94,28 +100,32 @@ class KafkaStreamsBindingInformationCatalogue {
/**
* Register a cache for bounded KStream -> {@link BindingProperties}.
*
* @param bindingTarget binding target for KStream
* @param bindingProperties {@link BindingProperties} for this KStream
*/
void registerBindingProperties(KStream<?, ?> bindingTarget, BindingProperties bindingProperties) {
this.bindingProperties.put(bindingTarget, bindingProperties);
void registerBindingProperties(KStream<?, ?> bindingTarget,
BindingProperties bindingProperties) {
if (bindingProperties != null) {
this.bindingProperties.put(bindingTarget, bindingProperties);
}
}
/**
* Register a cache for bounded KStream -> {@link KafkaStreamsConsumerProperties}.
*
* @param bindingTarget binding target for KStream
* @param kafkaStreamsConsumerProperties consumer properties for this KStream
*/
void registerConsumerProperties(KStream<?, ?> bindingTarget, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
this.consumerProperties.put(bindingTarget, kafkaStreamsConsumerProperties);
void registerConsumerProperties(KStream<?, ?> bindingTarget,
KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
if (kafkaStreamsConsumerProperties != null) {
this.consumerProperties.put(bindingTarget, kafkaStreamsConsumerProperties);
}
}
/**
* Adds a mapping for KStream -> {@link StreamsBuilderFactoryBean}.
*
* @param streamsBuilderFactoryBean provides the {@link StreamsBuilderFactoryBean} mapped to the KStream
* @param streamsBuilderFactoryBean provides the {@link StreamsBuilderFactoryBean}
* mapped to the KStream
*/
void addStreamBuilderFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
this.streamsBuilderFactoryBeans.add(streamsBuilderFactoryBean);
@@ -124,4 +134,45 @@ class KafkaStreamsBindingInformationCatalogue {
Set<StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeans() {
return this.streamsBuilderFactoryBeans;
}
void addOutboundKStreamResolvable(Object key, ResolvableType outboundResolvable) {
this.outboundKStreamResolvables.put(key, outboundResolvable);
}
ResolvableType getOutboundKStreamResolvable(Object key) {
return outboundKStreamResolvables.get(key);
}
/**
* Adding a mapping for KStream target to its corresponding KeySerde.
* This is used for sending to DLQ when deserialization fails. See {@link KafkaStreamsMessageConversionDelegate}
* for details.
*
* @param kStreamTarget target KStream
* @param keySerde Serde used for the key
*/
void addKeySerde(KStream<?, ?> kStreamTarget, Serde<?> keySerde) {
this.keySerdeInfo.put(kStreamTarget, keySerde);
}
Serde<?> getKeySerde(KStream<?, ?> kStreamTarget) {
return this.keySerdeInfo.get(kStreamTarget);
}
Map<KStream<?, ?>, BindingProperties> getBindingProperties() {
return bindingProperties;
}
Map<KStream<?, ?>, KafkaStreamsConsumerProperties> getConsumerProperties() {
return consumerProperties;
}
void addBindingNamePerTarget(Object target, String bindingName) {
this.bindingNamesPerTarget.put(target, bindingName);
}
String bindingNamePerTarget(Object target) {
return this.bindingNamesPerTarget.get(target);
}
}


@@ -1,146 +0,0 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.ObjectUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
/**
* Send records in error to a DLQ.
*
* @author Soby Chacko
* @author Rafal Zukowski
* @author Gary Russell
*/
class KafkaStreamsDlqDispatch {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaTemplate<byte[], byte[]> kafkaTemplate;
private final String dlqName;
KafkaStreamsDlqDispatch(String dlqName,
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaConsumerProperties kafkaConsumerProperties) {
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(kafkaConsumerProperties.getDlqProducerProperties()),
kafkaBinderConfigurationProperties);
this.kafkaTemplate = new KafkaTemplate<>(producerFactory);
this.dlqName = dlqName;
}
@SuppressWarnings("unchecked")
public void sendToDlq(byte[] key, byte[] value, int partittion) {
ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(this.dlqName, partittion,
key, value, null);
StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key))).append("'")
.append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(value)))
.append("'").append(" received from ")
.append(partittion);
ListenableFuture<SendResult<byte[], byte[]>> sentDlq = null;
try {
sentDlq = this.kafkaTemplate.send(producerRecord);
sentDlq.addCallback(new ListenableFutureCallback<SendResult<byte[], byte[]>>() {
@Override
public void onFailure(Throwable ex) {
KafkaStreamsDlqDispatch.this.logger.error(
"Error sending to DLQ " + sb.toString(), ex);
}
@Override
public void onSuccess(SendResult<byte[], byte[]> result) {
if (KafkaStreamsDlqDispatch.this.logger.isDebugEnabled()) {
KafkaStreamsDlqDispatch.this.logger.debug(
"Sent to DLQ " + sb.toString());
}
}
});
}
catch (Exception ex) {
if (sentDlq == null) {
KafkaStreamsDlqDispatch.this.logger.error(
"Error sending to DLQ " + sb.toString(), ex);
}
}
}
private DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
//Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
private String toDisplayString(String original) {
if (original.length() <= 50) {
return original;
}
return original.substring(0, 50) + "...";
}
}
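The removed KafkaStreamsDlqDispatch above published failed records through its own KafkaTemplate; the binder now builds a spring-kafka DeadLetterPublishingRecoverer for this purpose (see prepareConsumerBinding() earlier). A minimal sketch of the equivalent wiring, with a placeholder bootstrap address and group name:
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
public class DlqRecovererSketch {
    public static DeadLetterPublishingRecoverer dlqRecoverer() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The DLQ always receives the raw byte[] key and value the consumer received.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        KafkaTemplate<byte[], byte[]> template =
                new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
        // Same resolver shape as in KafkaStreamsBinderUtils: publish the failed record to
        // "error.<topic>.<group>" on the partition it originally came from.
        return new DeadLetterPublishingRecoverer(template,
                (record, exception) -> new TopicPartition(
                        "error." + record.topic() + ".my-group", record.partition()));
    }
}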


@@ -0,0 +1,375 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderProcessor implements BeanFactoryAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionProcessor.class);
private static final String OUTBOUND = "outbound";
private final BindingServiceProperties bindingServiceProperties;
private final Map<String, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private BeanFactory beanFactory;
private StreamFunctionProperties streamFunctionProperties;
private KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties;
StreamsBuilderFactoryBeanCustomizer customizer;
ConfigurableEnvironment environment;
public KafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
CleanupConfig cleanupConfig,
StreamFunctionProperties streamFunctionProperties,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanCustomizer customizer, ConfigurableEnvironment environment) {
super(bindingServiceProperties, kafkaStreamsBindingInformationCatalogue, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.streamFunctionProperties = streamFunctionProperties;
this.kafkaStreamsBinderConfigurationProperties = kafkaStreamsBinderConfigurationProperties;
this.customizer = customizer;
this.environment = environment;
}
private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType,
KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory) {
Map<String, ResolvableType> resolvableTypeMap = new LinkedHashMap<>();
if (resolvableType != null && resolvableType.getRawClass() != null) {
int inputCount = 1;
ResolvableType currentOutputGeneric;
if (resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class)) {
inputCount = 2;
currentOutputGeneric = resolvableType.getGeneric(2);
}
else {
currentOutputGeneric = resolvableType.getGeneric(1);
}
while (currentOutputGeneric.getRawClass() != null && functionOrConsumerFound(currentOutputGeneric)) {
inputCount++;
currentOutputGeneric = currentOutputGeneric.getGeneric(1);
}
final Set<String> inputs = new LinkedHashSet<>(kafkaStreamsBindableProxyFactory.getInputs());
final Iterator<String> iterator = inputs.iterator();
popuateResolvableTypeMap(resolvableType, resolvableTypeMap, iterator);
ResolvableType iterableResType = resolvableType;
int i = resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class) ? 2 : 1;
ResolvableType outboundResolvableType;
if (i == inputCount) {
outboundResolvableType = iterableResType.getGeneric(i);
}
else {
while (i < inputCount && iterator.hasNext()) {
iterableResType = iterableResType.getGeneric(1);
if (iterableResType.getRawClass() != null &&
functionOrConsumerFound(iterableResType)) {
popuateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
}
i++;
}
outboundResolvableType = iterableResType.getGeneric(1);
}
resolvableTypeMap.put(OUTBOUND, outboundResolvableType);
}
return resolvableTypeMap;
}
private boolean functionOrConsumerFound(ResolvableType iterableResType) {
return iterableResType.getRawClass().equals(Function.class) ||
iterableResType.getRawClass().equals(Consumer.class);
}
private void popuateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap,
Iterator<String> iterator) {
final String next = iterator.next();
resolvableTypeMap.put(next, resolvableType.getGeneric(0));
if (resolvableType.getRawClass() != null &&
(resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class))
&& iterator.hasNext()) {
resolvableTypeMap.put(iterator.next(), resolvableType.getGeneric(1));
}
}
/**
* This method must be kept stateless. In the case of multiple function beans in an application,
* isolated {@link KafkaStreamsBindableProxyFactory} instances are passed in separately for those functions. If the
* state is shared between invocations, that will create potential race conditions. Hence, invocations of this method
* should not be dependent on state modified by a previous invocation.
*
* @param resolvableType type of the binding
* @param functionName bean name of the function
* @param kafkaStreamsBindableProxyFactory bindable proxy factory for the Kafka Streams type
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
public void setupFunctionInvokerForKafkaStreams(ResolvableType resolvableType, String functionName,
KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory) {
final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType, kafkaStreamsBindableProxyFactory);
ResolvableType outboundResolvableType = stringResolvableTypeMap.remove(OUTBOUND);
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(stringResolvableTypeMap, functionName);
try {
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(Consumer.class)) {
Consumer<Object> consumer = (Consumer) this.beanFactory.getBean(functionName);
consumer.accept(adaptedInboundArguments[0]);
}
else if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiConsumer.class)) {
BiConsumer<Object, Object> biConsumer = (BiConsumer) this.beanFactory.getBean(functionName);
biConsumer.accept(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else {
Object result;
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiFunction.class)) {
BiFunction<Object, Object, Object> biFunction = (BiFunction) beanFactory.getBean(functionName);
result = biFunction.apply(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else {
Function<Object, Object> function = (Function) beanFactory.getBean(functionName);
result = function.apply(adaptedInboundArguments[0]);
}
int i = 1;
while (result instanceof Function || result instanceof Consumer) {
if (result instanceof Function) {
result = ((Function) result).apply(adaptedInboundArguments[i]);
}
else {
((Consumer) result).accept(adaptedInboundArguments[i]);
result = null;
}
i++;
}
if (result != null) {
final Set<String> outputs = new TreeSet<>(kafkaStreamsBindableProxyFactory.getOutputs());
final Iterator<String> outboundDefinitionIterator = outputs.iterator();
if (result.getClass().isArray()) {
// Binding target as the output bindings were deferred in the KafkaStreamsBindableProxyFactory
// due to the fact that it didn't know the returned array size. At this point in the execution,
// we know exactly the number of outbound components (from the array length), so do the binding.
final int length = ((Object[]) result).length;
List<String> outputBindings = getOutputBindings(functionName, length);
Iterator<String> iterator = outputBindings.iterator();
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
Object[] outboundKStreams = (Object[]) result;
for (int ij = 0; ij < length; ij++) {
String next = iterator.next();
kafkaStreamsBindableProxyFactory.addOutputBinding(next, KStream.class);
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> kafkaStreamsBindableProxyFactory.getOutputHolders().get(next).getBoundTarget());
registry.registerBeanDefinition(next, rootBeanDefinition1);
Object targetBean = this.applicationContext.getBean(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) outboundKStreams[ij]);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(
targetBean, outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));
}
}
else {
if (outboundDefinitionIterator.hasNext()) {
final String next = outboundDefinitionIterator.next();
Object targetBean = this.applicationContext.getBean(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) result);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(
targetBean, outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));
}
}
}
}
}
catch (Exception ex) {
throw new BeanInitializationException("Cannot setup function invoker for this Kafka Streams function.", ex);
}
}
private List<String> getOutputBindings(String functionName, int outputs) {
List<String> outputBindings = this.streamFunctionProperties.getOutputBindings(functionName);
List<String> outputBindingNames = new ArrayList<>();
if (!CollectionUtils.isEmpty(outputBindings)) {
outputBindingNames.addAll(outputBindings);
return outputBindingNames;
}
else {
for (int i = 0; i < outputs; i++) {
outputBindingNames.add(String.format("%s-%s-%d", functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX, i));
}
}
return outputBindingNames;
}
@SuppressWarnings({"unchecked"})
private Object[] adaptAndRetrieveInboundArguments(Map<String, ResolvableType> stringResolvableTypeMap,
String functionName) {
Object[] arguments = new Object[stringResolvableTypeMap.size()];
int i = 0;
for (String input : stringResolvableTypeMap.keySet()) {
Class<?> parameterType = stringResolvableTypeMap.get(input).getRawClass();
if (input != null) {
Object targetBean = applicationContext.getBean(input);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(input);
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(functionName)) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext,
input, kafkaStreamsBinderConfigurationProperties, customizer, this.environment, bindingProperties);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean =
this.methodStreamsBuilderFactoryBeanMap.get(functionName);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties =
this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(input);
extendedConsumerProperties.setApplicationId(applicationId);
//get state store spec
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties, stringResolvableTypeMap.get(input));
LOG.info("Key Serde used for " + input + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(input).isUseNativeDecoding() ?
getValueSerde(input, extendedConsumerProperties, stringResolvableTypeMap.get(input)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + input + ": " + valueSerde.getClass().getName());
final Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(input, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getKStream(input, bindingProperties, extendedConsumerProperties,
streamsBuilder, keySerde, valueSerde, autoOffsetReset, i == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper =
(KStreamBoundElementFactory.KStreamWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
final Class<?> valueClass =
(stringResolvableTypeMap.get(input).getGeneric(1).getRawClass() != null)
? (stringResolvableTypeMap.get(input).getGeneric(1).getRawClass()) : Object.class;
if (this.kafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(
(KStream<?, ?>) kStreamWrapper)) {
arguments[i] = stream;
}
else {
arguments[i] = this.kafkaStreamsMessageConversionDelegate.deserializeOnInbound(
valueClass, stream);
}
}
if (arguments[i] == null) {
arguments[i] = stream;
}
Assert.notNull(arguments[i], "Problems encountered while adapting the function argument.");
}
else {
handleKTableGlobalKTableInputs(arguments, i, input, parameterType, targetBean, streamsBuilderFactoryBean,
streamsBuilder, extendedConsumerProperties, keySerde, valueSerde, autoOffsetReset, i == 0);
}
i++;
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
}
else {
throw new IllegalStateException(StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
}
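setupFunctionInvokerForKafkaStreams() and buildTypeMap() above derive the input and output bindings from the generic signature of the function bean: BiFunction/BiConsumer contribute two inputs, each nested Function in a curried signature contributes one more, and the innermost generic is treated as the outbound type. A hedged sketch of function beans that would be wired this way (bean names and topologies are illustrative, not taken from the binder):
import java.time.Duration;
import java.util.function.BiFunction;
import java.util.function.Function;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class MultiInputFunctionSketch {
    // Two inputs via BiFunction: generics 0 and 1 map to the first two input bindings,
    // generic 2 becomes the outbound binding.
    @Bean
    public BiFunction<KStream<String, Long>, KStream<String, Long>, KStream<String, Long>> join() {
        return (left, right) -> left.join(right, Long::sum, JoinWindows.of(Duration.ofSeconds(30)));
    }
    // Curried form: each nested Function adds one more input binding; the innermost
    // KStream generic is resolved as the OUTBOUND type in buildTypeMap().
    @Bean
    public Function<KStream<String, Long>, Function<KStream<String, Long>, KStream<String, Long>>> merge() {
        return first -> second -> first.merge(second);
    }
}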


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,40 +19,46 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;
/**
* Delegate for handling all framework level message conversions inbound and outbound on {@link KStream}.
* If native encoding is not enabled, then serialization will be performed on outbound messages based
* on a contentType. Similarly, if native decoding is not enabled, deserialization will be performed on
* inbound messages based on a contentType. Based on the contentType, a {@link MessageConverter} will
* be resolved.
* Delegate for handling all framework level message conversions inbound and outbound on
* {@link KStream}. If native encoding is not enabled, then serialization will be
* performed on outbound messages based on a contentType. Similarly, if native decoding is
* not enabled, deserialization will be performed on inbound messages based on a
* contentType. Based on the contentType, a {@link MessageConverter} will be resolved.
*
* @author Soby Chacko
*/
public class KafkaStreamsMessageConversionDelegate {
private static final Log LOG = LogFactory.getLog(KafkaStreamsMessageConversionDelegate.class);
private static final Log LOG = LogFactory
.getLog(KafkaStreamsMessageConversionDelegate.class);
private static final ThreadLocal<KeyValue<Object, Object>> keyValueThreadLocal = new ThreadLocal<>();
private final CompositeMessageConverterFactory compositeMessageConverterFactory;
private final CompositeMessageConverter compositeMessageConverter;
private final SendToDlqAndContinue sendToDlqAndContinue;
@@ -60,11 +66,14 @@ public class KafkaStreamsMessageConversionDelegate {
private final KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties;
KafkaStreamsMessageConversionDelegate(CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
this.compositeMessageConverterFactory = compositeMessageConverterFactory;
Exception[] failedWithDeserException = new Exception[1];
KafkaStreamsMessageConversionDelegate(
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
this.compositeMessageConverter = compositeMessageConverter;
this.sendToDlqAndContinue = sendToDlqAndContinue;
this.kstreamBindingInformationCatalogue = kstreamBindingInformationCatalogue;
this.kstreamBinderConfigurationProperties = kstreamBinderConfigurationProperties;
@@ -72,87 +81,142 @@ public class KafkaStreamsMessageConversionDelegate {
/**
* Serialize {@link KStream} records on outbound based on contentType.
*
* @param outboundBindTarget outbound KStream target
* @return serialized KStream
*/
@SuppressWarnings("rawtypes")
@SuppressWarnings({"rawtypes", "unchecked"})
public KStream serializeOnOutbound(KStream<?, ?> outboundBindTarget) {
String contentType = this.kstreamBindingInformationCatalogue.getContentType(outboundBindTarget);
MessageConverter messageConverter = this.compositeMessageConverterFactory.getMessageConverterForAllRegistered();
String contentType = this.kstreamBindingInformationCatalogue
.getContentType(outboundBindTarget);
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget
.filter((k, v) -> v != null)
.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});
kStreamWithEnrichedHeaders.process(() -> new Processor() {
ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public void process(Object key, Object value) {
if (perRecordContentTypeHolder.contentType != null) {
this.context.headers().remove(MessageHeaders.CONTENT_TYPE);
final Header header;
try {
header = new RecordHeader(MessageHeaders.CONTENT_TYPE,
new ObjectMapper().writeValueAsBytes(perRecordContentTypeHolder.contentType));
this.context.headers().add(header);
}
catch (Exception e) {
if (LOG.isDebugEnabled()) {
LOG.debug("Could not add content type header");
}
}
perRecordContentTypeHolder.unsetContentType();
}
}
@Override
public void close() {
return outboundBindTarget.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v :
MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
return
messageConverter.toMessage(message.getPayload(),
messageHeaders).getPayload();
});
return kStreamWithEnrichedHeaders;
}
/**
* Deserialize incoming {@link KStream} based on content type.
*
* @param valueClass on KStream value
* @param bindingTarget inbound KStream target
* @return deserialized KStream
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
public KStream deserializeOnInbound(Class<?> valueClass, KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = this.compositeMessageConverterFactory.getMessageConverterForAllRegistered();
public KStream deserializeOnInbound(Class<?> valueClass,
KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
resolvePerRecordContentType(bindingTarget, perRecordContentTypeHolder);
//Deserialize using a branching strategy
// Deserialize using a branching strategy
KStream<?, ?>[] branch = bindingTarget.branch(
//First filter where the message is converted and return true if everything went well, return false otherwise.
(o, o2) -> {
boolean isValidRecord = false;
// First filter where the message is converted and return true if
// everything went well, return false otherwise.
(o, o2) -> {
boolean isValidRecord = false;
try {
//if the record is a tombstone, ignore and exit from processing further.
if (o2 != null) {
if (o2 instanceof Message || o2 instanceof String || o2 instanceof byte[]) {
Message<?> m1 = null;
if (o2 instanceof Message) {
m1 = perRecordContentTypeHolder.contentType != null
? MessageBuilder.fromMessage((Message<?>) o2).setHeader(MessageHeaders.CONTENT_TYPE,
perRecordContentTypeHolder.contentType).build() : (Message<?>) o2;
try {
// if the record is a tombstone, ignore and exit from processing
// further.
if (o2 != null) {
if (o2 instanceof Message || o2 instanceof String
|| o2 instanceof byte[]) {
Message<?> m1 = null;
if (o2 instanceof Message) {
m1 = perRecordContentTypeHolder.contentType != null
? MessageBuilder.fromMessage((Message<?>) o2)
.setHeader(
MessageHeaders.CONTENT_TYPE,
perRecordContentTypeHolder.contentType)
.build()
: (Message<?>) o2;
}
else {
m1 = perRecordContentTypeHolder.contentType != null
? MessageBuilder.withPayload(o2).setHeader(
MessageHeaders.CONTENT_TYPE,
perRecordContentTypeHolder.contentType)
.build()
: MessageBuilder.withPayload(o2).build();
}
convertAndSetMessage(o, valueClass, messageConverter, m1);
}
else {
m1 = perRecordContentTypeHolder.contentType != null ? MessageBuilder.withPayload(o2)
.setHeader(MessageHeaders.CONTENT_TYPE, perRecordContentTypeHolder.contentType).build() : MessageBuilder.withPayload(o2).build();
keyValueThreadLocal.set(new KeyValue<>(o, o2));
}
convertAndSetMessage(o, valueClass, messageConverter, m1);
isValidRecord = true;
}
else {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
LOG.info(
"Received a tombstone record. This will be skipped from further processing.");
}
isValidRecord = true;
}
else {
LOG.info("Received a tombstone record. This will be skipped from further processing.");
catch (Exception e) {
LOG.warn(
"Deserialization has failed. This will be skipped from further processing.",
e);
// pass through
failedWithDeserException[0] = e;
}
}
catch (Exception e) {
LOG.warn("Deserialization has failed. This will be skipped from further processing.", e);
//pass through
}
return isValidRecord;
},
//second filter that catches any messages for which an exception thrown in the first filter above.
(k, v) -> true
);
//process errors from the second filter in the branch above.
processErrorFromDeserialization(bindingTarget, branch[1]);
return isValidRecord;
},
// second filter that catches any messages for which an exception thrown
// in the first filter above.
(k, v) -> true);
// process errors from the second filter in the branch above.
processErrorFromDeserialization(bindingTarget, branch[1], failedWithDeserException);
//first branch above is the branch where the messages are converted, let it go through further processing.
// first branch above is the branch where the messages are converted, let it go
// through further processing.
return branch[0].mapValues((o2) -> {
Object objectValue = keyValueThreadLocal.get().value;
keyValueThreadLocal.remove();
@@ -161,7 +225,8 @@ public class KafkaStreamsMessageConversionDelegate {
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private void resolvePerRecordContentType(KStream<?, ?> outboundBindTarget, PerRecordContentTypeHolder perRecordContentTypeHolder) {
private void resolvePerRecordContentType(KStream<?, ?> outboundBindTarget,
PerRecordContentTypeHolder perRecordContentTypeHolder) {
outboundBindTarget.process(() -> new Processor() {
ProcessorContext context;
@@ -174,11 +239,14 @@ public class KafkaStreamsMessageConversionDelegate {
@Override
public void process(Object key, Object value) {
final Headers headers = this.context.headers();
final Iterable<Header> contentTypes = headers.headers(MessageHeaders.CONTENT_TYPE);
final Iterable<Header> contentTypes = headers
.headers(MessageHeaders.CONTENT_TYPE);
if (contentTypes != null && contentTypes.iterator().hasNext()) {
final String contentType = new String(contentTypes.iterator().next().value());
//remove leading and trailing quotes
final String cleanContentType = StringUtils.replace(contentType, "\"", "");
final String contentType = new String(
contentTypes.iterator().next().value());
// remove leading and trailing quotes
final String cleanContentType = StringUtils.replace(contentType, "\"",
"");
perRecordContentTypeHolder.setContentType(cleanContentType);
}
}
@@ -190,7 +258,8 @@ public class KafkaStreamsMessageConversionDelegate {
});
}
private void convertAndSetMessage(Object o, Class<?> valueClass, MessageConverter messageConverter, Message<?> msg) {
private void convertAndSetMessage(Object o, Class<?> valueClass,
MessageConverter messageConverter, Message<?> msg) {
Object result = valueClass.isAssignableFrom(msg.getPayload().getClass())
? msg.getPayload() : messageConverter.fromMessage(msg, valueClass);
@@ -200,7 +269,8 @@ public class KafkaStreamsMessageConversionDelegate {
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget, KStream<?, ?> branch) {
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget,
KStream<?, ?> branch, Exception[] exception) {
branch.process(() -> new Processor() {
ProcessorContext context;
@@ -211,24 +281,42 @@ public class KafkaStreamsMessageConversionDelegate {
@Override
public void process(Object o, Object o2) {
//Only continue if the record was not a tombstone.
// Only continue if the record was not a tombstone.
if (o2 != null) {
if (KafkaStreamsMessageConversionDelegate.this.kstreamBindingInformationCatalogue.isDlqEnabled(bindingTarget)) {
String destination = this.context.topic();
if (KafkaStreamsMessageConversionDelegate.this.kstreamBindingInformationCatalogue
.isDlqEnabled(bindingTarget)) {
if (o2 instanceof Message) {
Message message = (Message) o2;
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) message.getPayload(), this.context.partition());
// We need to convert the key to a byte[] before sending to DLQ.
Serde keySerde = kstreamBindingInformationCatalogue.getKeySerde(bindingTarget);
Serializer keySerializer = keySerde.serializer();
byte[] keyBytes = keySerializer.serialize(null, o);
ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
keyBytes, message.getPayload());
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(consumerRecord, exception[0]);
}
else {
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) o2, this.context.partition());
ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
o, o2);
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(consumerRecord, exception[0]);
}
}
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
throw new IllegalStateException("Inbound deserialization failed. Stopping further processing of records.");
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
throw new IllegalStateException("Inbound deserialization failed. "
+ "Stopping further processing of records.");
}
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
//quietly passing through. No action needed, this is similar to log and continue.
LOG.error("Inbound deserialization failed. Skipping this record and continuing.");
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
// quietly passing through. No action needed, this is similar to
// log and continue.
LOG.error(
"Inbound deserialization failed. Skipping this record and continuing.");
}
}
}
@@ -247,5 +335,11 @@ public class KafkaStreamsMessageConversionDelegate {
void setContentType(String contentType) {
this.contentType = contentType;
}
void unsetContentType() {
this.contentType = null;
}
}
}
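The branch-based error handling shown above splits the inbound stream in two: records that convert cleanly continue on the first branch, while conversion failures fall through to the second branch, where they are either sent to the DLQ (when enabled for the binding) or handled according to the configured SerdeError policy (logAndFail throws and stops processing, logAndContinue logs and skips the record). The following is a minimal, self-contained sketch of that pattern; the topic name, conversion check, and error hook are illustrative placeholders, not the binder's actual code.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

public class DeserializationBranchSketch {

	public KStream<byte[], String> validRecords(StreamsBuilder builder) {
		KStream<byte[], byte[]> input = builder.stream("hypothetical-topic",
				Consumed.with(Serdes.ByteArray(), Serdes.ByteArray()));

		// First predicate keeps records that convert cleanly; the second catches
		// everything the first predicate rejected, i.e. the conversion failures.
		KStream<byte[], byte[]>[] branch = input.branch(
				(key, value) -> canConvert(value),
				(key, value) -> true);

		// branch[1] is where a DLQ hook or the logAndFail / logAndContinue policy would apply.
		branch[1].foreach((key, value) ->
				System.err.println("Deserialization failed; record skipped or sent to DLQ"));

		// branch[0] carries the convertible records on for further processing.
		return branch[0].mapValues(value -> new String(value));
	}

	private boolean canConvert(byte[] value) {
		// Stand-in for the delegate's real message conversion attempt.
		return value != null;
	}
}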

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,18 +16,28 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* An internal registry for holding {@KafkaStreams} objects maintained through
* An internal registry for holding {@link KafkaStreams} objects maintained through
* {@link StreamsBuilderFactoryManager}.
*
* @author Soby Chacko
*/
class KafkaStreamsRegistry {
public class KafkaStreamsRegistry {
private Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new HashMap<>();
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();
@@ -37,10 +47,35 @@ class KafkaStreamsRegistry {
/**
* Register the {@link KafkaStreams} object created in the application.
*
* @param kafkaStreams {@link KafkaStreams} object created in the application
* @param streamsBuilderFactoryBean {@link StreamsBuilderFactoryBean}
*/
void registerKafkaStreams(KafkaStreams kafkaStreams) {
void registerKafkaStreams(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
this.kafkaStreams.add(kafkaStreams);
this.streamsBuilderFactoryBeanMap.put(kafkaStreams, streamsBuilderFactoryBean);
}
/**
*
* @param kafkaStreams {@link KafkaStreams} object
* @return Corresponding {@link StreamsBuilderFactoryBean}.
*/
StreamsBuilderFactoryBean streamBuilderFactoryBean(KafkaStreams kafkaStreams) {
return this.streamsBuilderFactoryBeanMap.get(kafkaStreams);
}
public StreamsBuilderFactoryBean streamsBuilderFactoryBean(String applicationId) {
final Optional<StreamsBuilderFactoryBean> first = this.streamsBuilderFactoryBeanMap.values()
.stream()
.filter(streamsBuilderFactoryBean -> streamsBuilderFactoryBean
.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG)
.equals(applicationId))
.findFirst();
return first.orElse(null);
}
public List<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans() {
return new ArrayList<>(this.streamsBuilderFactoryBeanMap.values());
}
}
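With this change the registry tracks each StreamsBuilderFactoryBean by the KafkaStreams instance it created and can also be queried by application.id. A hedged usage sketch follows; the application id handling is illustrative and not part of the binder:

import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;

public class RegistryLookupSketch {

	// Stop one Kafka Streams processor by its application.id without touching the
	// other processors registered in the same application.
	public void stopProcessor(KafkaStreamsRegistry registry, String applicationId) {
		StreamsBuilderFactoryBean factoryBean = registry.streamsBuilderFactoryBean(applicationId);
		if (factoryBean != null && factoryBean.isRunning()) {
			factoryBean.stop();
		}
	}
}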

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -17,38 +17,28 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
@@ -60,16 +50,14 @@ import org.springframework.cloud.stream.binding.StreamListenerSetupMethodOrchest
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.ReflectionUtils;
@@ -78,20 +66,23 @@ import org.springframework.util.StringUtils;
/**
* Kafka Streams specific implementation for {@link StreamListenerSetupMethodOrchestrator}
* that overrides the default mechanisms for invoking StreamListener adapters.
*
* <p>
* The orchestration primarily focus on the following areas:
*
* 1. Allow multiple KStream output bindings (KStream branching) by allowing more than one output values on {@link SendTo}
* 2. Allow multiple inbound bindings for multiple KStream and or KTable/GlobalKTable types.
* 3. Each StreamListener method that it orchestrates gets its own {@link StreamsBuilderFactoryBean} and {@link StreamsConfig}
* <p>
 * 1. Allow multiple KStream output bindings (KStream branching) by allowing more than one
 * output value on {@link SendTo} 2. Allow multiple inbound bindings for multiple KStream
 * and/or KTable/GlobalKTable types. 3. Each StreamListener method that it orchestrates
 * gets its own {@link StreamsBuilderFactoryBean} and {@link StreamsConfig}
 * (a minimal illustrative StreamListener follows this class).
*
* @author Soby Chacko
* @author Lei Chen
* @author Gary Russell
*/
class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListenerSetupMethodOrchestrator, ApplicationContextAware {
class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStreamsBinderProcessor
implements StreamListenerSetupMethodOrchestrator {
private static final Log LOG = LogFactory.getLog(KafkaStreamsStreamListenerSetupMethodOrchestrator.class);
private static final Log LOG = LogFactory
.getLog(KafkaStreamsStreamListenerSetupMethodOrchestrator.class);
private final StreamListenerParameterAdapter streamListenerParameterAdapter;
@@ -105,38 +96,45 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final Map<Method, List<String>> registeredStoresPerMethod = new HashMap<>();
private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
private final CleanupConfig cleanupConfig;
StreamsBuilderFactoryBeanCustomizer customizer;
private ConfigurableApplicationContext applicationContext;
private final ConfigurableEnvironment environment;
KafkaStreamsStreamListenerSetupMethodOrchestrator(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
CleanupConfig cleanupConfig) {
KafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties extendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue bindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> listenerResultAdapters,
CleanupConfig cleanupConfig,
StreamsBuilderFactoryBeanCustomizer customizer,
ConfigurableEnvironment environment) {
super(bindingServiceProperties, bindingInformationCatalogue, extendedBindingProperties, keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.kafkaStreamsExtendedBindingProperties = extendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsBindingInformationCatalogue = bindingInformationCatalogue;
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = streamListenerResultAdapters;
this.cleanupConfig = cleanupConfig;
this.streamListenerResultAdapters = listenerResultAdapters;
this.customizer = customizer;
this.environment = environment;
}
@Override
public boolean supports(Method method) {
return methodParameterSupports(method) &&
(methodReturnTypeSuppports(method) || Void.TYPE.equals(method.getReturnType()));
return methodParameterSupports(method) && (methodReturnTypeSuppports(method)
|| Void.TYPE.equals(method.getReturnType()));
}
private boolean methodReturnTypeSuppports(Method method) {
Class<?> returnType = method.getReturnType();
if (returnType.equals(KStream.class) ||
(returnType.isArray() && returnType.getComponentType().equals(KStream.class))) {
if (returnType.equals(KStream.class) || (returnType.isArray()
&& returnType.getComponentType().equals(KStream.class))) {
return true;
}
return false;
@@ -157,12 +155,14 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
@Override
@SuppressWarnings({"rawtypes", "unchecked"})
public void orchestrateStreamListenerSetupMethod(StreamListener streamListener, Method method, Object bean) {
public void orchestrateStreamListenerSetupMethod(StreamListener streamListener,
Method method, Object bean) {
String[] methodAnnotatedOutboundNames = getOutboundBindingTargetNames(method);
validateStreamListenerMethod(streamListener, method, methodAnnotatedOutboundNames);
validateStreamListenerMethod(streamListener, method,
methodAnnotatedOutboundNames);
String methodAnnotatedInboundName = streamListener.value();
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(method, methodAnnotatedInboundName,
this.applicationContext,
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(method,
methodAnnotatedInboundName, this.applicationContext,
this.streamListenerParameterAdapter);
try {
ReflectionUtils.makeAccessible(method);
@@ -172,40 +172,53 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
else {
Object result = method.invoke(bean, adaptedInboundArguments);
if (result.getClass().isArray()) {
Assert.isTrue(methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
if (result.getClass().isArray()) {
Object[] outboundKStreams = (Object[]) result;
int i = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[i++]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream, targetBean);
break;
}
}
if (methodAnnotatedOutboundNames != null && methodAnnotatedOutboundNames.length > 0) {
if (result.getClass().isArray()) {
Assert.isTrue(
methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
}
else {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[0]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(result.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(result, targetBean);
break;
if (methodAnnotatedOutboundNames != null && methodAnnotatedOutboundNames.length > 0) {
if (result.getClass().isArray()) {
Object[] outboundKStreams = (Object[]) result;
int i = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[i++]);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean, ResolvableType.forMethodReturnType(method));
adaptStreamListenerResult(outboundKStream, targetBean);
}
}
else {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[0]);
kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean, ResolvableType.forMethodReturnType(method));
adaptStreamListenerResult(result, targetBean);
}
}
}
}
catch (Exception ex) {
throw new BeanInitializationException("Cannot setup StreamListener for " + method, ex);
throw new BeanInitializationException(
"Cannot setup StreamListener for " + method, ex);
}
}
@SuppressWarnings("unchecked")
private void adaptStreamListenerResult(Object outboundKStream, Object targetBean) {
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(
outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream,
targetBean);
break;
}
}
}
@@ -213,95 +226,94 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
@SuppressWarnings({"unchecked"})
public Object[] adaptAndRetrieveInboundArguments(Method method, String inboundName,
ApplicationContext applicationContext,
StreamListenerParameterAdapter... streamListenerParameterAdapters) {
StreamListenerParameterAdapter... adapters) {
Object[] arguments = new Object[method.getParameterTypes().length];
for (int parameterIndex = 0; parameterIndex < arguments.length; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method, parameterIndex);
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Class<?> parameterType = methodParameter.getParameterType();
Object targetReferenceValue = null;
if (methodParameter.hasParameterAnnotation(Input.class)) {
targetReferenceValue = AnnotationUtils.getValue(methodParameter.getParameterAnnotation(Input.class));
Input methodAnnotation = methodParameter.getParameterAnnotation(Input.class);
targetReferenceValue = AnnotationUtils
.getValue(methodParameter.getParameterAnnotation(Input.class));
Input methodAnnotation = methodParameter
.getParameterAnnotation(Input.class);
inboundName = methodAnnotation.value();
}
else if (arguments.length == 1 && StringUtils.hasText(inboundName)) {
targetReferenceValue = inboundName;
}
if (targetReferenceValue != null) {
Assert.isInstanceOf(String.class, targetReferenceValue, "Annotation value must be a String");
Object targetBean = applicationContext.getBean((String) targetReferenceValue);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(inboundName);
enableNativeDecodingForKTableAlways(parameterType, bindingProperties);
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
Assert.isInstanceOf(String.class, targetReferenceValue,
"Annotation value must be a String");
Object targetBean = applicationContext
.getBean((String) targetReferenceValue);
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
// Retrieve the StreamsConfig created for this method if available.
// Otherwise, create the StreamsBuilderFactory and get the underlying
// config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
buildStreamsBuilderAndRetrieveConfig(method, applicationContext, inboundName);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(method.getDeclaringClass().getSimpleName() + "-" + method.getName(),
applicationContext,
inboundName, null, customizer, this.environment, bindingProperties);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap.get(method);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap
.get(method);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(inboundName);
//get state store spec
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
extendedConsumerProperties.setApplicationId(applicationId);
// get state store spec
KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties);
Serde<?> valueSerde = this.keyValueSerdeResolver.getInboundValueSerde(bindingProperties.getConsumer(), extendedConsumerProperties);
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest : autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest : autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default: break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
LOG.warn("Detected resetOffsets configured on binding " + inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
Serde<?> keySerde = this.keyValueSerdeResolver
.getInboundKeySerde(extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter));
LOG.info("Key Serde used for " + targetReferenceValue + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding() ?
getValueSerde(inboundName, extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + targetReferenceValue + ": " + valueSerde.getClass().getName());
Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(inboundName, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(inboundName, spec, bindingProperties,
streamsBuilder, keySerde, valueSerde, autoOffsetReset);
KStream<?, ?> stream = getkStream(inboundName, spec,
bindingProperties, extendedConsumerProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset, parameterIndex == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KStream)
// wrap the proxy created during the initial target type binding
// with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
for (StreamListenerParameterAdapter streamListenerParameterAdapter : streamListenerParameterAdapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(), methodParameter)) {
arguments[parameterIndex] = streamListenerParameterAdapter.adapt(kStreamWrapper, methodParameter);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde(stream, keySerde);
BindingProperties bindingProperties1 = this.kafkaStreamsBindingInformationCatalogue.getBindingProperties().get(kStreamWrapper);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {
arguments[parameterIndex] = streamListenerParameterAdapter
.adapt(stream, methodParameter);
break;
}
}
if (arguments[parameterIndex] == null && parameterType.isAssignableFrom(stream.getClass())) {
if (arguments[parameterIndex] == null
&& parameterType.isAssignableFrom(stream.getClass())) {
arguments[parameterIndex] = stream;
}
Assert.notNull(arguments[parameterIndex], "Cannot convert argument " + parameterIndex + " of " + method
+ "from " + stream.getClass() + " to " + parameterType);
Assert.notNull(arguments[parameterIndex],
"Cannot convert argument " + parameterIndex + " of "
+ method + "from " + stream.getClass() + " to "
+ parameterType);
}
else if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(inboundName);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper = (KTableBoundElementFactory.KTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(inboundName);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper = (GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
else {
handleKTableGlobalKTableInputs(arguments, parameterIndex, inboundName, parameterType, targetBean, streamsBuilderFactoryBean,
streamsBuilder, extendedConsumerProperties, keySerde, valueSerde, autoOffsetReset, parameterIndex == 0);
}
}
catch (Exception ex) {
@@ -309,67 +321,41 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
}
}
else {
throw new IllegalStateException(StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
throw new IllegalStateException(
StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;
}
private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAsGlobalKTable(streamsBuilder, bindingDestination, materializedAs, keySerde, valueSerde, autoOffsetReset) :
streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAs(streamsBuilder, bindingDestination, materializedAs, keySerde, valueSerde, autoOffsetReset) :
streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination, String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(StreamsBuilder streamsBuilder, String destination, String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k)
.withValueSerde(v);
}
private StoreBuilder buildStateStore(KafkaStreamsStateStoreProperties spec) {
try {
Serde<?> keySerde = this.keyValueSerdeResolver.getStateStoreKeySerde(spec.getKeySerdeString());
Serde<?> valueSerde = this.keyValueSerdeResolver.getStateStoreValueSerde(spec.getValueSerdeString());
Serde<?> keySerde = this.keyValueSerdeResolver
.getStateStoreKeySerde(spec.getKeySerdeString());
Serde<?> valueSerde = this.keyValueSerdeResolver
.getStateStoreValueSerde(spec.getValueSerdeString());
StoreBuilder builder;
switch (spec.getType()) {
case KEYVALUE:
builder = Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(spec.getName()), keySerde, valueSerde);
break;
case WINDOW:
builder = Stores.windowStoreBuilder(Stores.persistentWindowStore(spec.getName(), spec.getRetention(), 3, spec.getLength(), false),
keySerde,
builder = Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore(spec.getName()), keySerde,
valueSerde);
break;
case WINDOW:
builder = Stores
.windowStoreBuilder(
Stores.persistentWindowStore(spec.getName(),
spec.getRetention(), 3, spec.getLength(), false),
keySerde, valueSerde);
break;
case SESSION:
builder = Stores.sessionStoreBuilder(Stores.persistentSessionStore(spec.getName(), spec.getRetention()), keySerde, valueSerde);
builder = Stores.sessionStoreBuilder(Stores.persistentSessionStore(
spec.getName(), spec.getRetention()), keySerde, valueSerde);
break;
default:
throw new UnsupportedOperationException("state store type (" + spec.getType() + ") is not supported!");
throw new UnsupportedOperationException(
"state store type (" + spec.getType() + ") is not supported!");
}
if (spec.isCacheEnabled()) {
builder = builder.withCachingEnabled();
@@ -385,10 +371,12 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
}
}
private KStream<?, ?> getkStream(String inboundName, KafkaStreamsStateStoreProperties storeSpec,
private KStream<?, ?> getkStream(String inboundName,
KafkaStreamsStateStoreProperties storeSpec,
BindingProperties bindingProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde,
Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (storeSpec != null) {
StoreBuilder storeBuilder = buildStateStore(storeSpec);
streamsBuilder.addStateStore(storeBuilder);
@@ -396,102 +384,18 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
}
String[] bindingTargets = StringUtils
.commaDelimitedListToStringArray(this.bindingServiceProperties.getBindingDestination(inboundName));
KStream<?, ?> stream =
streamsBuilder.stream(Arrays.asList(bindingTargets),
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
final boolean nativeDecoding = this.bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName + ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName + ". Inbound message conversion done by Spring Cloud Stream.");
}
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
return getKStream(inboundName, bindingProperties, kafkaStreamsConsumerProperties, streamsBuilder,
keySerde, valueSerde, autoOffsetReset, firstBuild);
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType, BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class) || parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
//No framework level message conversion provided for KTable/GlobalKTable; it's done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
@SuppressWarnings({"unchecked"})
private void buildStreamsBuilderAndRetrieveConfig(Method method, ApplicationContext applicationContext,
String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext.getBean("streamConfigGlobalProperties", Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
//override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName).getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext.getBean("kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS, kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration, this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition =
BeanDefinitionBuilder.genericBeanDefinition((Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(), () -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("stream-builder-" + method.getName(), streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean("&stream-builder-" + method.getName(), StreamsBuilderFactoryBean.class);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderX);
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
private void validateStreamListenerMethod(StreamListener streamListener, Method method, String[] methodAnnotatedOutboundNames) {
private void validateStreamListenerMethod(StreamListener streamListener,
Method method, String[] methodAnnotatedOutboundNames) {
String methodAnnotatedInboundName = streamListener.value();
if (methodAnnotatedOutboundNames != null) {
for (String s : methodAnnotatedOutboundNames) {
if (StringUtils.hasText(s)) {
Assert.isTrue(isDeclarativeOutput(method, s), "Method must be declarative");
Assert.isTrue(isDeclarativeOutput(method, s),
"Method must be declarative");
}
}
}
@@ -499,8 +403,11 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
int methodArgumentsLength = method.getParameterTypes().length;
for (int parameterIndex = 0; parameterIndex < methodArgumentsLength; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method, parameterIndex);
Assert.isTrue(isDeclarativeInput(methodAnnotatedInboundName, methodParameter), "Method must be declarative");
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Assert.isTrue(
isDeclarativeInput(methodAnnotatedInboundName, methodParameter),
"Method must be declarative");
}
}
}
@@ -512,7 +419,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
if (returnType.isArray()) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
declarative = this.streamListenerResultAdapters.stream()
.anyMatch((slpa) -> slpa.supports(returnType.getComponentType(), targetBeanClass));
.anyMatch((slpa) -> slpa.supports(returnType.getComponentType(),
targetBeanClass));
return declarative;
}
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
@@ -522,10 +430,23 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
}
@SuppressWarnings("unchecked")
private boolean isDeclarativeInput(String targetBeanName, MethodParameter methodParameter) {
if (!methodParameter.getParameterType().isAssignableFrom(Object.class) && this.applicationContext.containsBean(targetBeanName)) {
private boolean isDeclarativeInput(String targetBeanName,
MethodParameter methodParameter) {
if (!methodParameter.getParameterType().isAssignableFrom(Object.class)
&& this.applicationContext.containsBean(targetBeanName)) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
return this.streamListenerParameterAdapter.supports(targetBeanClass, methodParameter);
if (targetBeanClass != null) {
boolean supports = KafkaStreamsBinderUtils.supportsKStream(methodParameter, targetBeanClass);
if (!supports) {
supports = KTable.class.isAssignableFrom(targetBeanClass)
&& KTable.class.isAssignableFrom(methodParameter.getParameterType());
if (!supports) {
supports = GlobalKTable.class.isAssignableFrom(targetBeanClass)
&& GlobalKTable.class.isAssignableFrom(methodParameter.getParameterType());
}
}
return supports;
}
}
return false;
}
@@ -533,30 +454,38 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
private static String[] getOutboundBindingTargetNames(Method method) {
SendTo sendTo = AnnotationUtils.findAnnotation(method, SendTo.class);
if (sendTo != null) {
Assert.isTrue(!ObjectUtils.isEmpty(sendTo.value()), StreamListenerErrorMessages.ATLEAST_ONE_OUTPUT);
Assert.isTrue(sendTo.value().length >= 1, "At least one outbound destination need to be provided.");
Assert.isTrue(!ObjectUtils.isEmpty(sendTo.value()),
StreamListenerErrorMessages.ATLEAST_ONE_OUTPUT);
Assert.isTrue(sendTo.value().length >= 1,
"At least one outbound destination need to be provided.");
return sendTo.value();
}
return null;
}
@SuppressWarnings({"unchecked"})
private static KafkaStreamsStateStoreProperties buildStateStoreSpec(Method method) {
KafkaStreamsStateStore spec = AnnotationUtils.findAnnotation(method, KafkaStreamsStateStore.class);
if (spec != null) {
Assert.isTrue(!ObjectUtils.isEmpty(spec.name()), "name cannot be empty");
Assert.isTrue(spec.name().length() >= 1, "name cannot be empty.");
KafkaStreamsStateStoreProperties props = new KafkaStreamsStateStoreProperties();
props.setName(spec.name());
props.setType(spec.type());
props.setLength(spec.lengthMs());
props.setKeySerdeString(spec.keySerde());
props.setRetention(spec.retentionMs());
props.setValueSerdeString(spec.valueSerde());
props.setCacheEnabled(spec.cache());
props.setLoggingDisabled(!spec.logging());
return props;
private KafkaStreamsStateStoreProperties buildStateStoreSpec(Method method) {
if (!this.registeredStoresPerMethod.containsKey(method)) {
KafkaStreamsStateStore spec = AnnotationUtils.findAnnotation(method,
KafkaStreamsStateStore.class);
if (spec != null) {
Assert.isTrue(!ObjectUtils.isEmpty(spec.name()), "name cannot be empty");
Assert.isTrue(spec.name().length() >= 1, "name cannot be empty.");
this.registeredStoresPerMethod.put(method, new ArrayList<>());
this.registeredStoresPerMethod.get(method).add(spec.name());
KafkaStreamsStateStoreProperties props = new KafkaStreamsStateStoreProperties();
props.setName(spec.name());
props.setType(spec.type());
props.setLength(spec.lengthMs());
props.setKeySerdeString(spec.keySerde());
props.setRetention(spec.retentionMs());
props.setValueSerdeString(spec.valueSerde());
props.setCacheEnabled(spec.cache());
props.setLoggingDisabled(!spec.logging());
return props;
}
}
return null;
}
}
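To make the orchestrator's capabilities concrete (multiple inbound bindings, KStream branching fanned out through @SendTo), here is a minimal illustrative StreamListener; the binding names and predicates are placeholders, not part of the binder:

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

public class BranchingProcessorSketch {

	@StreamListener("input")
	@SendTo({ "output1", "output2" })
	public KStream<String, String>[] process(KStream<String, String> input) {
		// Each predicate routes matching records to the corresponding @SendTo binding,
		// so the two branches are bound to "output1" and "output2" respectively.
		return input.branch(
				(key, value) -> value.startsWith("a"),
				(key, value) -> true);
	}
}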

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,84 +16,135 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.util.ClassUtils;
import org.springframework.util.StringUtils;
/**
* Resolver for key and value Serde.
*
* On the inbound, if native decoding is enabled, then any deserialization on the value is handled by Kafka.
* First, we look for any key/value Serde set on the binding itself, if that is not available then look at the
* common Serde set at the global level. If that fails, it falls back to byte[].
* If native decoding is disabled, then the binder will do the deserialization on value and ignore any Serde set for value
* and rely on the contentType provided. Keys are always deserialized at the broker.
* On the inbound, if native decoding is enabled, then any deserialization on the value is
* handled by Kafka. First, we look for any key/value Serde set on the binding itself, if
* that is not available then look at the common Serde set at the global level. If that
* fails, it falls back to byte[]. If native decoding is disabled, then the binder will do
* the deserialization on value and ignore any Serde set for value and rely on the
* contentType provided. Keys are always deserialized at the broker.
*
*
* Same rules apply on the outbound. If native encoding is enabled, then value serialization is done at the broker using
* any binder level Serde for value, if not using common Serde, if not, then byte[].
* If native encoding is disabled, then the binder will do serialization using a contentType. Keys are always serialized
* by the broker.
* Same rules apply on the outbound. If native encoding is enabled, then value
* serialization is done at the broker using any binder level Serde for value, if not
* using common Serde, if not, then byte[]. If native encoding is disabled, then the
* binder will do serialization using a contentType. Keys are always serialized by the
* broker.
*
* For state store, use serdes class specified in
* {@link org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore} to create Serde accordingly.
* {@link org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore}
* to create Serde accordingly.
*
* @author Soby Chacko
* @author Lei Chen
*/
class KeyValueSerdeResolver {
public class KeyValueSerdeResolver implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(KeyValueSerdeResolver.class);
private final Map<String, Object> streamConfigGlobalProperties;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private ConfigurableApplicationContext context;
KeyValueSerdeResolver(Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.streamConfigGlobalProperties = streamConfigGlobalProperties;
this.binderConfigurationProperties = binderConfigurationProperties;
}
/**
* Provide the {@link Serde} for inbound key.
*
* @param extendedConsumerProperties binding level extended {@link KafkaStreamsConsumerProperties}
* @param extendedConsumerProperties binding level extended
* {@link KafkaStreamsConsumerProperties}
* @return configured {@link Serde} for the inbound key.
*/
public Serde<?> getInboundKeySerde(KafkaStreamsConsumerProperties extendedConsumerProperties) {
public Serde<?> getInboundKeySerde(
KafkaStreamsConsumerProperties extendedConsumerProperties) {
String keySerdeString = extendedConsumerProperties.getKeySerde();
return getKeySerde(keySerdeString);
}
public Serde<?> getInboundKeySerde(
KafkaStreamsConsumerProperties extendedConsumerProperties, ResolvableType resolvableType) {
String keySerdeString = extendedConsumerProperties.getKeySerde();
return getKeySerde(keySerdeString, resolvableType);
}
/**
* Provide the {@link Serde} for inbound value.
*
* @param consumerProperties {@link ConsumerProperties} on binding
* @param extendedConsumerProperties binding level extended {@link KafkaStreamsConsumerProperties}
* @param extendedConsumerProperties binding level extended
* {@link KafkaStreamsConsumerProperties}
* @return configured {@link Serde} for the inbound value.
*/
public Serde<?> getInboundValueSerde(ConsumerProperties consumerProperties, KafkaStreamsConsumerProperties extendedConsumerProperties) {
public Serde<?> getInboundValueSerde(ConsumerProperties consumerProperties,
KafkaStreamsConsumerProperties extendedConsumerProperties) {
Serde<?> valueSerde;
String valueSerdeString = extendedConsumerProperties.getValueSerde();
try {
if (consumerProperties != null &&
consumerProperties.isUseNativeDecoding()) {
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
valueSerde = getValueSerde(valueSerdeString);
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
public Serde<?> getInboundValueSerde(ConsumerProperties consumerProperties,
KafkaStreamsConsumerProperties extendedConsumerProperties,
ResolvableType resolvableType) {
Serde<?> valueSerde;
String valueSerdeString = extendedConsumerProperties.getValueSerde();
try {
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
valueSerde = getValueSerde(valueSerdeString, resolvableType);
}
else {
valueSerde = Serdes.ByteArray();
}
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -103,7 +154,6 @@ class KeyValueSerdeResolver {
/**
* Provide the {@link Serde} for outbound key.
*
* @param properties binding level extended {@link KafkaStreamsProducerProperties}
* @return configured {@link Serde} for the outbound key.
*/
@@ -111,23 +161,47 @@ class KeyValueSerdeResolver {
return getKeySerde(properties.getKeySerde());
}
public Serde<?> getOuboundKeySerde(KafkaStreamsProducerProperties properties, ResolvableType resolvableType) {
return getKeySerde(properties.getKeySerde(), resolvableType);
}
/**
* Provide the {@link Serde} for outbound value.
*
* @param producerProperties {@link ProducerProperties} on binding
* @param kafkaStreamsProducerProperties binding level extended {@link KafkaStreamsProducerProperties}
* @param kafkaStreamsProducerProperties binding level extended
* {@link KafkaStreamsProducerProperties}
* @return configured {@link Serde} for the outbound value.
*/
public Serde<?> getOutboundValueSerde(ProducerProperties producerProperties, KafkaStreamsProducerProperties kafkaStreamsProducerProperties) {
public Serde<?> getOutboundValueSerde(ProducerProperties producerProperties,
KafkaStreamsProducerProperties kafkaStreamsProducerProperties) {
Serde<?> valueSerde;
try {
if (producerProperties.isUseNativeEncoding()) {
valueSerde = getValueSerde(kafkaStreamsProducerProperties.getValueSerde());
valueSerde = getValueSerde(
kafkaStreamsProducerProperties.getValueSerde());
}
else {
valueSerde = Serdes.ByteArray();
}
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
public Serde<?> getOutboundValueSerde(ProducerProperties producerProperties,
KafkaStreamsProducerProperties kafkaStreamsProducerProperties, ResolvableType resolvableType) {
Serde<?> valueSerde;
try {
if (producerProperties.isUseNativeEncoding()) {
valueSerde = getValueSerde(
kafkaStreamsProducerProperties.getValueSerde(), resolvableType);
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -137,7 +211,6 @@ class KeyValueSerdeResolver {
/**
* Provide the {@link Serde} for state store.
*
* @param keySerdeString serde class used for key
* @return {@link Serde} for the state store key.
*/
@@ -147,7 +220,6 @@ class KeyValueSerdeResolver {
/**
* Provide the {@link Serde} for state store value.
*
* @param valueSerdeString serde class used for value
* @return {@link Serde} for the state store value.
*/
@@ -167,8 +239,7 @@ class KeyValueSerdeResolver {
keySerde = Utils.newInstance(keySerdeString, Serde.class);
}
else {
keySerde = this.binderConfigurationProperties.getConfiguration().containsKey("default.key.serde") ?
Utils.newInstance(this.binderConfigurationProperties.getConfiguration().get("default.key.serde"), Serde.class) : Serdes.ByteArray();
keySerde = getFallbackSerde("default.key.serde");
}
keySerde.configure(this.streamConfigGlobalProperties, true);
@@ -179,15 +250,184 @@ class KeyValueSerdeResolver {
return keySerde;
}
private Serde<?> getValueSerde(String valueSerdeString) throws ClassNotFoundException {
private Serde<?> getKeySerde(String keySerdeString, ResolvableType resolvableType) {
Serde<?> keySerde = null;
try {
if (StringUtils.hasText(keySerdeString)) {
keySerde = Utils.newInstance(keySerdeString, Serde.class);
}
else {
if (resolvableType != null &&
(isResolvalbeKafkaStreamsType(resolvableType) || isResolvableKStreamArrayType(resolvableType))) {
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(0) : resolvableType.getGeneric(0);
Serde<?> fallbackSerde = getFallbackSerde("default.key.serde");
keySerde = getSerde(generic, fallbackSerde);
}
if (keySerde == null) {
keySerde = Serdes.ByteArray();
}
}
keySerde.configure(this.streamConfigGlobalProperties, true);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return keySerde;
}
private boolean isResolvableKStreamArrayType(ResolvableType resolvableType) {
return resolvableType.isArray() &&
KStream.class.isAssignableFrom(resolvableType.getComponentType().getRawClass());
}
private boolean isResolvalbeKafkaStreamsType(ResolvableType resolvableType) {
return resolvableType.getRawClass() != null && (KStream.class.isAssignableFrom(resolvableType.getRawClass()) || KTable.class.isAssignableFrom(resolvableType.getRawClass()) ||
GlobalKTable.class.isAssignableFrom(resolvableType.getRawClass()));
}
private Serde<?> getSerde(ResolvableType generic, Serde<?> fallbackSerde) {
Serde<?> serde = null;
Map<String, Serde> beansOfType = context.getBeansOfType(Serde.class);
Serde<?>[] serdeBeans = new Serde<?>[1];
final Class<?> genericRawClazz = generic.getRawClass();
beansOfType.forEach((k, v) -> {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(k))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method[] methods = classObj.getMethods();
Optional<Method> serdeBeanMethod = Arrays.stream(methods).filter(m -> m.getName().equals(k)).findFirst();
if (serdeBeanMethod.isPresent()) {
Method method = serdeBeanMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
ResolvableType serdeBeanGeneric = resolvableType.getGeneric(0);
Class<?> serdeGenericRawClazz = serdeBeanGeneric.getRawClass();
if (serdeGenericRawClazz != null && genericRawClazz != null) {
if (serdeGenericRawClazz.isAssignableFrom(genericRawClazz)) {
serdeBeans[0] = v;
}
}
}
}
catch (Exception e) {
// Pass through...
}
});
if (serdeBeans[0] != null) {
return serdeBeans[0];
}
if (genericRawClazz != null) {
if (Integer.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Integer();
}
else if (Long.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Long();
}
else if (Short.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Short();
}
else if (Double.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Double();
}
else if (Float.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Float();
}
else if (byte[].class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.ByteArray();
}
else if (String.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.String();
}
else if (UUID.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.UUID();
}
else if (!isSerdeFromStandardDefaults(fallbackSerde)) {
//User purposely set a default serde that is not one of the above
serde = fallbackSerde;
}
else {
// If the type is Object, then skip assigning the JsonSerde and let the fallback mechanism take precedence.
if (!genericRawClazz.isAssignableFrom((Object.class))) {
serde = new JsonSerde(genericRawClazz);
}
}
}
return serde;
}
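	// Hypothetical illustration (not part of the resolver): for a binding typed as
	// KStream<String, Long>, generic(0) resolves to String and generic(1) to Long, so the
	// inference above picks Serdes.String() for the key and Serdes.Long() for the value.
	// A domain type such as Order with no matching Serde bean would fall back to a
	// JsonSerde of that type, while an application-provided Serde bean for Order would win.
	private static void serdeInferenceIllustration() {
		ResolvableType streamType = ResolvableType.forClassWithGenerics(KStream.class, String.class, Long.class);
		Class<?> keyType = streamType.getGeneric(0).getRawClass();   // String -> Serdes.String()
		Class<?> valueType = streamType.getGeneric(1).getRawClass(); // Long   -> Serdes.Long()
	}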
private boolean isSerdeFromStandardDefaults(Serde<?> serde) {
if (serde != null) {
if (Number.class.isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.ByteArray().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.String().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.UUID().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
}
return false;
}
private Serde<?> getValueSerde(String valueSerdeString)
throws ClassNotFoundException {
Serde<?> valueSerde;
if (StringUtils.hasText(valueSerdeString)) {
valueSerde = Utils.newInstance(valueSerdeString, Serde.class);
}
else {
valueSerde = this.binderConfigurationProperties.getConfiguration().containsKey("default.value.serde") ?
Utils.newInstance(this.binderConfigurationProperties.getConfiguration().get("default.value.serde"), Serde.class) : Serdes.ByteArray();
valueSerde = getFallbackSerde("default.value.serde");
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
return valueSerde;
}
private Serde<?> getFallbackSerde(String s) throws ClassNotFoundException {
return this.binderConfigurationProperties.getConfiguration()
.containsKey(s)
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get(s),
Serde.class)
: Serdes.ByteArray();
}
@SuppressWarnings("unchecked")
private Serde<?> getValueSerde(String valueSerdeString, ResolvableType resolvableType)
throws ClassNotFoundException {
Serde<?> valueSerde = null;
if (StringUtils.hasText(valueSerdeString)) {
valueSerde = Utils.newInstance(valueSerdeString, Serde.class);
}
else {
if (resolvableType != null && ((isResolvalbeKafkaStreamsType(resolvableType)) ||
(isResolvableKStreamArrayType(resolvableType)))) {
Serde<?> fallbackSerde = getFallbackSerde("default.value.serde");
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(1) : resolvableType.getGeneric(1);
valueSerde = getSerde(generic, fallbackSerde);
}
if (valueSerde == null) {
valueSerde = Serdes.ByteArray();
}
}
valueSerde.configure(streamConfigGlobalProperties, false);
return valueSerde;
}
@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
context = (ConfigurableApplicationContext) applicationContext;
}
}
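
The Serde inference above first looks for a user-provided Serde bean whose generic type matches the binding's key or value type, then falls back to the standard Serdes for well-known types, then to a user-configured default.key.serde/default.value.serde, and finally to a JsonSerde for any remaining non-Object type. A minimal sketch of a Serde bean the resolver could pick up by type matching; the Order type and the bean name are hypothetical and not part of this changeset:

import org.apache.kafka.common.serialization.Serde;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.serializer.JsonSerde;

@Configuration
public class OrderSerdeConfig {

    // Order is an assumed domain type. Because this @Bean method's return type is Serde<Order>,
    // the resolver above can match it against a binding whose target type is KStream<?, Order>
    // and use it instead of the standard or JSON fallbacks.
    @Bean
    public Serde<Order> orderSerde() {
        return new JsonSerde<>(Order.class);
    }
}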


@@ -0,0 +1,40 @@
/*
* Copyright 2019-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 3.0.2
*/
@Configuration
public class MultiBinderPropertiesConfiguration {
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
@ConditionalOnBean(name = "outerContext")
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
}


@@ -1,65 +0,0 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreType;
/**
* Registry that contains {@link QueryableStoreType}s those created from
* the user applications.
*
* @author Soby Chacko
* @author Renwei Han
* @since 2.0.0
* @deprecated in favor of {@link InteractiveQueryService}
*/
public class QueryableStoreRegistry {
private final KafkaStreamsRegistry kafkaStreamsRegistry;
public QueryableStoreRegistry(KafkaStreamsRegistry kafkaStreamsRegistry) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
/**
* Retrieve and return a queryable store by name created in the application.
*
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
* @deprecated in favor of {@link InteractiveQueryService#getQueryableStore(String, QueryableStoreType)}
*/
public <T> T getQueryableStoreType(String storeName, QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
}
}
catch (InvalidStateStoreException ignored) {
//pass through
}
}
return null;
}
}
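
The removed QueryableStoreRegistry points to InteractiveQueryService as its replacement. A hedged sketch of the recommended lookup, assuming a key-value state store named "prod-counts" exists in the application's topology (the store name and value types are illustrative):

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.stereotype.Component;

@Component
public class ProductCountQueries {

    @Autowired
    private InteractiveQueryService interactiveQueryService;

    public Long countFor(String productId) {
        // Retries according to the binder's state-store retry settings until the store is queryable.
        ReadOnlyKeyValueStore<String, Long> store = interactiveQueryService
                .getQueryableStore("prod-counts", QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(productId);
    }
}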


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,99 +16,48 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.internals.ProcessorContextImpl;
import org.apache.kafka.streams.processor.internals.StreamTask;
import org.springframework.util.ReflectionUtils;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
/**
* Custom implementation for {@link DeserializationExceptionHandler} that sends the records
* in error to a DLQ topic, then continue stream processing on new records.
* Custom implementation for {@link ConsumerRecordRecoverer} that keeps a collection of
recoverer objects per input topic. These topics might be per input binding or multiplexed
* topics in a single binding.
*
* @author Soby Chacko
* @since 2.0.0
*/
public class SendToDlqAndContinue implements DeserializationExceptionHandler {
public class SendToDlqAndContinue implements ConsumerRecordRecoverer {
/**
* Key used for DLQ dispatchers.
* DLQ dispatcher per topic in the application context. The key here is not the actual
* DLQ topic but the incoming topic that caused the error.
*/
public static final String KAFKA_STREAMS_DLQ_DISPATCHERS = "spring.cloud.stream.kafka.streams.dlq.dispatchers";
/**
* DLQ dispatcher per topic in the application context. The key here is not the actual DLQ topic
* but the incoming topic that caused the error.
*/
private Map<String, KafkaStreamsDlqDispatch> dlqDispatchers = new HashMap<>();
private Map<String, DeadLetterPublishingRecoverer> dlqDispatchers = new HashMap<>();
/**
* For a given topic, send the key/value record to DLQ topic.
*
* @param topic incoming topic that caused the error
* @param key to send
* @param value to send
* @param partition for the topic where this record should be sent
* @param consumerRecord consumer record
* @param exception exception
*/
public void sendToDlq(String topic, byte[] key, byte[] value, int partition) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers.get(topic);
kafkaStreamsDlqDispatch.sendToDlq(key, value, partition);
public void sendToDlq(ConsumerRecord<?, ?> consumerRecord, Exception exception) {
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch = this.dlqDispatchers.get(consumerRecord.topic());
kafkaStreamsDlqDispatch.accept(consumerRecord, exception);
}
@Override
@SuppressWarnings("unchecked")
public DeserializationHandlerResponse handle(ProcessorContext context, ConsumerRecord<byte[], byte[]> record, Exception exception) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers.get(record.topic());
kafkaStreamsDlqDispatch.sendToDlq(record.key(), record.value(), record.partition());
context.commit();
// The following conditional block should be reconsidered when we have a solution for this SO problem:
// https://stackoverflow.com/questions/48470899/kafka-streams-deserialization-handler
// Currently it seems like when deserialization error happens, there is no commits happening and the
// following code will use reflection to get access to the underlying KafkaConsumer.
// It works with Kafka 1.0.0, but there is no guarantee it will work in future versions of kafka as
// we access private fields by name using reflection, but it is a temporary fix.
if (context instanceof ProcessorContextImpl) {
ProcessorContextImpl processorContextImpl = (ProcessorContextImpl) context;
Field task = ReflectionUtils.findField(ProcessorContextImpl.class, "task");
ReflectionUtils.makeAccessible(task);
Object taskField = ReflectionUtils.getField(task, processorContextImpl);
if (taskField.getClass().isAssignableFrom(StreamTask.class)) {
StreamTask streamTask = (StreamTask) taskField;
Field consumer = ReflectionUtils.findField(StreamTask.class, "consumer");
ReflectionUtils.makeAccessible(consumer);
Object kafkaConsumerField = ReflectionUtils.getField(consumer, streamTask);
if (kafkaConsumerField.getClass().isAssignableFrom(KafkaConsumer.class)) {
KafkaConsumer kafkaConsumer = (KafkaConsumer) kafkaConsumerField;
final Map<TopicPartition, OffsetAndMetadata> consumedOffsetsAndMetadata = new HashMap<>();
TopicPartition tp = new TopicPartition(record.topic(), record.partition());
OffsetAndMetadata oam = new OffsetAndMetadata(record.offset() + 1);
consumedOffsetsAndMetadata.put(tp, oam);
kafkaConsumer.commitSync(consumedOffsetsAndMetadata);
}
}
}
return DeserializationHandlerResponse.CONTINUE;
}
@Override
@SuppressWarnings("unchecked")
public void configure(Map<String, ?> configs) {
this.dlqDispatchers = (Map<String, KafkaStreamsDlqDispatch>) configs.get(KAFKA_STREAMS_DLQ_DISPATCHERS);
}
void addKStreamDlqDispatch(String topic, KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch) {
void addKStreamDlqDispatch(String topic,
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch) {
this.dlqDispatchers.put(topic, kafkaStreamsDlqDispatch);
}
@Override
public void accept(ConsumerRecord<?, ?> consumerRecord, Exception e) {
this.dlqDispatchers.get(consumerRecord.topic()).accept(consumerRecord, e);
}
}
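
SendToDlqAndContinue now keys a spring-kafka DeadLetterPublishingRecoverer per incoming topic and simply delegates to it; the binder registers these recoverers when a consumer binding opts into the sendToDlq deserialization exception handler. A minimal sketch of such a recoverer, with a hypothetical topic name and an assumed KafkaOperations template (applications normally never build this by hand):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;

public class OrdersDlqRecovererSketch {

    // Routes every failed record from the hypothetical "orders" input topic to "orders-dlq",
    // keeping the original partition; SendToDlqAndContinue would hold this under the key "orders".
    public DeadLetterPublishingRecoverer ordersDlqRecoverer(KafkaOperations<byte[], byte[]> template) {
        return new DeadLetterPublishingRecoverer(template,
                (ConsumerRecord<?, ?> record, Exception ex) -> new TopicPartition("orders-dlq", record.partition()));
    }
}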


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -23,13 +23,13 @@ import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Iterate through all {@link StreamsBuilderFactoryBean} in the application context
* and start them. As each one completes starting, register the associated KafkaStreams
* object into {@link QueryableStoreRegistry}.
* Iterate through all {@link StreamsBuilderFactoryBean} in the application context and
* start them. As each one completes starting, register the associated KafkaStreams object
* into {@link InteractiveQueryService}.
*
* This {@link SmartLifecycle} class ensures that the bean created from it is started very late
* through the bootstrap process by setting the phase value closer to Integer.MAX_VALUE.
* This is to guarantee that the {@link StreamsBuilderFactoryBean} on a
* This {@link SmartLifecycle} class ensures that the bean created from it is started very
* late through the bootstrap process by setting the phase value closer to
* Integer.MAX_VALUE. This is to guarantee that the {@link StreamsBuilderFactoryBean} on a
* {@link org.springframework.cloud.stream.annotation.StreamListener} method with multiple
* bindings is only started after all the binding phases have completed successfully.
*
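
The phase trick described above can be illustrated with a bare SmartLifecycle: reporting a phase close to Integer.MAX_VALUE makes Spring start the bean after lower-phase beans, such as the binding infrastructure, are already up. A minimal sketch, with an assumed phase offset:

import org.springframework.context.SmartLifecycle;

class LatePhaseLifecycle implements SmartLifecycle {

    private volatile boolean running;

    @Override
    public void start() {
        // start Kafka Streams processing here, after all lower-phase beans have started
        this.running = true;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public int getPhase() {
        // the offset is an assumption for this sketch; the manager uses a value near Integer.MAX_VALUE
        return Integer.MAX_VALUE - 100;
    }
}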
@@ -38,14 +38,23 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics;
private final StreamsListener listener;
private volatile boolean running;
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
StreamsListener listener) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
this.listener = listener;
}
@Override
@@ -65,10 +74,18 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
public synchronized void start() {
if (!this.running) {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans();
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
int n = 0;
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean.getKafkaStreams());
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
if (this.listener != null) {
this.listener.streamsAdded("streams." + n++, streamsBuilderFactoryBean.getKafkaStreams());
}
}
if (this.kafkaStreamsBinderMetrics != null) {
this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
}
this.running = true;
}
@@ -79,21 +96,26 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
}
@Override
public synchronized void stop() {
if (this.running) {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans();
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.stop();
public synchronized void stop() {
if (this.running) {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
int n = 0;
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.stop();
if (this.listener != null) {
this.listener.streamsRemoved("streams." + n++, streamsBuilderFactoryBean.getKafkaStreams());
}
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
finally {
this.running = false;
}
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
finally {
this.running = false;
}
}
}
@Override


@@ -0,0 +1,46 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.KafkaStreams;
/**
* Temporary workaround until SK 2.5.3 is available.
*
* @author Gary Russell
* @since 3.0.6
*
*/
interface StreamsListener {
/**
* A new {@link KafkaStreams} was created.
* @param id the streams id (factory bean name).
* @param streams the streams;
*/
default void streamsAdded(String id, KafkaStreams streams) {
}
/**
* An existing {@link KafkaStreams} was removed.
* @param id the streams id (factory bean name).
* @param streams the streams;
*/
default void streamsRemoved(String id, KafkaStreams streams) {
}
}
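
A hedged sketch of what an implementation of this callback could look like, simply logging the events; StreamsListener is package-private, so this is only meant to illustrate the callback shape, and the class name is hypothetical:

import org.apache.kafka.streams.KafkaStreams;

class LoggingStreamsListener implements StreamsListener {

    @Override
    public void streamsAdded(String id, KafkaStreams streams) {
        System.out.println("KafkaStreams added: " + id + " (state " + streams.state() + ")");
    }

    @Override
    public void streamsRemoved(String id, KafkaStreams streams) {
        System.out.println("KafkaStreams removed: " + id);
    }
}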


@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -24,12 +24,14 @@ import org.springframework.cloud.stream.annotation.Output;
/**
* Bindable interface for {@link KStream} input and output.
*
* This interface can be used as a bindable interface with {@link org.springframework.cloud.stream.annotation.EnableBinding}
* when both input and output types are single KStream. In other scenarios where multiple types are required, other
* similar bindable interfaces can be created and used. For example, there are cases in which multiple KStreams
* are required on the outbound in the case of KStream branching or multiple input types are required either in the
* form of multiple KStreams and a combination of KStreams and KTables. In those cases, new bindable interfaces compatible
* with the requirements must be created. Here are some examples.
* This interface can be used as a bindable interface with
* {@link org.springframework.cloud.stream.annotation.EnableBinding} when both input and
* output types are single KStream. In other scenarios where multiple types are required,
* other similar bindable interfaces can be created and used. For example, there are cases
* in which multiple KStreams are required on the outbound in the case of KStream
* branching or multiple input types are required either in the form of multiple KStreams
* and a combination of KStreams and KTables. In those cases, new bindable interfaces
* compatible with the requirements must be created. Here are some examples.
*
* <pre class="code">
* interface KStreamBranchProcessor {
@@ -73,7 +75,6 @@ public interface KafkaStreamsProcessor {
/**
* Input binding.
*
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input")
@@ -81,9 +82,9 @@ public interface KafkaStreamsProcessor {
/**
* Output binding.
*
* @return {@link Output} binding for {@link KStream} type.
*/
@Output("output")
KStream<?, ?> output();
}
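
Following the javadoc above, a hypothetical custom bindable interface that combines a KStream input, a KTable input and a KStream output might look like this (the binding names are illustrative):

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;

interface KStreamKTableProcessor {

    @Input("input")
    KStream<?, ?> input();

    @Input("inputTable")
    KTable<?, ?> inputTable();

    @Output("output")
    KStream<?, ?> output();
}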


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,7 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams.annotations;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
@@ -27,21 +26,23 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
/**
* Interface for Kafka Stream state store.
*
* This interface can be used to inject a state store specification into KStream building process so
* that the desired store can be built by StreamBuilder and added to topology for later use by processors.
* This is particularly useful when need to combine stream DSL with low level processor APIs. In those cases,
* if a writable state store is desired in processors, it needs to be created using this annotation.
* Here is the example.
* This interface can be used to inject a state store specification into KStream building
* process so that the desired store can be built by StreamBuilder and added to topology
for later use by processors. This is particularly useful when you need to combine the stream
DSL with low level processor APIs. In those cases, if a writable state store is desired
in processors, it needs to be created using this annotation. Here is an example.
*
* <pre class="code">
* &#064;StreamListener("input")
* &#064;KafkaStreamsStateStore(name="mystate", type= KafkaStreamsStateStoreProperties.StoreType.WINDOW, size=300000)
* &#064;KafkaStreamsStateStore(name="mystate", type= KafkaStreamsStateStoreProperties.StoreType.WINDOW,
* size=300000)
* public void process(KStream&lt;Object, Product&gt; input) {
* ......
* }
* </pre>
*
* With that, you should be able to read/write this state store in your processor/transformer code.
* With that, you should be able to read/write this state store in your
* processor/transformer code.
*
* <pre class="code">
* new Processor&lt;Object, Product&gt;() {
@@ -64,57 +65,51 @@ public @interface KafkaStreamsStateStore {
/**
* Provides name of the state store.
*
* @return name of state store.
*/
String name() default "";
/**
* State store type.
*
* @return {@link KafkaStreamsStateStoreProperties.StoreType} of state store.
*/
KafkaStreamsStateStoreProperties.StoreType type() default KafkaStreamsStateStoreProperties.StoreType.KEYVALUE;
/**
* Serde used for key.
*
* @return key serde of state store.
*/
String keySerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Serde used for value.
*
* @return value serde of state store.
*/
String valueSerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Length in milli-second of Windowed store window.
*
* @return length in milli-second of window(for windowed store).
*/
long lengthMs() default 0;
/**
* Retention period for Windowed store windows.
*
* @return the maximum period of time in milli-second to keep each window in this store(for windowed store).
* @return the maximum period of time in milli-second to keep each window in this
* store(for windowed store).
*/
long retentionMs() default 0;
/**
* Whether caching is enabled or not.
*
* @return whether caching should be enabled on the created store.
*/
boolean cache() default false;
/**
* Whether logging is enabled or not.
*
* @return whether logging should be enabled on the created store.
*/
boolean logging() default true;
}


@@ -0,0 +1,73 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.endpoint;
import java.util.ArrayList;
import java.util.List;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.endpoint.annotation.Selector;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.util.StringUtils;
/**
* Actuator endpoint for topology description.
*
* @author Soby Chacko
* @since 3.0.4
*/
@Endpoint(id = "kafkastreamstopology")
public class KafkaStreamsTopologyEndpoint {
/**
* Topology not found message.
*/
public static final String NO_TOPOLOGY_FOUND_MSG = "No topology found for the given application ID";
private final KafkaStreamsRegistry kafkaStreamsRegistry;
public KafkaStreamsTopologyEndpoint(KafkaStreamsRegistry kafkaStreamsRegistry) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@ReadOperation
public List<String> kafkaStreamsTopologies() {
final List<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsRegistry.streamsBuilderFactoryBeans();
final StringBuilder topologyDescription = new StringBuilder();
final List<String> descs = new ArrayList<>();
streamsBuilderFactoryBeans.stream()
.forEach(streamsBuilderFactoryBean ->
descs.add(streamsBuilderFactoryBean.getTopology().describe().toString()));
return descs;
}
@ReadOperation
public String kafkaStreamsTopology(@Selector String applicationId) {
if (!StringUtils.isEmpty(applicationId)) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamsBuilderFactoryBean(applicationId);
if (streamsBuilderFactoryBean != null) {
return streamsBuilderFactoryBean.getTopology().describe().toString();
}
else {
return NO_TOPOLOGY_FOUND_MSG;
}
}
return NO_TOPOLOGY_FOUND_MSG;
}
}
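
When the actuator web endpoints are exposed, this is served under the kafkastreamstopology id; the endpoint can also be used as a plain bean, for example from a test. A minimal sketch, assuming the KafkaStreamsRegistry created by the binder and a hypothetical application id:

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;

class TopologyPrinter {

    @Autowired
    private KafkaStreamsRegistry kafkaStreamsRegistry;

    void printTopologies() {
        KafkaStreamsTopologyEndpoint endpoint = new KafkaStreamsTopologyEndpoint(this.kafkaStreamsRegistry);
        // All topology descriptions registered in this application.
        List<String> descriptions = endpoint.kafkaStreamsTopologies();
        descriptions.forEach(System.out::println);
        // Topology of a single processor; the application id below is hypothetical.
        System.out.println(endpoint.kafkaStreamsTopology("my-word-count-application"));
    }
}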


@@ -0,0 +1,43 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.endpoint;
import org.springframework.boot.actuate.autoconfigure.endpoint.EndpointAutoConfiguration;
import org.springframework.boot.actuate.autoconfigure.endpoint.condition.ConditionalOnAvailableEndpoint;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 3.0.4
*/
@Configuration
@ConditionalOnClass(name = {
"org.springframework.boot.actuate.endpoint.annotation.Endpoint" })
@AutoConfigureAfter({EndpointAutoConfiguration.class, KafkaStreamsBinderSupportAutoConfiguration.class})
public class KafkaStreamsTopologyEndpointAutoConfiguration {
@Bean
@ConditionalOnAvailableEndpoint
public KafkaStreamsTopologyEndpoint topologyEndpoint(KafkaStreamsRegistry kafkaStreamsRegistry) {
return new KafkaStreamsTopologyEndpoint(kafkaStreamsRegistry);
}
}


@@ -0,0 +1,111 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.factory.BeanFactoryUtils;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.util.ClassUtils;
import org.springframework.util.CollectionUtils;
/**
* Custom {@link org.springframework.context.annotation.Condition} that detects the presence
* of java.util.function.Function/Consumer beans. Used for Kafka Streams function support.
*
* @author Soby Chacko
* @since 2.2.0
*/
public class FunctionDetectorCondition extends SpringBootCondition {
private static final Log LOG = LogFactory.getLog(FunctionDetectorCondition.class);
@SuppressWarnings({ "unchecked", "rawtypes" })
@Override
public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
if (context != null && context.getBeanFactory() != null) {
String[] functionTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), Function.class, true, false);
String[] consumerTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), Consumer.class, true, false);
String[] biFunctionTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), BiFunction.class, true, false);
String[] biConsumerTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), BiConsumer.class, true, false);
List<String> functionComponents = new ArrayList<>();
functionComponents.addAll(Arrays.asList(functionTypes));
functionComponents.addAll(Arrays.asList(consumerTypes));
functionComponents.addAll(Arrays.asList(biFunctionTypes));
functionComponents.addAll(Arrays.asList(biConsumerTypes));
List<String> kafkaStreamsFunctions = pruneFunctionBeansForKafkaStreams(functionComponents, context);
if (!CollectionUtils.isEmpty(kafkaStreamsFunctions)) {
return ConditionOutcome.match("Matched. Function/BiFunction/Consumer beans found");
}
else {
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
}
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
private static List<String> pruneFunctionBeansForKafkaStreams(List<String> strings,
ConditionContext context) {
final List<String> prunedList = new ArrayList<>();
for (String key : strings) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method[] methods = classObj.getMethods();
Optional<Method> kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
if (kafkaStreamMethod.isPresent()) {
Method method = kafkaStreamMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedList.add(key);
}
}
}
catch (Exception e) {
LOG.error("Function not found: " + key, e);
}
}
return prunedList;
}
}
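
The condition matches only when at least one Function/BiFunction/Consumer/BiConsumer bean takes a KStream, KTable or GlobalKTable as its first generic parameter. A sketch of a bean that would satisfy it (the word-count logic and names are illustrative):

import java.util.Arrays;
import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WordCountConfig {

    // The first generic parameter is KStream, so the condition above reports a match and the
    // Kafka Streams function support is activated for this bean; topics are wired via bindings.
    @Bean
    public Function<KStream<String, String>, KStream<String, Long>> process() {
        return input -> input
                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
                .groupBy((key, word) -> word)
                .count()
                .toStream();
    }
}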


@@ -0,0 +1,236 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binding.AbstractBindableProxyFactory;
import org.springframework.cloud.stream.binding.BoundTargetHolder;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
/**
* Kafka Streams specific target bindings proxy factory. See {@link AbstractBindableProxyFactory} for more details.
*
* Targets bound by this factory:
*
* {@link KStream}
* {@link KTable}
* {@link GlobalKTable}
*
* This class looks at the Function bean's return signature as {@link ResolvableType} and introspects the individual types,
* binding them on the way.
*
* All types on the {@link ResolvableType} are bound except for KStream[] array types on the outbound, which are
* deferred for binding at a later stage. The reason is that, at this point, we have no way to know
* the actual size of the returned array; that has to wait until the function is invoked and we get a result.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory implements InitializingBean, BeanFactoryAware {
@Autowired
private StreamFunctionProperties streamFunctionProperties;
private final ResolvableType type;
private final String functionName;
private BeanFactory beanFactory;
public KafkaStreamsBindableProxyFactory(ResolvableType type, String functionName) {
super(type.getType().getClass());
this.type = type;
this.functionName = functionName;
}
@Override
public void afterPropertiesSet() {
Assert.notEmpty(KafkaStreamsBindableProxyFactory.this.bindingTargetFactories,
"'bindingTargetFactories' cannot be empty");
int resolvableTypeDepthCounter = 0;
ResolvableType argument = this.type.getGeneric(resolvableTypeDepthCounter++);
List<String> inputBindings = buildInputBindings();
Iterator<String> iterator = inputBindings.iterator();
String next = iterator.next();
bindInput(argument, next);
if (this.type.getRawClass() != null &&
(this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
this.type.getRawClass().isAssignableFrom(BiConsumer.class))) {
argument = this.type.getGeneric(resolvableTypeDepthCounter++);
next = iterator.next();
bindInput(argument, next);
}
ResolvableType outboundArgument = this.type.getGeneric(resolvableTypeDepthCounter);
while (isAnotherFunctionOrConsumerFound(outboundArgument)) {
//The function is a curried function. We should introspect the partial function chain hierarchy.
argument = outboundArgument.getGeneric(0);
String next1 = iterator.next();
bindInput(argument, next1);
outboundArgument = outboundArgument.getGeneric(1);
}
//Introspect output for binding.
if (outboundArgument != null && outboundArgument.getRawClass() != null && (!outboundArgument.isArray() &&
outboundArgument.getRawClass().isAssignableFrom(KStream.class))) {
// if the type is array, we need to do a late binding as we don't know the number of
// output bindings at this point in the flow.
List<String> outputBindings = streamFunctionProperties.getOutputBindings(this.functionName);
String outputBinding = null;
if (!CollectionUtils.isEmpty(outputBindings)) {
Iterator<String> outputBindingsIter = outputBindings.iterator();
if (outputBindingsIter.hasNext()) {
outputBinding = outputBindingsIter.next();
}
}
else {
outputBinding = String.format("%s-%s-0", this.functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX);
}
Assert.isTrue(outputBinding != null, "output binding is not inferred.");
KafkaStreamsBindableProxyFactory.this.outputHolders.put(outputBinding,
new BoundTargetHolder(getBindingTargetFactory(KStream.class)
.createOutput(outputBinding), true));
String outputBinding1 = outputBinding;
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> outputHolders.get(outputBinding1).getBoundTarget());
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
registry.registerBeanDefinition(outputBinding1, rootBeanDefinition1);
}
}
private boolean isAnotherFunctionOrConsumerFound(ResolvableType arg1) {
return arg1 != null && !arg1.isArray() && arg1.getRawClass() != null &&
(arg1.getRawClass().isAssignableFrom(Function.class) || arg1.getRawClass().isAssignableFrom(Consumer.class));
}
/**
* If the application provides the property spring.cloud.stream.function.inputBindings.functionName,
* that gets precedence. Otherwise, use functionName-input or functionName-input-0, functionName-input-1 and so on
* for multiple inputs.
*
* @return an ordered collection of input bindings to use
*/
private List<String> buildInputBindings() {
List<String> inputs = new ArrayList<>();
List<String> inputBindings = streamFunctionProperties.getInputBindings(this.functionName);
if (!CollectionUtils.isEmpty(inputBindings)) {
inputs.addAll(inputBindings);
return inputs;
}
int numberOfInputs = this.type.getRawClass() != null &&
(this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
this.type.getRawClass().isAssignableFrom(BiConsumer.class)) ? 2 : getNumberOfInputs();
int i = 0;
while (i < numberOfInputs) {
inputs.add(String.format("%s-%s-%d", this.functionName, FunctionConstants.DEFAULT_INPUT_SUFFIX, i++));
}
return inputs;
}
private int getNumberOfInputs() {
int numberOfInputs = 1;
ResolvableType arg1 = this.type.getGeneric(1);
while (isAnotherFunctionOrConsumerFound(arg1)) {
arg1 = arg1.getGeneric(1);
numberOfInputs++;
}
return numberOfInputs;
}
private void bindInput(ResolvableType arg0, String inputName) {
if (arg0.getRawClass() != null) {
KafkaStreamsBindableProxyFactory.this.inputHolders.put(inputName,
new BoundTargetHolder(getBindingTargetFactory(arg0.getRawClass())
.createInput(inputName), true));
}
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition();
rootBeanDefinition.setInstanceSupplier(() -> inputHolders.get(inputName).getBoundTarget());
registry.registerBeanDefinition(inputName, rootBeanDefinition);
}
@Override
public Set<String> getInputs() {
Set<String> ins = new LinkedHashSet<>();
this.inputHolders.forEach((s, BoundTargetHolder) -> ins.add(s));
return ins;
}
@Override
public Set<String> getOutputs() {
Set<String> outs = new LinkedHashSet<>();
this.outputHolders.forEach((s, BoundTargetHolder) -> outs.add(s));
return outs;
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
public void addOutputBinding(String output, Class<?> clazz) {
KafkaStreamsBindableProxyFactory.this.outputHolders.put(output,
new BoundTargetHolder(getBindingTargetFactory(clazz)
.createOutput(output), true));
}
public String getFunctionName() {
return functionName;
}
public Map<String, BoundTargetHolder> getOutputHolders() {
return outputHolders;
}
}
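
Put together, the factory derives one input binding per KStream/KTable/GlobalKTable parameter (including BiFunction/BiConsumer and curried forms) and one output binding for a non-array KStream return type, defaulting to functionName-in-n and functionName-out-0 unless overridden through the function binding properties. A sketch of a BiFunction that would yield two inputs and one output under the default naming (types and names are illustrative):

import java.util.function.BiFunction;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EnrichmentConfig {

    // Expected bindings with the default naming: enrich-in-0 (KStream), enrich-in-1 (KTable)
    // and enrich-out-0 (KStream); the names can be overridden via the function binding properties.
    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, String>> enrich() {
        return (orders, customers) -> orders
                .join(customers, (amount, customerName) -> customerName + " spent " + amount);
    }
}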


@@ -0,0 +1,49 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 2.2.0
*/
@Configuration
@EnableConfigurationProperties(StreamFunctionProperties.class)
public class KafkaStreamsFunctionAutoConfiguration {
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessorInvoker kafkaStreamsFunctionProcessorInvoker(
KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories) {
return new KafkaStreamsFunctionProcessorInvoker(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes(),
kafkaStreamsFunctionProcessor, kafkaStreamsBindableProxyFactories);
}
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
return new KafkaStreamsFunctionBeanPostProcessor(streamFunctionProperties);
}
}


@@ -0,0 +1,142 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.TreeMap;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.ClassUtils;
/**
*
* @author Soby Chacko
* @since 2.2.0
*
*/
public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionBeanPostProcessor.class);
private static final String[] EXCLUDE_FUNCTIONS = new String[]{"functionRouter", "sendToDlqAndContinue"};
private ConfigurableListableBeanFactory beanFactory;
private boolean onlySingleFunction;
private Map<String, ResolvableType> resolvableTypeMap = new TreeMap<>();
private final StreamFunctionProperties streamFunctionProperties;
public KafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
this.streamFunctionProperties = streamFunctionProperties;
}
public Map<String, ResolvableType> getResolvableTypes() {
return this.resolvableTypeMap;
}
@Override
public void afterPropertiesSet() {
String[] functionNames = this.beanFactory.getBeanNamesForType(Function.class);
String[] biFunctionNames = this.beanFactory.getBeanNamesForType(BiFunction.class);
String[] consumerNames = this.beanFactory.getBeanNamesForType(Consumer.class);
String[] biConsumerNames = this.beanFactory.getBeanNamesForType(BiConsumer.class);
final Stream<String> concat = Stream.concat(
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)),
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)));
final List<String> collect = concat.collect(Collectors.toList());
collect.removeIf(s -> Arrays.stream(EXCLUDE_FUNCTIONS).anyMatch(t -> t.equals(s)));
onlySingleFunction = collect.size() == 1;
collect.stream()
.forEach(this::extractResolvableTypes);
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
for (String s : getResolvableTypes().keySet()) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(getResolvableTypes().get(s));
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(s);
registry.registerBeanDefinition("kafkaStreamsBindableProxyFactory-" + s, rootBeanDefinition);
}
}
private void extractResolvableTypes(String key) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
this.beanFactory.getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method[] methods = classObj.getMethods();
Optional<Method> kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
if (kafkaStreamMethod.isPresent()) {
Method method = kafkaStreamMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
if (onlySingleFunction) {
resolvableTypeMap.put(key, resolvableType);
}
else {
final String definition = streamFunctionProperties.getDefinition();
if (definition == null) {
throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
}
else if (definition.contains(key)) {
resolvableTypeMap.put(key, resolvableType);
}
}
}
}
}
catch (Exception e) {
LOG.error("Function activation issues while mapping the function: " + key, e);
}
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = (ConfigurableListableBeanFactory) beanFactory;
}
}


@@ -0,0 +1,55 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import javax.annotation.PostConstruct;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
import org.springframework.core.ResolvableType;
/**
*
* @author Soby Chacko
* @since 2.1.0
*/
public class KafkaStreamsFunctionProcessorInvoker {
private final KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor;
private final Map<String, ResolvableType> resolvableTypeMap;
private final KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories;
public KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories) {
this.kafkaStreamsFunctionProcessor = kafkaStreamsFunctionProcessor;
this.resolvableTypeMap = resolvableTypeMap;
this.kafkaStreamsBindableProxyFactories = kafkaStreamsBindableProxyFactories;
}
@PostConstruct
void invoke() {
resolvableTypeMap.forEach((key, value) -> {
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(key)).findFirst();
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(value, key, proxyFactory.get());
});
}
}


@@ -1,67 +0,0 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
/**
* {@link ConfigurationProperties} that can be used by end user Kafka Stream applications. This class provides
* convenient ways to access the commonly used kafka stream properties from the user application. For example, windowing
* operations are common use cases in stream processing and one can provide window specific properties at runtime and use
* those properties in the applications using this class.
*
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
public class KafkaStreamsApplicationSupportProperties {
private TimeWindow timeWindow;
public TimeWindow getTimeWindow() {
return this.timeWindow;
}
public void setTimeWindow(TimeWindow timeWindow) {
this.timeWindow = timeWindow;
}
/**
* Properties required by time windows.
*/
public static class TimeWindow {
private int length;
private int advanceBy;
public int getLength() {
return this.length;
}
public void setLength(int length) {
this.length = length;
}
public int getAdvanceBy() {
return this.advanceBy;
}
public void setAdvanceBy(int advanceBy) {
this.advanceBy = advanceBy;
}
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,8 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
/**
* Kafka Streams binder configuration properties.
@@ -25,7 +29,8 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfi
* @author Soby Chacko
* @author Gary Russell
*/
public class KafkaStreamsBinderConfigurationProperties extends KafkaBinderConfigurationProperties {
public class KafkaStreamsBinderConfigurationProperties
extends KafkaBinderConfigurationProperties {
public KafkaStreamsBinderConfigurationProperties(KafkaProperties kafkaProperties) {
super(kafkaProperties);
@@ -33,8 +38,12 @@ public class KafkaStreamsBinderConfigurationProperties extends KafkaBinderConfig
/**
* Enumeration for various Serde errors.
*
* @deprecated in favor of {@link DeserializationExceptionHandler}.
*/
@Deprecated
public enum SerdeError {
/**
* Deserialization error handler with log and continue.
*/
@@ -47,10 +56,41 @@ public class KafkaStreamsBinderConfigurationProperties extends KafkaBinderConfig
* Deserialization error handler with DLQ send.
*/
sendToDlq
}
private String applicationId;
private StateStoreRetry stateStoreRetry = new StateStoreRetry();
private Map<String, Functions> functions = new HashMap<>();
private KafkaStreamsBinderConfigurationProperties.SerdeError serdeError;
/**
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use when
* there is a deserialization exception. This handler will be applied against all input bindings
* unless overridden at the consumer binding.
*/
private DeserializationExceptionHandler deserializationExceptionHandler;
public Map<String, Functions> getFunctions() {
return functions;
}
public void setFunctions(Map<String, Functions> functions) {
this.functions = functions;
}
public StateStoreRetry getStateStoreRetry() {
return stateStoreRetry;
}
public void setStateStoreRetry(StateStoreRetry stateStoreRetry) {
this.stateStoreRetry = stateStoreRetry;
}
public String getApplicationId() {
return this.applicationId;
}
@@ -59,19 +99,84 @@ public class KafkaStreamsBinderConfigurationProperties extends KafkaBinderConfig
this.applicationId = applicationId;
}
/**
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use
* when there is a Serde error. {@link KafkaStreamsBinderConfigurationProperties.SerdeError}
* values are used to provide the exception handler on consumer binding.
*/
private KafkaStreamsBinderConfigurationProperties.SerdeError serdeError;
@Deprecated
public KafkaStreamsBinderConfigurationProperties.SerdeError getSerdeError() {
return this.serdeError;
}
public void setSerdeError(KafkaStreamsBinderConfigurationProperties.SerdeError serdeError) {
this.serdeError = serdeError;
@Deprecated
public void setSerdeError(
KafkaStreamsBinderConfigurationProperties.SerdeError serdeError) {
this.serdeError = serdeError;
if (serdeError == SerdeError.logAndContinue) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.logAndContinue;
}
else if (serdeError == SerdeError.logAndFail) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.logAndFail;
}
else if (serdeError == SerdeError.sendToDlq) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.sendToDlq;
}
}
public DeserializationExceptionHandler getDeserializationExceptionHandler() {
return deserializationExceptionHandler;
}
public void setDeserializationExceptionHandler(DeserializationExceptionHandler deserializationExceptionHandler) {
this.deserializationExceptionHandler = deserializationExceptionHandler;
}
public static class StateStoreRetry {
private int maxAttempts = 1;
private long backoffPeriod = 1000;
public int getMaxAttempts() {
return maxAttempts;
}
public void setMaxAttempts(int maxAttempts) {
this.maxAttempts = maxAttempts;
}
public long getBackoffPeriod() {
return backoffPeriod;
}
public void setBackoffPeriod(long backoffPeriod) {
this.backoffPeriod = backoffPeriod;
}
}
public static class Functions {
/**
* Function specific application id.
*/
private String applicationId;
/**
* Function specific configuration to use.
*/
private Map<String, String> configuration;
public String getApplicationId() {
return applicationId;
}
public void setApplicationId(String applicationId) {
this.applicationId = applicationId;
}
public Map<String, String> getConfiguration() {
return configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
}
}
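
To see how the binder-level settings above are typically supplied, here is a minimal, hedged sketch (not part of this changeset); the function name process, the destinations, and all property values are illustrative.

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Bean;

@EnableAutoConfiguration
public class BinderPropertiesSketch {

    // Pass-through topology, just so there is a Kafka Streams binding to configure.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input;
    }

    public static void main(String[] args) {
        SpringApplication.run(BinderPropertiesSketch.class,
            "--spring.cloud.stream.bindings.process-in-0.destination=words",
            "--spring.cloud.stream.bindings.process-out-0.destination=counts",
            // Per-function application id and configuration (the Functions inner class above)
            "--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-app-id",
            "--spring.cloud.stream.kafka.streams.binder.functions.process.configuration.client.id=process-client",
            // Binder-wide handler that supersedes the deprecated serdeError property
            "--spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=logAndContinue",
            // State store retrieval retry (the StateStoreRetry inner class above)
            "--spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts=3",
            "--spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod=2000");
    }
}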

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,7 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Extended binding properties holder that delegates to Kafka Streams producer and consumer properties.
* Extended binding properties holder that delegates to Kafka Streams producer and
* consumer properties.
*
* @author Marius Bogoevici
*/
@@ -44,4 +45,5 @@ public class KafkaStreamsBindingProperties implements BinderSpecificPropertiesPr
public void setProducer(KafkaStreamsProducerProperties producer) {
this.producer = producer;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
/**
* Extended properties for Kafka Streams consumer.
@@ -43,6 +44,28 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
*/
private String materializedAs;
/**
* Per input binding deserialization exception handler.
*/
private DeserializationExceptionHandler deserializationExceptionHandler;
/**
* {@link org.apache.kafka.streams.processor.TimestampExtractor} bean name to use for this consumer.
*/
private String timestampExtractorBeanName;
/**
* Comma-separated list of supported event types for this binding.
*/
private String eventTypes;
/**
* Record level header key for the event type.
* If the default value is overridden, the overriding key is expected on each record header when
* event type based routing is enabled on this binding (by setting eventTypes).
*/
private String eventTypeHeaderKey = "event_type";
public String getApplicationId() {
return this.applicationId;
}
@@ -75,4 +98,35 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
this.materializedAs = materializedAs;
}
public String getTimestampExtractorBeanName() {
return timestampExtractorBeanName;
}
public void setTimestampExtractorBeanName(String timestampExtractorBeanName) {
this.timestampExtractorBeanName = timestampExtractorBeanName;
}
public DeserializationExceptionHandler getDeserializationExceptionHandler() {
return deserializationExceptionHandler;
}
public void setDeserializationExceptionHandler(DeserializationExceptionHandler deserializationExceptionHandler) {
this.deserializationExceptionHandler = deserializationExceptionHandler;
}
public String getEventTypes() {
return eventTypes;
}
public void setEventTypes(String eventTypes) {
this.eventTypes = eventTypes;
}
public String getEventTypeHeaderKey() {
return this.eventTypeHeaderKey;
}
public void setEventTypeHeaderKey(String eventTypeHeaderKey) {
this.eventTypeHeaderKey = eventTypeHeaderKey;
}
}
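
As a hedged illustration of the new eventTypes and eventTypeHeaderKey consumer properties (the eventTypes property path mirrors the one used in the routing test later in this changeset; the binding, destinations, and event type names are illustrative):

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Bean;

@EnableAutoConfiguration
public class EventTypeRoutingSketch {

    // Invoked only for records whose event type header matches one of the configured eventTypes.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input;
    }

    public static void main(String[] args) {
        SpringApplication.run(EventTypeRoutingSketch.class,
            "--spring.cloud.stream.bindings.process-in-0.destination=orders",
            "--spring.cloud.stream.bindings.process-out-0.destination=audited-orders",
            // Route only records carrying the event types "created" or "updated"
            "--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypes=created,updated",
            // Optional: override the default record header key ("event_type")
            "--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypeHeaderKey=my_event_type");
    }
}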

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,19 +16,26 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.AbstractExtendedBindingProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Kafka streams specific extended binding properties class that extends from {@link AbstractExtendedBindingProperties}.
* Kafka streams specific extended binding properties class that extends from
* {@link AbstractExtendedBindingProperties}.
*
* @author Marius Bogoevici
* @author Oleg Zhurakousky
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
public class KafkaStreamsExtendedBindingProperties
extends AbstractExtendedBindingProperties<KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties, KafkaStreamsBindingProperties> {
// @checkstyle:off
extends
AbstractExtendedBindingProperties<KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties, KafkaStreamsBindingProperties> {
// @checkstyle:on
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.streams.default";
@Override
@@ -36,8 +43,14 @@ public class KafkaStreamsExtendedBindingProperties
return DEFAULTS_PREFIX;
}
@Override
public Map<String, KafkaStreamsBindingProperties> getBindings() {
return this.doGetBindings();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaStreamsBindingProperties.class;
}
}
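
A short, hedged sketch of the defaults prefix returned above: extended consumer (or producer) properties can be set once under spring.cloud.stream.kafka.streams.default and overridden per binding; the binding name and handler values here are illustrative.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;

@EnableAutoConfiguration
public class ExtendedDefaultsSketch {

    public static void main(String[] args) {
        SpringApplication.run(ExtendedDefaultsSketch.class,
            // Applies to every Kafka Streams binding unless overridden
            "--spring.cloud.stream.kafka.streams.default.consumer.deserializationExceptionHandler=logAndContinue",
            // A per-binding setting takes precedence over the default
            "--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.deserializationExceptionHandler=sendToDlq");
    }
}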

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -36,6 +36,11 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
*/
private String valueSerde;
/**
* {@link org.apache.kafka.streams.processor.StreamPartitioner} to be used on the Kafka Streams producer.
*/
private String streamPartitionerBeanName;
public String getKeySerde() {
return this.keySerde;
}
@@ -51,4 +56,12 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
public void setValueSerde(String valueSerde) {
this.valueSerde = valueSerde;
}
public String getStreamPartitionerBeanName() {
return this.streamPartitionerBeanName;
}
public void setStreamPartitionerBeanName(String streamPartitionerBeanName) {
this.streamPartitionerBeanName = streamPartitionerBeanName;
}
}
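
A hedged sketch of wiring the new streamPartitionerBeanName property: a StreamPartitioner bean is declared and referenced by name from an output binding. The binding name, bean name, and partitioning logic are illustrative, and the commented property path assumes the binder's usual extended producer property convention.

import org.apache.kafka.streams.processor.StreamPartitioner;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StreamPartitionerSketch {

    // Referenced from the producer binding by bean name, e.g.
    // spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName=evenKeyPartitioner
    @Bean
    public StreamPartitioner<String, String> evenKeyPartitioner() {
        // Send keys with an even hash to partition 0, everything else to the last partition.
        return (topic, key, value, numPartitions) ->
            key != null && key.hashCode() % 2 == 0 ? 0 : numPartitions - 1;
    }
}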

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,7 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
/**
* Properties for Kafka Streams state store.
*
@@ -28,6 +27,7 @@ public class KafkaStreamsStateStoreProperties {
* Enumeration for store type.
*/
public enum StoreType {
/**
* Key value store.
*/
@@ -51,8 +51,8 @@ public class KafkaStreamsStateStoreProperties {
public String toString() {
return this.type;
}
}
}
/**
* Name for this state store.
@@ -94,7 +94,6 @@ public class KafkaStreamsStateStoreProperties {
*/
private boolean loggingDisabled;
public String getName() {
return this.name;
}
@@ -158,4 +157,5 @@ public class KafkaStreamsStateStoreProperties {
public void setLoggingDisabled(boolean loggingDisabled) {
this.loggingDisabled = loggingDisabled;
}
}

View File

@@ -0,0 +1,245 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;
import java.util.PriorityQueue;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.kafka.support.serializer.JsonSerde;
/**
* A convenient {@link Serde} for {@link java.util.Collection} implementations.
*
* Whenever a Kafka Streams application needs to collect data into a container object like
* {@link java.util.Collection}, this Serde class can be used as a convenience for
* serialization needs. One example of where this may be handy is when the application
* performs aggregation or reduction operations and simply needs to hold an
* {@link Iterable} type.
*
* By default, this Serde will use {@link JsonSerde} for serializing the inner objects.
* This can be changed by providing an explicit Serde during creation of this object.
*
* Here is an example of a possible use case:
*
* <pre class="code">
* .aggregate(ArrayList::new,
* (k, v, aggregates) -&gt; {
* aggregates.add(v);
* return aggregates;
* },
* Materialized.&lt;String, Collection&lt;Foo&gt;, WindowStore&lt;Bytes, byte[]&gt;&gt;as(
* "foo-store")
* .withKeySerde(Serdes.String())
* .withValueSerde(new CollectionSerde&lt;&gt;(Foo.class, ArrayList.class)))
* </pre>
*
* Collection types supported by this Serde are {@link java.util.ArrayList}, {@link java.util.LinkedList},
* {@link java.util.PriorityQueue} and {@link java.util.HashSet}. The deserializer throws an exception
* if any other Collection type is used.
*
* @param <E> type of the underlying object that the collection holds
* @author Soby Chacko
* @since 3.0.0
*/
public class CollectionSerde<E> implements Serde<Collection<E>> {
/**
* Serde used for serializing the inner object.
*/
private final Serde<Collection<E>> inner;
/**
* Type of the collection class. This has to be a class that
* implements the {@link java.util.Collection} interface.
*/
private final Class<?> collectionClass;
/**
* Constructor to use when the application wants to specify the type
* of the Serde used for the inner object.
*
* @param serde specify an explicit Serde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Serde<E> serde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
this.inner =
Serdes.serdeFrom(
new CollectionSerializer<>(serde.serializer()),
new CollectionDeserializer<>(serde.deserializer(), collectionsClass));
}
/**
* Constructor to delegate serialization operations for the inner objects
* to {@link JsonSerde}.
*
* @param targetTypeForJsonSerde target type used by the JsonSerde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Class<?> targetTypeForJsonSerde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
try (JsonSerde<E> jsonSerde = new JsonSerde(targetTypeForJsonSerde)) {
this.inner = Serdes.serdeFrom(
new CollectionSerializer<>(jsonSerde.serializer()),
new CollectionDeserializer<>(jsonSerde.deserializer(), collectionsClass));
}
}
@Override
public Serializer<Collection<E>> serializer() {
return inner.serializer();
}
@Override
public Deserializer<Collection<E>> deserializer() {
return inner.deserializer();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
inner.serializer().configure(configs, isKey);
inner.deserializer().configure(configs, isKey);
}
@Override
public void close() {
inner.serializer().close();
inner.deserializer().close();
}
private static class CollectionSerializer<E> implements Serializer<Collection<E>> {
private Serializer<E> inner;
CollectionSerializer(Serializer<E> inner) {
this.inner = inner;
}
CollectionSerializer() { }
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public byte[] serialize(String topic, Collection<E> collection) {
final int size = collection.size();
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
final DataOutputStream dos = new DataOutputStream(baos);
final Iterator<E> iterator = collection.iterator();
try {
dos.writeInt(size);
while (iterator.hasNext()) {
final byte[] bytes = inner.serialize(topic, iterator.next());
dos.writeInt(bytes.length);
dos.write(bytes);
}
}
catch (IOException e) {
throw new RuntimeException("Unable to serialize the provided collection", e);
}
return baos.toByteArray();
}
@Override
public void close() {
inner.close();
}
}
private static class CollectionDeserializer<E> implements Deserializer<Collection<E>> {
private final Deserializer<E> valueDeserializer;
private final Class<?> collectionClass;
CollectionDeserializer(final Deserializer<E> valueDeserializer, Class<?> collectionClass) {
this.valueDeserializer = valueDeserializer;
this.collectionClass = collectionClass;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public Collection<E> deserialize(String topic, byte[] bytes) {
if (bytes == null || bytes.length == 0) {
return null;
}
Collection<E> collection = getCollection();
final DataInputStream dataInputStream = new DataInputStream(new ByteArrayInputStream(bytes));
try {
final int records = dataInputStream.readInt();
for (int i = 0; i < records; i++) {
final byte[] valueBytes = new byte[dataInputStream.readInt()];
final int read = dataInputStream.read(valueBytes);
if (read != -1) {
collection.add(valueDeserializer.deserialize(topic, valueBytes));
}
}
}
catch (IOException e) {
throw new RuntimeException("Unable to deserialize collection", e);
}
return collection;
}
@Override
public void close() {
}
private Collection<E> getCollection() {
Collection<E> collection;
if (this.collectionClass.isAssignableFrom(ArrayList.class)) {
collection = new ArrayList<>();
}
else if (this.collectionClass.isAssignableFrom(HashSet.class)) {
collection = new HashSet<>();
}
else if (this.collectionClass.isAssignableFrom(LinkedList.class)) {
collection = new LinkedList<>();
}
else if (this.collectionClass.isAssignableFrom(PriorityQueue.class)) {
collection = new PriorityQueue<>();
}
else {
throw new IllegalArgumentException("Unsupported collection type - " + this.collectionClass);
}
return collection;
}
}
}
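
A self-contained, hedged usage sketch of CollectionSerde outside a topology, using the JsonSerde-backed constructor; the Foo POJO and topic name are illustrative.

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.cloud.stream.binder.kafka.streams.serde.CollectionSerde;

public class CollectionSerdeRoundTrip {

    // Illustrative POJO held inside the collection.
    public static class Foo {
        private String name;

        public Foo() {
        }

        public Foo(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }

    public static void main(String[] args) {
        // The (targetType, collectionClass) constructor delegates inner (de)serialization to JsonSerde.
        CollectionSerde<Foo> serde = new CollectionSerde<>(Foo.class, ArrayList.class);

        Collection<Foo> values = new ArrayList<>();
        values.add(new Foo("a"));
        values.add(new Foo("b"));

        byte[] bytes = serde.serializer().serialize("some-topic", values);
        Collection<Foo> restored = serde.deserializer().deserialize("some-topic", bytes);
        System.out.println(restored.size()); // 2
    }
}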

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,194 +16,22 @@
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
import org.springframework.messaging.converter.CompositeMessageConverter;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s
* from {@link CompositeMessageConverterFactory}.
*
* The primary motivation for this class is to provide an avro based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based application,
* the applications can deserialize the incoming Kafka Streams records using the built in
* Avro {@link MessageConverter}. However, this same message conversion approach will not work
* downstream in other operations in the topology for Kafka Streams as some of them need a
* {@link Serde} instance that can talk to the Spring Cloud Stream provided Schema Registry.
* This implementation will solve that problem.
*
* Only Avro and JSON based converters are exposed as binder provided {@link Serde} implementations currently.
*
* Users of this class must call the {@link CompositeNonNativeSerde#configure(Map, boolean)} method
* to configure the {@link Serde} object. At the very least the configuration map must include a key
* called "valueClass" to indicate the type of the target object for deserialization. If any other
* content type other than JSON is needed (only Avro is available now other than JSON), that needs
* to be included in the configuration map with the key "contentType". For example,
*
* <pre class="code">
* Map&lt;String, Object&gt; config = new HashMap&lt;&gt;();
* config.put("valueClass", Foo.class);
* config.put("contentType", "application/avro");
* </pre>
*
* Then use the above map when calling the configure method.
*
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams application
* that uses Spring Cloud Stream schema registry for schema evolution.
*
* An instance of this class is provided as a bean by the binder configuration and typically the applications
* can autowire that bean. This is the expected usage pattern of this class.
*
* @param <T> type of the object to marshall
* This class provides the same functionality as {@link MessageConverterDelegateSerde} and is deprecated.
* It is kept for backward compatibility reasons and will be removed in version 3.1.
*
* @author Soby Chacko
* @since 2.1
*
* @deprecated in favor of {@link MessageConverterDelegateSerde}
*/
public class CompositeNonNativeSerde<T> implements Serde<T> {
@Deprecated
public class CompositeNonNativeSerde extends MessageConverterDelegateSerde {
private static final String VALUE_CLASS_HEADER = "valueClass";
private static final String AVRO_FORMAT = "avro";
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application", "*+" + AVRO_FORMAT);
private final CompositeNonNativeDeserializer<T> compositeNonNativeDeserializer;
private final CompositeNonNativeSerializer<T> compositeNonNativeSerializer;
public CompositeNonNativeSerde(CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.compositeNonNativeDeserializer = new CompositeNonNativeDeserializer<>(compositeMessageConverterFactory);
this.compositeNonNativeSerializer = new CompositeNonNativeSerializer<>(compositeMessageConverterFactory);
public CompositeNonNativeSerde(CompositeMessageConverter compositeMessageConverter) {
super(compositeMessageConverter);
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.compositeNonNativeDeserializer.configure(configs, isKey);
this.compositeNonNativeSerializer.configure(configs, isKey);
}
@Override
public void close() {
//No-op
}
@Override
public Serializer<T> serializer() {
return this.compositeNonNativeSerializer;
}
@Override
public Deserializer<T> deserializer() {
return this.compositeNonNativeDeserializer;
}
private static MimeType resolveMimeType(Map<String, ?> configs) {
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)) {
String contentType = (String) configs.get(MessageHeaders.CONTENT_TYPE);
if (DEFAULT_AVRO_MIME_TYPE.equals(MimeTypeUtils.parseMimeType(contentType))) {
return DEFAULT_AVRO_MIME_TYPE;
}
else if (contentType.contains("avro")) {
return MimeTypeUtils.parseMimeType("application/avro");
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
/**
* Custom {@link Deserializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <U> parameterized target type for deserialization
*/
private static class CompositeNonNativeDeserializer<U> implements Deserializer<U> {
private final MessageConverter messageConverter;
private MimeType mimeType;
private Class<?> valueClass;
CompositeNonNativeDeserializer(CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER), "Deserializers must provide a configuration for valueClass.");
final Object valueClass = configs.get(VALUE_CLASS_HEADER);
Assert.isTrue(valueClass instanceof Class, "Deserializers must provide a valid value for valueClass.");
this.valueClass = (Class<?>) valueClass;
this.mimeType = resolveMimeType(configs);
}
@SuppressWarnings("unchecked")
@Override
public U deserialize(String topic, byte[] data) {
Message<?> message = MessageBuilder.withPayload(data)
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString()).build();
U messageConverted = (U) this.messageConverter.fromMessage(message, this.valueClass);
Assert.notNull(messageConverted, "Deserialization failed.");
return messageConverted;
}
@Override
public void close() {
//No-op
}
}
/**
* Custom {@link Serializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <V> parameterized type for serialization
*/
private static class CompositeNonNativeSerializer<V> implements Serializer<V> {
private final MessageConverter messageConverter;
private MimeType mimeType;
CompositeNonNativeSerializer(CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.mimeType = resolveMimeType(configs);
}
@Override
public byte[] serialize(String topic, V data) {
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, this.mimeType.toString());
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = this.messageConverter.toMessage(message.getPayload(),
messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
//No-op
}
}
}

View File

@@ -0,0 +1,226 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s from
* {@link CompositeMessageConverter}.
*
* The primary motivation for this class is to provide an Avro-based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based
* application, the applications can deserialize the incoming Kafka Streams records using
* the built in Avro {@link MessageConverter}. However, this same message conversion
* approach will not work downstream in other operations in the topology for Kafka Streams
* as some of them need a {@link Serde} instance that can talk to the Spring Cloud Stream
* provided Schema Registry. This implementation will solve that problem.
*
* Only Avro and JSON based converters are exposed as binder provided {@link Serde}
* implementations currently.
*
* Users of this class must call the
* {@link MessageConverterDelegateSerde#configure(Map, boolean)} method to configure the
* {@link Serde} object. At the very least the configuration map must include a key called
* "valueClass" to indicate the type of the target object for deserialization. If any
* other content type other than JSON is needed (only Avro is available now other than
* JSON), that needs to be included in the configuration map with the key "contentType".
* For example,
*
* <pre class="code">
* Map&lt;String, Object&gt; config = new HashMap&lt;&gt;();
* config.put("valueClass", Foo.class);
* config.put("contentType", "application/avro");
* </pre>
*
* Then use the above map when calling the configure method.
*
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams
* application that uses Spring Cloud Stream schema registry for schema evolution.
*
* An instance of this class is provided as a bean by the binder configuration and
* typically the applications can autowire that bean. This is the expected usage pattern
* of this class.
*
* @param <T> type of the object to marshall
* @author Soby Chacko
* @since 3.0
*/
public class MessageConverterDelegateSerde<T> implements Serde<T> {
private static final String VALUE_CLASS_HEADER = "valueClass";
private static final String AVRO_FORMAT = "avro";
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application",
"*+" + AVRO_FORMAT);
private final MessageConverterDelegateDeserializer<T> messageConverterDelegateDeserializer;
private final MessageConverterDelegateSerializer<T> messageConverterDelegateSerializer;
public MessageConverterDelegateSerde(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverterDelegateDeserializer = new MessageConverterDelegateDeserializer<>(
compositeMessageConverter);
this.messageConverterDelegateSerializer = new MessageConverterDelegateSerializer<>(
compositeMessageConverter);
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.messageConverterDelegateDeserializer.configure(configs, isKey);
this.messageConverterDelegateSerializer.configure(configs, isKey);
}
@Override
public void close() {
// No-op
}
@Override
public Serializer<T> serializer() {
return this.messageConverterDelegateSerializer;
}
@Override
public Deserializer<T> deserializer() {
return this.messageConverterDelegateDeserializer;
}
private static MimeType resolveMimeType(Map<String, ?> configs) {
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)) {
String contentType = (String) configs.get(MessageHeaders.CONTENT_TYPE);
if (DEFAULT_AVRO_MIME_TYPE.equals(MimeTypeUtils.parseMimeType(contentType))) {
return DEFAULT_AVRO_MIME_TYPE;
}
else if (contentType.contains("avro")) {
return MimeTypeUtils.parseMimeType("application/avro");
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
/**
* Custom {@link Deserializer} that delegates to the provided {@link CompositeMessageConverter}.
*
* @param <U> parameterized target type for deserialization
*/
private static class MessageConverterDelegateDeserializer<U> implements Deserializer<U> {
private final MessageConverter messageConverter;
private MimeType mimeType;
private Class<?> valueClass;
MessageConverterDelegateDeserializer(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER),
"Deserializers must provide a configuration for valueClass.");
final Object valueClass = configs.get(VALUE_CLASS_HEADER);
Assert.isTrue(valueClass instanceof Class,
"Deserializers must provide a valid value for valueClass.");
this.valueClass = (Class<?>) valueClass;
this.mimeType = resolveMimeType(configs);
}
@SuppressWarnings("unchecked")
@Override
public U deserialize(String topic, byte[] data) {
Message<?> message = MessageBuilder.withPayload(data)
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString())
.build();
U messageConverted = (U) this.messageConverter.fromMessage(message,
this.valueClass);
Assert.notNull(messageConverted, "Deserialization failed.");
return messageConverted;
}
@Override
public void close() {
// No-op
}
}
/**
* Custom {@link Serializer} that delegates to the provided {@link CompositeMessageConverter}.
*
* @param <V> parameterized type for serialization
*/
private static class MessageConverterDelegateSerializer<V> implements Serializer<V> {
private final MessageConverter messageConverter;
private MimeType mimeType;
MessageConverterDelegateSerializer(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.mimeType = resolveMimeType(configs);
}
@Override
public byte[] serialize(String topic, V data) {
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, this.mimeType.toString());
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = this.messageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
// No-op
}
}
}
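
A hedged, standalone sketch of configuring the Serde above. In a binder application the CompositeMessageConverter (and usually the Serde itself) comes from the binder configuration, so building one from a single Jackson converter here is purely for illustration, as are the Foo type and topic name.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.springframework.cloud.stream.binder.kafka.streams.serde.MessageConverterDelegateSerde;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MappingJackson2MessageConverter;

public class MessageConverterDelegateSerdeSketch {

    // Illustrative deserialization target.
    public static class Foo {
        private String name;

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }

    public static void main(String[] args) {
        CompositeMessageConverter converter = new CompositeMessageConverter(
                Collections.singletonList(new MappingJackson2MessageConverter()));
        MessageConverterDelegateSerde<Foo> serde = new MessageConverterDelegateSerde<>(converter);

        Map<String, Object> config = new HashMap<>();
        config.put("valueClass", Foo.class); // required: target type for deserialization
        // config.put("contentType", "application/avro"); // optional, defaults to JSON
        serde.configure(config, false);

        Foo foo = new Foo();
        foo.setName("bar");
        byte[] bytes = serde.serializer().serialize("some-topic", foo);
        Foo restored = serde.deserializer().deserialize("some-topic", bytes);
        System.out.println(restored.getName()); // bar
    }
}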

View File

@@ -1,5 +1,4 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsApplicationSupportAutoConfiguration
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpointAutoConfiguration

View File

@@ -0,0 +1,190 @@
/*
* Copyright 2019-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsEventTypeRoutingTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"foo-1", "foo-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<Integer, Foo> consumer;
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("test-group-1", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put("value.deserializer", JsonDeserializer.class);
consumerProps.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
DefaultKafkaConsumerFactory<Integer, Foo> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "foo-2");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
//See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003 for more context on this test.
@Test
public void testRoutingWorksBasedOnEventTypes() {
SpringApplication app = new SpringApplication(EventTypeRoutingTestConfig.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.bindings.process-in-0.destination=foo-1",
"--spring.cloud.stream.bindings.process-out-0.destination=foo-2",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypes=foo,bar",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-foo-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put("value.serializer", JsonSerializer.class);
DefaultKafkaProducerFactory<Integer, Foo> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, Foo> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foo-1");
Foo foo1 = new Foo();
foo1.setFoo("foo-1");
Headers headers = new RecordHeaders();
headers.add(new RecordHeader("event_type", "foo".getBytes()));
final ProducerRecord<Integer, Foo> producerRecord1 = new ProducerRecord<>("foo-1", 0, 56, foo1, headers);
template.send(producerRecord1);
Foo foo2 = new Foo();
foo2.setFoo("foo-2");
final ProducerRecord<Integer, Foo> producerRecord2 = new ProducerRecord<>("foo-1", 0, 57, foo2);
template.send(producerRecord2);
Foo foo3 = new Foo();
foo3.setFoo("foo-3");
final ProducerRecord<Integer, Foo> producerRecord3 = new ProducerRecord<>("foo-1", 0, 58, foo3, headers);
template.send(producerRecord3);
Foo foo4 = new Foo();
foo4.setFoo("foo-4");
Headers headers1 = new RecordHeaders();
headers1.add(new RecordHeader("event_type", "bar".getBytes()));
final ProducerRecord<Integer, Foo> producerRecord4 = new ProducerRecord<>("foo-1", 0, 59, foo4, headers1);
template.send(producerRecord4);
final ConsumerRecords<Integer, Foo> records = KafkaTestUtils.getRecords(consumer);
assertThat(records.count()).isEqualTo(3);
List<Integer> keys = new ArrayList<>();
List<Foo> values = new ArrayList<>();
records.forEach(integerFooConsumerRecord -> {
keys.add(integerFooConsumerRecord.key());
values.add(integerFooConsumerRecord.value());
});
assertThat(keys).containsExactlyInAnyOrder(56, 58, 59);
assertThat(values).containsExactlyInAnyOrder(foo1, foo3, foo4);
}
finally {
pf.destroy();
}
}
}
@EnableAutoConfiguration
public static class EventTypeRoutingTestConfig {
@Bean
public Function<KStream<Integer, Foo>, KStream<Integer, Foo>> process() {
return input -> input;
}
}
static class Foo {
String foo;
public String getFoo() {
return foo;
}
public void setFoo(String foo) {
this.foo = foo;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
Foo foo1 = (Foo) o;
return Objects.equals(foo, foo1.foo);
}
@Override
public int hashCode() {
return Objects.hash(foo);
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,8 +14,9 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
@@ -23,27 +24,33 @@ import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.mockito.Mockito;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -54,6 +61,7 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.internal.verification.VerificationModeFactory.times;
/**
* @author Soby Chacko
@@ -62,17 +70,21 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsInteractiveQueryIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@@ -82,6 +94,31 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
consumer.close();
}
@Test
public void testStateStoreRetrievalRetry() {
StreamsBuilderFactoryBean mock = Mockito.mock(StreamsBuilderFactoryBean.class);
KafkaStreams mockKafkaStreams = Mockito.mock(KafkaStreams.class);
Mockito.when(mock.getKafkaStreams()).thenReturn(mockKafkaStreams);
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
kafkaStreamsRegistry.registerKafkaStreams(mock);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);
InteractiveQueryService interactiveQueryService = new InteractiveQueryService(kafkaStreamsRegistry,
binderConfigurationProperties);
QueryableStoreType<ReadOnlyKeyValueStore<Object, Object>> storeType = QueryableStoreTypes.keyValueStore();
try {
interactiveQueryService.getQueryableStore("foo", storeType);
}
catch (Exception ignored) {
}
Mockito.verify(mockKafkaStreams, times(3)).store("foo", storeType);
}
@Test
public void testKstreamBinderWithPojoInputAndStringOuput() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
@@ -91,41 +128,73 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=ProductCountApplication-abc",
"--spring.cloud.stream.kafka.streams.binder.configuration.application.server=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.configuration.application.server"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidateFoo(context);
} finally {
}
finally {
context.close();
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context) throws Exception {
private void receiveAndValidateFoo(ConfigurableApplicationContext context) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("{\"id\":\"123\"}");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-id");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-id");
assertThat(cr.value().contains("Count for product with ID 123: 1")).isTrue();
ProductCountApplication.Foo foo = context.getBean(ProductCountApplication.Foo.class);
ProductCountApplication.Foo foo = context
.getBean(ProductCountApplication.Foo.class);
assertThat(foo.getProductStock(123)).isEqualTo(1L);
//perform assertions on HostInfo related methods in InteractiveQueryService
InteractiveQueryService interactiveQueryService = context.getBean(InteractiveQueryService.class);
// perform assertions on HostInfo related methods in InteractiveQueryService
InteractiveQueryService interactiveQueryService = context
.getBean(InteractiveQueryService.class);
HostInfo currentHostInfo = interactiveQueryService.getCurrentHostInfo();
assertThat(currentHostInfo.host() + ":" + currentHostInfo.port()).isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfo = interactiveQueryService.getHostInfo("prod-id-count-store", 123, new IntegerSerializer());
assertThat(hostInfo.host() + ":" + hostInfo.port()).isEqualTo(embeddedKafka.getBrokersAsString());
assertThat(currentHostInfo.host() + ":" + currentHostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfoFoo = interactiveQueryService.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer());
final KeyQueryMetadata keyQueryMetadata = interactiveQueryService.getKeyQueryMetadata("prod-id-count-store",
123, new IntegerSerializer());
final HostInfo activeHost = keyQueryMetadata.getActiveHost();
assertThat(activeHost.host() + ":" + activeHost.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
final KafkaStreams kafkaStreams = interactiveQueryService.getKafkaStreams("prod-id-count-store",
123, new IntegerSerializer());
assertThat(kafkaStreams).isNotNull();
assertThat(interactiveQueryService.getKafkaStreams("non-existent-store",
123, new IntegerSerializer())).isNull();
HostInfo hostInfo = interactiveQueryService.getHostInfo("prod-id-count-store",
123, new IntegerSerializer());
assertThat(hostInfo.host() + ":" + hostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfoFoo = interactiveQueryService
.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer());
assertThat(hostInfoFoo).isNull();
final List<HostInfo> hostInfos = interactiveQueryService.getAllHostsInfo("prod-id-count-store");
assertThat(hostInfos.size()).isEqualTo(1);
final HostInfo hostInfo1 = hostInfos.get(0);
assertThat(hostInfo1.host() + ":" + hostInfo1.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
}
@EnableBinding(KafkaStreamsProcessor.class)
@@ -134,16 +203,15 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
@StreamListener("input")
@SendTo("output")
@SuppressWarnings("deprecation")
public KStream<?, String> process(KStream<Object, Product> input) {
return input
.filter((key, product) -> product.getId() == 123)
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value.id, value))
.groupByKey(Serialized.with(new Serdes.IntegerSerde(), new JsonSerde<>(Product.class)))
.count(Materialized.as("prod-id-count-store"))
.toStream()
.map((key, value) -> new KeyValue<>(null, "Count for product with ID 123: " + value));
.groupByKey(Serialized.with(new Serdes.IntegerSerde(),
new JsonSerde<>(Product.class)))
.count(Materialized.as("prod-id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(null,
"Count for product with ID 123: " + value));
}
@Bean
@@ -152,6 +220,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
static class Foo {
InteractiveQueryService interactiveQueryService;
Foo(InteractiveQueryService interactiveQueryService) {
@@ -159,11 +228,13 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
public Long getProductStock(Integer id) {
ReadOnlyKeyValueStore<Object, Object> keyValueStore =
interactiveQueryService.getQueryableStore("prod-id-count-store", QueryableStoreTypes.keyValueStore());
ReadOnlyKeyValueStore<Object, Object> keyValueStore = interactiveQueryService
.getQueryableStore("prod-id-count-store",
QueryableStoreTypes.keyValueStore());
return (Long) keyValueStore.get(id);
}
}
}
@@ -178,5 +249,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
public void setId(Integer id) {
this.id = id;
}
}
}

View File

@@ -0,0 +1,248 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Field;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import org.springframework.util.ReflectionUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class MultipleFunctionsInSameAppTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"coffee", "electronics");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
private static CountDownLatch countDownLatch = new CountDownLatch(2);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("purchase-groups", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "coffee", "electronics");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
@SuppressWarnings("unchecked")
public void testMultiFunctionsInSameApp() throws InterruptedException {
SpringApplication app = new SpringApplication(MultipleFunctionsInSameApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process;analyze;anotherProcess;yetAnotherProcess",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.bindings.process-out-1.destination=electronics",
"--spring.cloud.stream.bindings.analyze-in-0.destination=coffee",
"--spring.cloud.stream.bindings.analyze-in-1.destination=electronics",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.applicationId=analyze-id-0",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.bindings.process-in-0.consumer.concurrency=2",
"--spring.cloud.stream.bindings.analyze-in-0.consumer.concurrency=1",
"--spring.cloud.stream.kafka.streams.binder.configuration.num.stream.threads=3",
"--spring.cloud.stream.kafka.streams.binder.functions.process.configuration.client.id=process-client",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.configuration.client.id=analyze-client",
"--spring.cloud.stream.kafka.streams.binder.functions.anotherProcess.configuration.client.id=anotherProcess-client",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("purchases", "coffee", "electronics");
StreamsBuilderFactoryBean processStreamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean analyzeStreamsBuilderFactoryBean = context
.getBean("&stream-builder-analyze", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean anotherProcessStreamsBuilderFactoryBean = context
.getBean("&stream-builder-anotherProcess", StreamsBuilderFactoryBean.class);
final Properties processStreamsConfiguration = processStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties analyzeStreamsConfiguration = analyzeStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties anotherProcessStreamsConfiguration = anotherProcessStreamsBuilderFactoryBean.getStreamsConfiguration();
assertThat(processStreamsConfiguration.getProperty("client.id")).isEqualTo("process-client");
assertThat(analyzeStreamsConfiguration.getProperty("client.id")).isEqualTo("analyze-client");
Integer concurrency = (Integer) processStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(2);
concurrency = (Integer) analyzeStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(1);
assertThat(anotherProcessStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG)).isEqualTo("3");
final KafkaStreamsBindingInformationCatalogue catalogue = context.getBean(KafkaStreamsBindingInformationCatalogue.class);
Field field = ReflectionUtils.findField(KafkaStreamsBindingInformationCatalogue.class, "outboundKStreamResolvables", Map.class);
ReflectionUtils.makeAccessible(field);
final Map<Object, ResolvableType> outboundKStreamResolvables = (Map<Object, ResolvableType>) ReflectionUtils.getField(field, catalogue);
// Two of the functions have return types, and one of them returns an array that maps to 2 output bindings,
// so assert that the catalogue contains outbound type information for all 3 output bindings.
assertThat(outboundKStreamResolvables.size()).isEqualTo(3);
}
}
@Test
public void testMultiFunctionsInSameAppWithMultiBinders() throws Exception {
SpringApplication app = new SpringApplication(MultipleFunctionsInSameApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process;analyze",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.startOffset=latest",
"--spring.cloud.stream.bindings.process-in-0.binder=kafka1",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.bindings.process-out-0.binder=kafka1",
"--spring.cloud.stream.bindings.process-out-1.destination=electronics",
"--spring.cloud.stream.bindings.process-out-1.binder=kafka1",
"--spring.cloud.stream.bindings.analyze-in-0.destination=coffee",
"--spring.cloud.stream.bindings.analyze-in-0.binder=kafka2",
"--spring.cloud.stream.bindings.analyze-in-1.destination=electronics",
"--spring.cloud.stream.bindings.analyze-in-1.binder=kafka2",
"--spring.cloud.stream.bindings.analyze-in-0.consumer.concurrency=2",
"--spring.cloud.stream.binders.kafka1.type=kstream",
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.applicationId=my-app-1",
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.configuration.client.id=process-client",
"--spring.cloud.stream.binders.kafka2.type=kstream",
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.applicationId=my-app-2",
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.configuration.client.id=analyze-client")) {
Thread.sleep(1000);
receiveAndValidate("purchases", "coffee", "electronics");
StreamsBuilderFactoryBean processStreamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean analyzeStreamsBuilderFactoryBean = context
.getBean("&stream-builder-analyze", StreamsBuilderFactoryBean.class);
final Properties processStreamsConfiguration = processStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties analyzeStreamsConfiguration = analyzeStreamsBuilderFactoryBean.getStreamsConfiguration();
assertThat(processStreamsConfiguration.getProperty("application.id")).isEqualTo("my-app-1");
assertThat(analyzeStreamsConfiguration.getProperty("application.id")).isEqualTo("my-app-2");
assertThat(processStreamsConfiguration.getProperty("client.id")).isEqualTo("process-client");
assertThat(analyzeStreamsConfiguration.getProperty("client.id")).isEqualTo("analyze-client");
Integer concurrency = (Integer) analyzeStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(2);
concurrency = (Integer) processStreamsConfiguration.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isNull(); // not set for this function, so Kafka Streams defaults num.stream.threads to 1.
}
}
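// Sends a "coffee" and an "electronics" record to the input topic and verifies each is routed to its expected output topic.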
private void receiveAndValidate(String in, String... out) throws InterruptedException {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic(in);
template.sendDefault("coffee");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, out[0]);
assertThat(cr.value().contains("coffee")).isTrue();
template.sendDefault("electronics");
cr = KafkaTestUtils.getSingleRecord(consumer, out[1]);
assertThat(cr.value().contains("electronics")).isTrue();
Assert.isTrue(countDownLatch.await(5, TimeUnit.SECONDS), "Analyze (BiConsumer) method didn't receive all the expected records");
}
finally {
pf.destroy();
}
}
@EnableAutoConfiguration
public static class MultipleFunctionsInSameApp {
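// Branches the input stream into two outputs: "coffee" records go to the first binding, "electronics" records to the second.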
@Bean
public Function<KStream<String, String>, KStream<String, String>[]> process() {
return input -> input.branch(
(s, p) -> p.equalsIgnoreCase("coffee"),
(s, p) -> p.equalsIgnoreCase("electronics"));
}
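// Second function with a return type; contributes the third outbound binding counted in the catalogue assertion above.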
@Bean
public Function<KStream<String, String>, KStream<String, Long>> yetAnotherProcess() {
return input -> input.map((k, v) -> new KeyValue<>("foo", 1L));
}
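// BiConsumer over both branched topics; counts down the latch for every record seen on either input.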
@Bean
public BiConsumer<KStream<String, String>, KStream<String, String>> analyze() {
return (coffee, electronics) -> {
coffee.foreach((s, p) -> countDownLatch.countDown());
electronics.foreach((s, p) -> countDownLatch.countDown());
};
}
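// No-op consumer; exercises an input-only function whose client.id is configured in the test.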
@Bean
public java.util.function.Consumer<KStream<String, String>> anotherProcess() {
return c -> {
};
}
}
}


@@ -1,11 +1,11 @@
/*
- * Copyright 2018 the original author or authors.
+ * Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
- * http://www.apache.org/licenses/LICENSE-2.0
+ * https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,7 +16,9 @@
package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
+ import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
+ import org.apache.kafka.streams.kstream.KTable;
import org.junit.ClassRule;
import org.junit.Test;
@@ -27,7 +29,7 @@ import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
- import org.springframework.kafka.test.rule.KafkaEmbedded;
+ import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
/**
* @author Soby Chacko
@@ -35,46 +37,95 @@ import org.springframework.kafka.test.rule.KafkaEmbedded;
public class KafkaStreamsBinderBootstrapTest {
@ClassRule
- public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
+ public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Test
- public void testKafkaStreamsBinderWithCustomEnvironmentCanStart() {
- ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
- .web(WebApplicationType.NONE)
- .run("--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKafkaStreamsBinderWithCustomEnvironmentCanStart",
- "--spring.cloud.stream.bindings.input.destination=foo",
- "--spring.cloud.stream.bindings.input.binder=kBind1",
- "--spring.cloud.stream.binders.kBind1.type=kstream",
- "--spring.cloud.stream.binders.kBind1.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
- "--spring.cloud.stream.binders.kBind1.environment.spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
+ public void testKStreamBinderWithCustomEnvironmentCanStart() {
+ ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
+ SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
+ "--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id"
+ + "=testKStreamBinderWithCustomEnvironmentCanStart",
+ "--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id"
+ + "=testKStreamBinderWithCustomEnvironmentCanStart-foo",
+ "--spring.cloud.stream.kafka.streams.bindings.input-3.consumer.application-id"
+ + "=testKStreamBinderWithCustomEnvironmentCanStart-foobar",
+ "--spring.cloud.stream.bindings.input-1.destination=foo",
+ "--spring.cloud.stream.bindings.input-1.binder=kstreamBinder",
+ "--spring.cloud.stream.binders.kstreamBinder.type=kstream",
+ "--spring.cloud.stream.binders.kstreamBinder.environment"
+ + ".spring.cloud.stream.kafka.streams.binder.brokers"
+ + "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
+ "--spring.cloud.stream.bindings.input-2.destination=bar",
+ "--spring.cloud.stream.bindings.input-2.binder=ktableBinder",
+ "--spring.cloud.stream.binders.ktableBinder.type=ktable",
+ "--spring.cloud.stream.binders.ktableBinder.environment"
+ + ".spring.cloud.stream.kafka.streams.binder.brokers"
+ + "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
+ "--spring.cloud.stream.bindings.input-3.destination=foobar",
+ "--spring.cloud.stream.bindings.input-3.binder=globalktableBinder",
+ "--spring.cloud.stream.binders.globalktableBinder.type=globalktable",
+ "--spring.cloud.stream.binders.globalktableBinder.environment"
+ + ".spring.cloud.stream.kafka.streams.binder.brokers"
+ + "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}
@Test
public void testKafkaStreamsBinderWithStandardConfigurationCanStart() {
- ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
- .web(WebApplicationType.NONE)
- .run("--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKafkaStreamsBinderWithStandardConfigurationCanStart",
- "--spring.cloud.stream.bindings.input.destination=foo",
- "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
- "--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
+ ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
+ SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
+ "--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id"
+ + "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
+ "--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id"
+ + "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foo",
+ "--spring.cloud.stream.kafka.streams.bindings.input-3.consumer.application-id"
+ + "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foobar",
+ "--spring.cloud.stream.kafka.streams.binder.brokers="
+ + embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}
@SpringBootApplication
- @EnableBinding(StreamSourceProcessor.class)
- static class SimpleApplication {
+ @EnableBinding({SimpleKStreamBinding.class, SimpleKTableBinding.class, SimpleGlobalKTableBinding.class})
+ static class SimpleKafkaStreamsApplication {
@StreamListener
public void handle(@Input("input") KStream<Object, String> stream) {
public void handle(@Input("input-1") KStream<Object, String> stream) {
}
+ @StreamListener
+ public void handleX(@Input("input-2") KTable<Object, String> stream) {
+ }
+ @StreamListener
+ public void handleY(@Input("input-3") GlobalKTable<Object, String> stream) {
+ }
}
- interface StreamSourceProcessor {
- @Input("input")
+ interface SimpleKStreamBinding {
+ @Input("input-1")
KStream<?, ?> inputStream();
}
+ interface SimpleKTableBinding {
+ @Input("input-2")
+ KTable<?, ?> inputStream();
+ }
+ interface SimpleGlobalKTableBinding {
+ @Input("input-3")
+ GlobalKTable<?, ?> inputStream();
+ }
}


@@ -0,0 +1,201 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderWordCountBranchesFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "foo", "bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "foo", "bar");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamWordCountWithStringInputAndPojoOutput() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.function.bindings.process-out-0=output1",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.function.bindings.process-out-1=output2",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.function.bindings.process-out-2=output3",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.applicationId" +
"=KafkaStreamsBinderWordCountBranchesFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
assertThat(cr.value().contains("\"word\":\"english\",\"count\":1")).isTrue();
template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value().contains("\"word\":\"french\",\"count\":2")).isTrue();
template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value().contains("\"word\":\"spanish\",\"count\":3")).isTrue();
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
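// Windowed word count whose result stream is branched three ways ("english", "french", "spanish"), one branch per output binding.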
@Bean
@SuppressWarnings("unchecked")
public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
}
}


@@ -0,0 +1,298 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Arrays;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderWordCountFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1", "counts-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
private final static CountDownLatch LATCH = new CountDownLatch(1);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
@SuppressWarnings("unchecked")
public void testKstreamWordCountFunction() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.bindings.process-out-0.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountFunction",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.consumerProperties.request.timeout.ms=29000", //for testing ...binder.consumerProperties
"--spring.cloud.stream.kafka.streams.binder.producerProperties.max.block.ms=90000", //for testing ...binder.producerProperties
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
final MeterRegistry meterRegistry = context.getBean(MeterRegistry.class);
Thread.sleep(100);
assertThat(meterRegistry.getMeters().size() > 1).isTrue();
Assert.isTrue(LATCH.await(5, TimeUnit.SECONDS), "Failed to call customizers");
//Testing topology endpoint
final KafkaStreamsRegistry kafkaStreamsRegistry = context.getBean(KafkaStreamsRegistry.class);
final KafkaStreamsTopologyEndpoint kafkaStreamsTopologyEndpoint = new KafkaStreamsTopologyEndpoint(kafkaStreamsRegistry);
final List<String> topologies = kafkaStreamsTopologyEndpoint.kafkaStreamsTopologies();
final String topology1 = topologies.get(0);
final String topology2 = kafkaStreamsTopologyEndpoint.kafkaStreamsTopology("testKstreamWordCountFunction");
assertThat(topology1).isNotEmpty();
assertThat(topology1).isEqualTo(topology2);
//verify that ...binder.consumerProperties and ...binder.producerProperties work.
Map<String, Object> streamConfigGlobalProperties = (Map<String, Object>) context.getBean("streamConfigGlobalProperties");
assertThat(streamConfigGlobalProperties.get("request.timeout.ms")).isEqualTo("29000");
assertThat(streamConfigGlobalProperties.get("max.block.ms")).isEqualTo("90000");
}
}
@Test
public void testKstreamWordCountFunctionWithGeneratedApplicationId() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
}
}
@Test
public void testKstreamWordCountFunctionWithCustomProducerStreamPartitioner() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName" +
"=streamPartitioner",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-2");
template.sendDefault("foo");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"foo\",\"count\":1")).isTrue();
assertThat(cr.partition()).isEqualTo(0);
template.sendDefault("bar");
cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"bar\",\"count\":1")).isTrue();
assertThat(cr.partition()).isEqualTo(1);
}
finally {
pf.destroy();
}
}
}
private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic(in);
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, out);
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
finally {
pf.destroy();
}
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@Autowired
InteractiveQueryService interactiveQueryService;
@Bean
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(key.key(), new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}
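// StreamsBuilderFactoryBean customizer picked up by the binder; counting down the latch proves it was invoked.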
@Bean
public StreamsBuilderFactoryBeanCustomizer customizer() {
return fb -> {
try {
fb.setStateListener((newState, oldState) -> {
});
fb.getObject(); //make sure no exception is thrown at this call.
KafkaStreamsBinderWordCountFunctionTests.LATCH.countDown();
}
catch (Exception e) {
// Nothing to do; if an exception is thrown above, the latch is never counted down and the test fails.
}
};
}
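// Routes records keyed "foo" to partition 0 and everything else to partition 1; referenced via streamPartitionerBeanName in the test.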
@Bean
public StreamPartitioner<String, WordCount> streamPartitioner() {
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
}
}
}


@@ -0,0 +1,187 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.ProcessorSupplier;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsFunctionStateStoreTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testKafkaStreamsFunctionWithMultipleStateStores() throws Exception {
SpringApplication app = new SpringApplication(StateStoreTestApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process;hello",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.bindings.hello-in-0.destination=words",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=testKafkaStreamsFuncionWithMultipleStateStores-123",
"--spring.cloud.stream.kafka.streams.binder.functions.hello.applicationId=testKafkaStreamsFuncionWithMultipleStateStores-456",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault(1, "foobar");
Thread.sleep(2000L);
StateStoreTestApplication processorApplication = context
.getBean(StateStoreTestApplication.class);
KeyValueStore<Long, Long> state1 = processorApplication.state1;
assertThat(processorApplication.processed1).isTrue();
assertThat(state1 != null).isTrue();
assertThat(state1.name()).isEqualTo("my-store");
WindowStore<Long, Long> state2 = processorApplication.state2;
assertThat(state2 != null).isTrue();
assertThat(state2.name()).isEqualTo("other-store");
assertThat(state2.persistent()).isTrue();
KeyValueStore<Long, Long> state3 = processorApplication.state3;
assertThat(processorApplication.processed2).isTrue();
assertThat(state3 != null).isTrue();
assertThat(state3.name()).isEqualTo("my-store");
WindowStore<Long, Long> state4 = processorApplication.state4;
assertThat(state4 != null).isTrue();
assertThat(state4.name()).isEqualTo("other-store");
assertThat(state4.persistent()).isTrue();
}
finally {
pf.destroy();
}
}
@EnableAutoConfiguration
public static class StateStoreTestApplication {
KeyValueStore<Long, Long> state1;
WindowStore<Long, Long> state2;
KeyValueStore<Long, Long> state3;
WindowStore<Long, Long> state4;
boolean processed1;
boolean processed2;
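// Attaches the two state store beans defined below to the processor and captures references to them in init().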
@Bean
public java.util.function.BiConsumer<KStream<Object, String>, KStream<Object, String>> process() {
return (input0, input1) ->
input0.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
state1 = (KeyValueStore<Long, Long>) context.getStateStore("my-store");
state2 = (WindowStore<Long, Long>) context.getStateStore("other-store");
}
@Override
public void process(Object key, String value) {
processed1 = true;
}
@Override
public void close() {
}
}, "my-store", "other-store");
}
@Bean
public java.util.function.Consumer<KTable<Object, String>> hello() {
return input -> {
input.toStream().process(() -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
state3 = (KeyValueStore<Long, Long>) context.getStateStore("my-store");
state4 = (WindowStore<Long, Long>) context.getStateStore("other-store");
}
@Override
public void process(Object key, String value) {
processed2 = true;
}
@Override
public void close() {
}
}, "my-store", "other-store");
};
}
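// State store builders exposed as beans; the processors above attach them by name ("my-store" and "other-store").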
@Bean
public StoreBuilder myStore() {
return Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("my-store"), Serdes.Long(),
Serdes.Long());
}
@Bean
public StoreBuilder otherStore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("other-store",
Duration.ofSeconds(3), Duration.ofSeconds(3), false), Serdes.Long(),
Serdes.Long());
}
}
}

Some files were not shown because too many files have changed in this diff.