Compare commits


106 Commits

Author SHA1 Message Date
buildmaster
df8d151878 Update SNAPSHOT to 3.0.0.M1 2019-06-11 12:07:03 +00:00
Oleg Zhurakousky
d96dc8361b Bumped s-c-build to 2.2.0.M2 2019-06-11 13:47:29 +02:00
Oleg Zhurakousky
94be206651 Updated POMs for 3.0.0.M1 release 2019-06-10 14:49:49 +02:00
iguissouma
4ab6432f23 Use try-with-resources when creating AdminClient
Use try-with-resources when creating the AdminClient so that all associated resources are released.
Fixes gh-660
2019-06-04 09:37:35 -04:00
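A minimal sketch of the pattern this commit applies, assuming a broker at localhost:9092 (the class name and configuration here are illustrative, not taken from the commit):

```java
import java.util.Collections;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class AdminClientExample {

	public static void main(String[] args) throws Exception {
		// AdminClient implements AutoCloseable, so try-with-resources closes
		// its network threads and sockets even if the operation fails.
		try (AdminClient admin = AdminClient.create(Collections.<String, Object>singletonMap(
				AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
			System.out.println(admin.listTopics().names().get());
		}
	}

}
```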
Walliee
24b52809ed Switch back to use DefaultKafkaHeaderMapper
* update spring-kafka to v2.2.5
* use DefaultKafkaHeaderMapper instead of BinderHeaderMapper
* delete BinderHeaderMapper

resolves #652
2019-05-31 22:15:12 -04:00
Soby Chacko
7450d0731d Fixing ignored tests from upgrade 2019-05-28 15:02:06 -04:00
Oleg Zhurakousky
08b41f7396 Upgraded master to 3.0.0 2019-05-28 16:42:05 +02:00
Oleg Zhurakousky
e725a172ba Bumping version to 2.3.0-B-S for master 2019-05-20 06:33:35 -05:00
buildmaster
b7a3511375 Bumping versions to 2.2.1.BUILD-SNAPSHOT after release 2019-05-07 12:52:13 +00:00
buildmaster
d6c06286cd Going back to snapshots 2019-05-07 12:52:13 +00:00
buildmaster
321331103b Update SNAPSHOT to 2.2.0.RELEASE 2019-05-07 12:50:54 +00:00
Oleg Zhurakousky
f549ae069c Prep doc POM instructions for release 2019-05-07 13:38:19 +02:00
Oleg Zhurakousky
2cc60a744d Merge pull request #649 from sobychacko/gh-633
Adding docs for Kafka Streams functional support
2019-05-01 21:42:36 +02:00
Oleg Zhurakousky
2ddc837c1a polish
Resolves #648
2019-05-01 21:38:59 +02:00
Soby Chacko
816a4ec232 Kafka stream outbound content type polishing
Using Jackson to write the content type as bytes so that the outgoing
content-type string is properly JSON compatible.
2019-05-01 21:29:14 +02:00
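A rough sketch of the idea (not the binder's actual code): writing the header value with Jackson yields JSON-compatible bytes, that is, the quoted string `"application/json"` rather than a bare string.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class ContentTypeHeaderSketch {

	public static void main(String[] args) throws Exception {
		byte[] headerValue = new ObjectMapper().writeValueAsBytes("application/json");
		// Prints "application/json" including the surrounding quotes.
		System.out.println(new String(headerValue));
	}

}
```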
Soby Chacko
1a19eeabec Content type not carried on the outbound (Kafka Streams)
Fixing the issue of the content type not being propagated as a Kafka header
on the outbound during message serialization by the Kafka Streams binder.

For more info, see these links:

https://stackoverflow.com/questions/55796503/spring-cloud-stream-kafka-application-not-generating-messages-with-the-correct-a
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/456#issuecomment-439963419

Resolves #456
2019-05-01 21:29:14 +02:00
Soby Chacko
d386ff7923 Make KeyValueSerdeResolver public
Resolves #644
Resolves #645
2019-05-01 21:25:23 +02:00
Soby Chacko
9644f2dbbf Kafka Streams multiple function issues
Fixing a bug that occurs when multiple beans are defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636
2019-05-01 21:13:33 +02:00
Oleg Zhurakousky
498998f1a4 polishing
Resolves #643
2019-05-01 21:12:40 +02:00
Soby Chacko
5558f7f7fc Custom state stores with functional Kafka Streams
Introducing the ability to provide custom state stores as regular
Spring beans and use them in the functional model of Kafka Streams binding.

Resolves #642
2019-05-01 21:00:47 +02:00
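A minimal sketch of the mechanism this commit introduces, assuming the binder detects `StoreBuilder` beans and adds the store to the topology (the store name and serdes are illustrative):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StateStoreConfig {

	// Processors in the functional bindings can then look the store up by name.
	@Bean
	public StoreBuilder<?> myStore() {
		return Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("my-store"),
				Serdes.String(), Serdes.Long());
	}

}
```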
Oleg Zhurakousky
dec9ff696f polishing
Resolves #637
2019-05-01 20:26:52 +02:00
Soby Chacko
5014ab8fe1 Kafka Streams multiple function issues
Fixing a bug that occurs when multiple beans are defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636

Filter in only Kafka streams function beans
2019-05-01 20:26:52 +02:00
Sabby Anandan
3d3f02b6cc Switch to KafkaStreamsProcessor
It appears we have refactored to rename the class from `KStreamProcessor` to `KafkaStreamsProcessor` [see a5344655cb (diff-4a8582ee2d07e268f77a89c0633e42f5)], but the docs weren't updated. This commit does exactly that.
2019-04-30 13:23:44 -07:00
Soby Chacko
7790d4b196 Addressing PR review comments 2019-04-29 19:29:46 -04:00
Soby Chacko
602aed8a4d Adding docs for Kafka Streams functional support
Resolves #633
2019-04-27 20:46:27 -04:00
Guillaume Lemont
c1f4cf9dc6 Fix message listener container bean naming
With multiplexed topics, the topics value can be an array. Since the message listener
container bean name is created by calling toString() on the topics, an array produces
an unusual bean name. Fix the issue by creating the bean name from the destination
string directly.
2019-04-26 15:11:20 -04:00
buildmaster
c990140452 Going back to snapshots 2019-04-09 18:10:36 +00:00
buildmaster
383add504a Update SNAPSHOT to 2.2.0.RC1 2019-04-09 18:09:51 +00:00
Oleg Zhurakousky
5f9395a5ec Prepared docs for RC1 2019-04-09 19:23:29 +02:00
Soby Chacko
70eb25d413 Fix failing test 2019-04-09 13:02:57 -04:00
Oleg Zhurakousky
6bc74c6e5c Doc changes related to Rabbit's 'GH-193 Added note on default properties' 2019-04-09 18:56:15 +02:00
Soby Chacko
33603c62f0 Transactional binder producer factory
With a transactional binder, the producer factory should not be destroyed.

Resolves #626
2019-04-08 15:27:06 -04:00
Anshul Mehra
efd46835a1 GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration (#562)
* GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration

Resolves #525

* Update warning messages to be more explicit
2019-04-08 12:44:52 -04:00
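A rough sketch of the merge-time filtering described above (a hypothetical helper, not the binder's code): user-supplied values for these two keys are dropped because the binder manages offset commits itself and derives the group id from the binding's group.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class MergedConsumerConfigSketch {

	static Map<String, Object> merge(Map<String, Object> binderProps, Map<String, Object> bindingProps) {
		Map<String, Object> merged = new HashMap<>(binderProps);
		merged.putAll(bindingProps);
		merged.remove(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG); // "enable.auto.commit"
		merged.remove(ConsumerConfig.GROUP_ID_CONFIG); // "group.id"
		return merged;
	}

}
```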
Oleg Zhurakousky
3a267bc751 Merge pull request #620 from sobychacko/gh-589
Bean name conflicts (Kafka Streams binder)
2019-03-28 18:33:01 +01:00
Soby Chacko
62e98df0c7 Bean name conflicts (Kafka Streams binder)
When two processors with the same name are present in the same application,
there is a bean creation conflict. Fixing that issue.

Add test to verify.
Modify existing tests.

Resolves #589
2019-03-27 16:30:22 -04:00
Matthieu Ghilain
9e156911b4 Fixing typo in documentation
Resolves #619
2019-03-26 18:49:49 +01:00
Oleg Zhurakousky
cd28454818 Updated docs pom back to snapshot 2019-03-25 18:46:05 +01:00
Oleg Zhurakousky
172d469faa Updated spring-doc-resources.version 2019-03-25 16:46:25 +01:00
Oleg Zhurakousky
74cb25a56a Prepared docs POM for M1 2019-03-25 15:39:26 +01:00
Oleg Zhurakousky
7dd4b66f58 Upgraded to s-c-build 2.1.4 2019-03-25 15:30:05 +01:00
Oleg Zhurakousky
908fb77a88 Merge pull request #586 from spring-operator/polish-urls-apache-license-master
URL Cleanup
2019-03-25 14:55:11 +01:00
Soby Chacko
5660c7cf76 Listened partitions assertions
Avoid unnecessary assertions on listened partitions when
auto-rebalancing is enabled but no listened partitions are found.

Polish concurrency assignment when listened partitions are empty

polishing the provisioner

Resolves #512
Resolves #587
2019-03-25 13:40:35 +01:00
Spring Operator
f8c1fb45a6 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://www.apache.org/licenses/ with 1 occurrences migrated to:
  https://www.apache.org/licenses/ ([https](https://www.apache.org/licenses/) result 200).
* [ ] http://www.apache.org/licenses/LICENSE-2.0 with 105 occurrences migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
2019-03-21 13:24:11 -05:00
Oleg Zhurakousky
73d3d79651 Minor cleanup and refactoring after the previous commit
Removed KafkaStreamsFunctionProperties in favor of the StreamFunctionProperties provided by the core module

Resolves #537
2019-03-21 17:06:27 +01:00
Soby Chacko
792705d304 Fixing merge conflicts
Addressing checkstyle issues
Test fixes
Work around for Kafka Streams functions unnecessarily getting wrapped inside a FluxFunction.
2019-03-20 17:25:35 -04:00
Soby Chacko
0a48999e3a Initial implementation for writing Kafka Streams applications using
a functional programming model. Simple Kafka Streams processors can
be written using java.util.function.Function or java.util.function.Consumer with this
approach.

For example, beans can be defined like the following:

@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
...
}
@Bean
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process() {
...
}
@Bean
public Consumer<KStream<String,Long>> process(){
...
}
etc.

Adding tests to verify the new programming model.

Resolves #428
2019-03-20 17:25:35 -04:00
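A complete (if trivial) sketch of the model described above, assuming the binder binds the function's input and output KStreams to the configured destinations:

```java
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UppercaseProcessor {

	// The lambda is the whole processor: one input stream in, one output stream out.
	@Bean
	public Function<KStream<Object, String>, KStream<Object, String>> process() {
		return input -> input.mapValues(value -> value.toUpperCase());
	}

}
```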
Spring Operator
b7b93c1352 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed But Review Recommended
These URLs were fixed, but the https status was not OK. However, the https status was the same as the http request or http redirected to an https URL, so they were migrated. Your review is recommended.

* [ ] http://compose.docker.io/ (UnknownHostException) with 2 occurrences migrated to:
  https://compose.docker.io/ ([https](https://compose.docker.io/) result UnknownHostException).

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://docs.confluent.io/2.0.0/kafka/security.html with 2 occurrences migrated to:
  https://docs.confluent.io/2.0.0/kafka/security.html ([https](https://docs.confluent.io/2.0.0/kafka/security.html) result 200).
* [ ] http://github.com/ with 3 occurrences migrated to:
  https://github.com/ ([https](https://github.com/) result 200).
* [ ] http://kafka.apache.org/090/documentation.html with 4 occurrences migrated to:
  https://kafka.apache.org/090/documentation.html ([https](https://kafka.apache.org/090/documentation.html) result 200).
* [ ] http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html with 2 occurrences migrated to:
  https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html ([https](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html) result 200).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ with 1 occurrences migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/) result 301).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ with 1 occurrences migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/) result 301).
* [ ] http://docs.spring.io/spring-kafka/reference/html/_reference.html with 1 occurrences migrated to:
  https://docs.spring.io/spring-kafka/reference/html/_reference.html ([https](https://docs.spring.io/spring-kafka/reference/html/_reference.html) result 301).
* [ ] http://plugins.jetbrains.com/plugin/6546 with 2 occurrences migrated to:
  https://plugins.jetbrains.com/plugin/6546 ([https](https://plugins.jetbrains.com/plugin/6546) result 301).
* [ ] http://raw.github.com/ with 1 occurrences migrated to:
  https://raw.github.com/ ([https](https://raw.github.com/) result 301).
* [ ] http://eclipse.org with 2 occurrences migrated to:
  https://eclipse.org ([https](https://eclipse.org) result 302).
* [ ] http://eclipse.org/m2e/ with 4 occurrences migrated to:
  https://eclipse.org/m2e/ ([https](https://eclipse.org/m2e/) result 302).
* [ ] http://www.springsource.com/developer/sts with 2 occurrences migrated to:
  https://www.springsource.com/developer/sts ([https](https://www.springsource.com/developer/sts) result 302).
2019-03-20 17:25:12 -04:00
Spring Operator
718cb44de7 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* http://cloud.spring.io/ with 1 occurrences migrated to:
  https://cloud.spring.io/ ([https](https://cloud.spring.io/) result 200).
* http://cloud.spring.io/spring-cloud-static/ with 1 occurrences migrated to:
  https://cloud.spring.io/spring-cloud-static/ ([https](https://cloud.spring.io/spring-cloud-static/) result 200).
* http://maven.apache.org/xsd/maven-4.0.0.xsd with 6 occurrences migrated to:
  https://maven.apache.org/xsd/maven-4.0.0.xsd ([https](https://maven.apache.org/xsd/maven-4.0.0.xsd) result 200).
* http://repo.spring.io/libs-milestone-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* http://repo.spring.io/libs-snapshot-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* http://repo.spring.io/release with 1 occurrences migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0 with 12 occurrences
* http://www.w3.org/2001/XMLSchema-instance with 6 occurrences
2019-03-20 09:47:51 -04:00
Soby Chacko
de78bfa00b Remove unused setter
Removing an integer version of required acks from KafkaBinderConfigurationProperties

Resolves #558
2019-03-15 15:02:46 -04:00
Oleg Zhurakousky
27bb33f9e3 adjusting for home.html 2019-03-14 20:25:46 +01:00
Oleg Zhurakousky
ecd8cc587c polishing docs 2019-03-14 20:14:02 +01:00
Oleg Zhurakousky
f506da758f Upgraded doc resources version 2019-03-14 19:45:47 +01:00
Oleg Zhurakousky
12a528fd88 Polishing docs styles 2019-03-14 19:19:42 +01:00
Spring Operator
4b138c4a2f URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* http://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch migrated to:
  https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch ([https](https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch) result 200).
* http://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin migrated to:
  https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin ([https](https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin) result 200).
* http://www.apache.org/licenses/LICENSE-2.0 migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
* http://projects.spring.io/spring-cloud migrated to:
  https://projects.spring.io/spring-cloud ([https](https://projects.spring.io/spring-cloud) result 301).
* http://www.spring.io migrated to:
  https://www.spring.io ([https](https://www.spring.io) result 301).
* http://repo.spring.io/libs-milestone-local migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* http://repo.spring.io/libs-release-local migrated to:
  https://repo.spring.io/libs-release-local ([https](https://repo.spring.io/libs-release-local) result 302).
* http://repo.spring.io/libs-snapshot-local migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* http://repo.spring.io/release migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0
* http://maven.apache.org/xsd/maven-4.0.0.xsd
* http://www.w3.org/2001/XMLSchema-instance
2019-03-13 18:38:53 -04:00
Oleg Zhurakousky
0af784aaa9 GH-559 Initial migration of docs to new style
Resolves #559
2019-03-13 16:40:45 +01:00
Soby Chacko
0deb8754bf Remove producer partitioned property
On the producer side, partitioned is a derived property, and applications do not
have to set it explicitly. Remove any explicit references to it from the docs.

Fixes #542
2019-03-05 13:49:23 -05:00
Soby Chacko
95a4681d27 KTable input validation
KTable as an input binding doesn't work if `@Input` is not specified at the parameter level.
Restructuring the input validation in the Kafka Streams binder where it checks for declarative
inputs.

Fixes #536
2019-03-04 16:20:18 -05:00
Soby Chacko
b4a2950acd KafkaStreamsStateStore with multiple input bindings
When the KafkaStreamsStateStore annotation is used on a method with multiple input bindings,
an exception is thrown. The reason is that each successive input binding after the first one
tries to recreate a store that has already been created. Fixing this issue.

Resolves #551
2019-02-28 16:31:38 -05:00
Gary Russell
c5c81f8148 SGH-1616: Add MessageSourceCustomizer
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1616
2019-02-28 13:44:14 -05:00
Arnaud Jardiné
d80e66d9b8 Actuator health for Kafka-streams binder
Add documentation about health indicator

Fix failing tests + add tests for multiple Kafka streams

Polishing

Resolves #544
2019-02-14 16:26:56 -05:00
Soby Chacko
86c9704ef4 Fix metrics with multi binders
Kafka binder metrics are broken with a multi-binder configuration.
Fixing the issue by propagating the MeterRegistry bean into the
binder context from the parent.

Adding test to verify.

Removing the formatter plugin from the parent pom.

Resolves #546
Resolves #549
2019-02-14 09:25:11 +01:00
Gary Russell
63a4acda1b Fix test for SK 2.2.4
Override deserializer getters in test consumer factory because the
defaults incorrectly throw an `UnsupportedOperationException`.

See https://github.com/spring-projects/spring-kafka/pull/963
2019-02-13 16:28:13 -05:00
Gary Russell
31a6f5d7e3 GH-531: Fix test
- resolve conflict with another test that used `EnableBinding(Sink.class)`.
2019-02-07 12:44:32 -05:00
Aldo Sinanaj
4d02c38f70 GH-531: Support tombstones on input/output
GH-531: Set test timeout to 10 seconds

GH-531: Polishing - Fix @StreamListener

Added test for `@StreamListener`.
2019-02-07 09:01:54 -05:00
Oleg Zhurakousky
c22c08e259 GH-1601 satellite changes to the core 2019-02-05 07:08:48 +01:00
Soby Chacko
02e1fec3b4 Checkstyle upgrade
Upgrade checkstyle plugin to use the rules from spring-cloud-build.
Fix all the new checkstyle errors from the new rules.

Resolves #540
2019-02-04 18:11:10 -05:00
Soby Chacko
fb91b2aff9 Temporarily disabling the checkstyle plugin 2019-02-04 12:27:29 -05:00
Oleg Zhurakousky
a881b9a54a Created 2.2.x branch 2019-02-04 10:46:24 +01:00
Aldo Sinanaj
fc92a6b0af GH-389: Topic props: Use .topic instead of .admin
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/389

GH-389: Deprecated [producer/consumer].admin in favor of topic property

GH-389: Fixed comments as suggest by Gary

Polishing - 2 more deprecation warning suppressions
2019-01-29 15:16:38 -05:00
Gary Russell
d65e8ff59d GH-529: Add lz4 and docs for zstd compression
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/529

- Also log exception when getting partition information
2019-01-16 12:49:12 -05:00
buildmaster
9948674f30 Bumping versions to 2.1.1.BUILD-SNAPSHOT after release 2019-01-08 11:53:04 +00:00
buildmaster
cfc1e0e212 Going back to snapshots 2019-01-08 11:53:04 +00:00
buildmaster
4bd206f37f Update SNAPSHOT to 2.1.0.RELEASE 2019-01-08 11:51:00 +00:00
Oleg Zhurakousky
9d1d90e608 Upgraded to spring-kafka 2.2.2 2019-01-07 16:25:23 +01:00
Soby Chacko
0b8a760b5a Fix NPE in Kafka Streams binder
Fixing an NPE when the type for the binder is not explicitly provided in the configuration.

Resolves #516
Resolves #524
Polishing
2019-01-03 20:26:10 +01:00
Gary Russell
d63f9e5fa6 GH-521: Fix pollable source client id
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/521

Previously, all pollable message sources got the client id `message.source`.
MBean registration failed with a warning when multiple pollable sources were present.

Use the binding name as the client id by default, overridable using the `client.id`
consumer property.

**cherry-pick to 2.0.x**

Resolves #523
2019-01-02 20:09:49 +01:00
buildmaster
c51d7e0613 Going back to snapshots 2018-12-20 19:44:59 +00:00
buildmaster
7093551a86 Update SNAPSHOT to 2.1.0.RC4 2018-12-20 19:44:23 +00:00
Oleg Zhurakousky
e9f24b00c8 Adjusting for the type conversion fixes in GH-1564 core 2018-12-18 09:36:28 +01:00
buildmaster
ed1589f957 Going back to snapshots 2018-12-13 20:26:31 +00:00
buildmaster
c5f72250b5 Update SNAPSHOT to 2.1.0.RC3 2018-12-13 20:25:50 +00:00
buildmaster
0dc871aff0 Bumping versions 2018-12-13 16:51:07 +00:00
Soby Chacko
67bcd0749a Polishing 2018-12-12 21:34:22 -05:00
Soby Chacko
6001136ec0 Handle errors for non-existing topics (#514)
* Handle errors for non-existing topics

When topic creation is disabled both on the binder and the broker,
the binder currently throws an NPE. Catching this situation and
throwing a more graceful error to the user.

Adding tests to verify.

Resolves #513

* Addressing PR review comments
2018-12-12 12:44:42 -05:00
Arnaud
e93904c238 Add logging for deserialization failures. 2018-12-12 10:44:11 -05:00
Soby Chacko
43c8f02bff User header mapper with MimeTypeJsonDeserializer
Temporarily provide a custom HeaderMapper as part of the binder that is
copied from Spring Kafka so that we can preserve backward compatibility
with older producers. When older producers send non-String types,
the header mapper in Spring Kafka treats them as MimeType. This change
uses a HeaderMapper that reinstates the MimeTypeJsonDeserializer.
When we can consume the Spring Kafka version that provides the HeaderMapper
with this fix in it, we will remove this custom version.

Resolves #509
2018-11-30 10:57:41 -05:00
Soby Chacko
64ea989b08 Kafka Streams environment properties changes
When the Kafka Streams binder is used in multi-binder environments, the properties defined under environment
are not propagated to the auto-configuration class. The environment processing only takes place when the
actual binder configuration is instantiated (for example, KStreamConfiguration), and therefore the environment
properties are unavailable during the earlier auto-configuration. This change makes the environment properties
available during auto-configuration.

Resolves #504
2018-11-28 14:25:15 -05:00
Soby Chacko
525adef3a6 Fixing checkstyle warnings 2018-11-27 16:57:03 -05:00
Soby Chacko
1866feb75f Enable KafkaStreams bootstrap tests
Remove the usage of `ImportBeanDefinitionRegistrar` in the Kafka Streams binder
components, since the regular use of getBean from the outer context is safe.

Unignore tests

Resolves #501

* Addressing PR review comments
2018-11-27 16:54:31 -05:00
Gary Russell
bf1c366d9b GH-502: Add test for native partitioning
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/502

Before applying the fix for,

https://github.com/spring-cloud/spring-cloud-stream/issues/1531

failed with:

```
org.springframework.messaging.MessageDeliveryException: failed to send Message to channel 'test.output'; nested exception is java.lang.IllegalArgumentException: Partition key cannot be null, failedMessage=GenericMessage [payload=byte[3], headers={kafka_partitionId=5, id=3350a823-c876-f7a9-f98b-fdbd2aaa4c12, timestamp=1542823925354}]
...
Caused by: java.lang.IllegalArgumentException: Partition key cannot be null
	at org.springframework.util.Assert.notNull(Assert.java:198)
	at org.springframework.cloud.stream.binder.PartitionHandler.extractKey(PartitionHandler.java:112)
	at org.springframework.cloud.stream.binder.PartitionHandler.determinePartition(PartitionHandler.java:93)
	at org.springframework.cloud.stream.binding.MessageConverterConfigurer$PartitioningInterceptor.preSend(MessageConverterConfigurer.java:381)
	at org.springframework.integration.channel.AbstractMessageChannel$ChannelInterceptorList.preSend(AbstractMessageChannel.java:589)
	at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:435)
	... 31 more

```

Resolves #503
2018-11-23 11:44:19 +01:00
Oleg Zhurakousky
81f4a861c5 Fixed Kafka Stream binder docs 2018-11-19 16:30:35 +01:00
buildmaster
662aa71159 Going back to snapshots 2018-11-19 14:49:11 +00:00
buildmaster
8683604004 Update SNAPSHOT to 2.1.0.RC2 2018-11-19 14:48:35 +00:00
Soby Chacko
7232839cdf Ignore JAAS tests as they randomly fail on CI 2018-11-17 12:13:29 -05:00
Oleg Zhurakousky
47443f8b70 Fixed tests related to GH-1527 change in core 2018-11-15 19:33:20 +01:00
Soby Chacko
b589f32933 Checkstyle fixes
Reuse checkstyle components from the core spring-cloud-stream-tools module.
Addressing checkstyle warnings in the Kafka Streams binder module.

Resolves #489
2018-11-09 18:11:51 -05:00
Soby Chacko
20af89bc00 Fixing health indicator issues
During startup, if Kafka is down, the binder health indicator check
erroneously reports that Kafka is up. Fixing this issue.

Resolves #495
2018-11-09 09:22:26 -05:00
Soby Chacko
0b50a6ce2f Address checkstyle warnings
Addressing checkstyle warnings in spring-cloud-stream-binder-kafka.

Resolves #487
2018-11-08 10:42:26 -05:00
Gary Russell
3a396c6d37 GH-491: Fix TODO in binder for initial seeking
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/491

Use `seekToBeginning/End` instead of `0` and `Long.MAX_VALUE`.
2018-11-07 16:39:33 -05:00
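A sketch of the Consumer API calls the commit switches to (hypothetical helper for illustration): the consumer resolves the real earliest/latest offsets itself instead of the binder seeking to the magic values `0` and `Long.MAX_VALUE`.

```java
import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class SeekSketch {

	static void seek(Consumer<?, ?> consumer, Collection<TopicPartition> partitions, boolean toBeginning) {
		if (toBeginning) {
			consumer.seekToBeginning(partitions);
		}
		else {
			consumer.seekToEnd(partitions);
		}
	}

}
```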
rahulbats
3e39514003 Add log message for deserialization error handler
Add missing logging statement for the logAndContinue
deserialization error handler in Kafka Streams binder.

Polishing.

Resolves #494
2018-11-07 16:29:25 -05:00
Gary Russell
2144d3e26f GH-453: Add binding rebalance listener
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/453

Fix the initial value of the `initial` variable
2018-11-07 14:54:02 -05:00
Soby Chacko
2d4e9a113b Use TopicExistsException in provisioner
Instead of checking for the text TopicExistsException in the exception
message, use a strongly typed check for TopicExistsException via instanceof on
the cause of the exception.

Adding test to verify.

Resolves #209
2018-11-05 16:20:47 -05:00
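A sketch of the strongly typed check, assuming the provisioning failure surfaces as an `ExecutionException` from the admin client future (as is typical):

```java
import java.util.concurrent.ExecutionException;

import org.apache.kafka.common.errors.TopicExistsException;

public class ProvisioningSketch {

	// Replaces matching on the exception message text.
	static boolean topicAlreadyExists(ExecutionException e) {
		return e.getCause() instanceof TopicExistsException;
	}

}
```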
Soby Chacko
84249727ae Fix boolean expression in resetOffsets
Resolves #481
Resolves #485
2018-11-05 17:43:44 +01:00
Soby Chacko
2fc158003f Address checkstyle issues in the core module
Address checkstyle errors and warnings in spring-cloud-stream-binder-kafka-core.
Remove a duplicate dependency declaration from parent pom.

Resolves #483
Resolves #484
2018-11-05 17:41:39 +01:00
Soby Chacko
9c20bb4cdc Remove unnecessary auto config import.
Remove the superfluous import of PropertyPlaceholderAutoConfiguration
in KafkaBinderConfiguration.

Resolves #211
2018-11-02 15:02:32 -04:00
Oleg Zhurakousky
8dbed6b877 Going back to snapshots 2018-10-30 15:01:36 +01:00
129 changed files with 10077 additions and 3557 deletions

110
.mvn/wrapper/MavenWrapperDownloader.java vendored Executable file

@@ -0,0 +1,110 @@
/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/
import java.net.*;
import java.io.*;
import java.nio.channels.*;
import java.util.Properties;

public class MavenWrapperDownloader {

    /**
     * Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided.
     */
    private static final String DEFAULT_DOWNLOAD_URL =
            "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar";

    /**
     * Path to the maven-wrapper.properties file, which might contain a downloadUrl property to
     * use instead of the default one.
     */
    private static final String MAVEN_WRAPPER_PROPERTIES_PATH =
            ".mvn/wrapper/maven-wrapper.properties";

    /**
     * Path where the maven-wrapper.jar will be saved to.
     */
    private static final String MAVEN_WRAPPER_JAR_PATH =
            ".mvn/wrapper/maven-wrapper.jar";

    /**
     * Name of the property which should be used to override the default download url for the wrapper.
     */
    private static final String PROPERTY_NAME_WRAPPER_URL = "wrapperUrl";

    public static void main(String args[]) {
        System.out.println("- Downloader started");
        File baseDirectory = new File(args[0]);
        System.out.println("- Using base directory: " + baseDirectory.getAbsolutePath());

        // If the maven-wrapper.properties exists, read it and check if it contains a custom
        // wrapperUrl parameter.
        File mavenWrapperPropertyFile = new File(baseDirectory, MAVEN_WRAPPER_PROPERTIES_PATH);
        String url = DEFAULT_DOWNLOAD_URL;
        if(mavenWrapperPropertyFile.exists()) {
            FileInputStream mavenWrapperPropertyFileInputStream = null;
            try {
                mavenWrapperPropertyFileInputStream = new FileInputStream(mavenWrapperPropertyFile);
                Properties mavenWrapperProperties = new Properties();
                mavenWrapperProperties.load(mavenWrapperPropertyFileInputStream);
                url = mavenWrapperProperties.getProperty(PROPERTY_NAME_WRAPPER_URL, url);
            } catch (IOException e) {
                System.out.println("- ERROR loading '" + MAVEN_WRAPPER_PROPERTIES_PATH + "'");
            } finally {
                try {
                    if(mavenWrapperPropertyFileInputStream != null) {
                        mavenWrapperPropertyFileInputStream.close();
                    }
                } catch (IOException e) {
                    // Ignore ...
                }
            }
        }
        System.out.println("- Downloading from: : " + url);

        File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH);
        if(!outputFile.getParentFile().exists()) {
            if(!outputFile.getParentFile().mkdirs()) {
                System.out.println(
                        "- ERROR creating output direcrory '" + outputFile.getParentFile().getAbsolutePath() + "'");
            }
        }
        System.out.println("- Downloading to: " + outputFile.getAbsolutePath());
        try {
            downloadFileFromURL(url, outputFile);
            System.out.println("Done");
            System.exit(0);
        } catch (Throwable e) {
            System.out.println("- Error downloading");
            e.printStackTrace();
            System.exit(1);
        }
    }

    private static void downloadFileFromURL(String urlString, File destination) throws Exception {
        URL website = new URL(urlString);
        ReadableByteChannel rbc;
        rbc = Channels.newChannel(website.openStream());
        FileOutputStream fos = new FileOutputStream(destination);
        fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
        fos.close();
        rbc.close();
    }
}

BIN
.mvn/wrapper/maven-wrapper.jar vendored Normal file → Executable file

Binary file not shown.

2
.mvn/wrapper/maven-wrapper.properties vendored Normal file → Executable file

@@ -1 +1 @@
distributionUrl=https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.3.3/apache-maven-3.3.3-bin.zip
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.4/apache-maven-3.5.4-bin.zip


@@ -21,7 +21,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -29,7 +29,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -37,7 +37,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/release</url>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -47,7 +47,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -55,7 +55,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>


@@ -1,6 +1,6 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
@@ -192,7 +192,7 @@
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,4 +1,8 @@
// Do not edit this file (e.g. go instead to src/main/asciidoc)
////
DO NOT EDIT THIS FILE. IT WAS GENERATED.
Manual changes to this file will be lost when it is generated again.
Edit the files in the src/main/asciidoc/ directory instead.
////
:jdkversion: 1.8
:github-tag: master
@@ -21,7 +25,9 @@ It contains information about its design, usage, and configuration options, as w
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Usage
== Apache Kafka Binder
=== Usage
To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
@@ -43,7 +49,7 @@ Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown
</dependency>
----
== Apache Kafka Binder Overview
=== Overview
The following image shows a simplified diagram of how the Apache Kafka binder operates:
@@ -59,13 +65,13 @@ This client can communicate with older brokers (see the Kafka documentation), bu
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
== Configuration Options
=== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
=== Kafka Binder Properties
==== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
@@ -126,7 +132,7 @@ If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.topic.create.enable` setting of the broker and does not influence it.
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
@@ -154,28 +160,22 @@ Use this, for example, if you wish to customize the trusted packages in a `Defau
Default: none.
[[kafka-consumer-properties]]
=== Kafka Consumer Properties
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
@@ -210,6 +210,7 @@ If not set (the default), it effectively has the same value as `enableDlq`, auto
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaBindingRebalanceListener` is provided; see <<rebalance-listener>>.
+
Default: `false`.
startOffset::
@@ -267,30 +268,39 @@ Note, the time taken to detect new topics that match the pattern is controlled b
This can be configured using the `configuration` property above.
+
Default: `false`
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
[[kafka-producer-properties]]
=== Kafka Producer Properties
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See `NewTopic` javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning new topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
@@ -323,6 +333,21 @@ configuration::
Map with a key/value pair containing generic Kafka producer properties.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -330,11 +355,18 @@ If a topic already exists with a smaller partition count and `autoAddPartitions`
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
=== Usage examples
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy` and `lz4`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
==== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
@@ -362,10 +394,10 @@ public class ManuallyAcknowdledgingConsumer {
}
----
==== Example: Security Configuration
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
@@ -377,11 +409,11 @@ spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the http://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
===== Using JAAS Configuration Files
====== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
@@ -394,7 +426,7 @@ The following example shows how to launch a Spring Cloud Stream application with
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
===== Using Spring Boot Properties
====== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
@@ -451,7 +483,7 @@ Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
==== Example: Pausing and Resuming the Consumer
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
@@ -491,7 +523,7 @@ public class Application {
----
[[kafka-error-channels]]
== Error Channels
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
@@ -505,7 +537,7 @@ There is no automatic handling of producer exceptions (such as sending to a <<ka
You can consume these exceptions with your own Spring Integration flow.
[[kafka-metrics]]
== Kafka Metrics
=== Kafka Metrics
Kafka binder module exposes the following metrics:
@@ -514,7 +546,7 @@ The metrics provided are based on the Micrometer metrics library. The metric con
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
[[kafka-tombstones]]
== Tombstone Records (null record values)
=== Tombstone Records (null record values)
When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
@@ -531,6 +563,57 @@ public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
----
====
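A fuller sketch of such a listener; the `Customer` payload type is illustrative, and `@Payload(required = false)` is what allows the `null` tombstone value through:

====
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
		@Payload(required = false) Customer customer) {
	if (customer == null) {
		// tombstone record: the entry for this key was deleted
	}
}
----
====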
[[rebalance-listener]]
=== Using a KafkaBindingRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
----
public interface KafkaBindingRebalanceListener {

	/**
	 * Invoked by the container before any pending offsets are committed.
	 * @param bindingName the name of the binding.
	 * @param consumer the consumer.
	 * @param partitions the partitions.
	 */
	default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
			Collection<TopicPartition> partitions) {
	}

	/**
	 * Invoked by the container after any pending offsets are committed.
	 * @param bindingName the name of the binding.
	 * @param consumer the consumer.
	 * @param partitions the partitions.
	 */
	default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer,
			Collection<TopicPartition> partitions) {
	}

	/**
	 * Invoked when partitions are initially assigned or after a rebalance.
	 * Applications might only want to perform seek operations on an initial assignment.
	 * @param bindingName the name of the binding.
	 * @param consumer the consumer.
	 * @param partitions the partitions.
	 * @param initial true if this is the initial assignment.
	 */
	default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
			Collection<TopicPartition> partitions, boolean initial) {
	}

}
----
====
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
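A minimal sketch of supplying the bean, assuming you only want to seek to the beginning on the initial assignment:

====
[source, java]
----
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
	return new KafkaBindingRebalanceListener() {

		@Override
		public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
				Collection<TopicPartition> partitions, boolean initial) {
			if (initial) {
				consumer.seekToBeginning(partitions);
			}
		}

	};
}
----
====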
= Appendices
[appendix]
[[building]]
@@ -569,7 +652,7 @@ source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
http://compose.docker.io/[Docker Compose] to run the middleware servers
https://compose.docker.io/[Docker Compose] to run the middleware servers
in Docker containers.
=== Documentation
@@ -578,13 +661,13 @@ There is a "full" profile that will generate documentation.
=== Working with the code
If you don't have an IDE preference we would recommend that you use
http://www.springsource.com/developer/sts[Spring Tools Suite] or
http://eclipse.org[Eclipse] when working with the code. We use the
http://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
https://www.springsource.com/developer/sts[Spring Tools Suite] or
https://eclipse.org[Eclipse] when working with the code. We use the
https://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
should also work without issue.
==== Importing into eclipse with m2eclipse
We recommend the http://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
We recommend the https://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
marketplace".
@@ -637,7 +720,7 @@ added after the original pull request but before a merge.
`eclipse-code-formatter.xml` file from the
https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
Cloud Build] project. If using IntelliJ, you can use the
http://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
Plugin] to import the same file.
* Make sure all new `.java` files have a simple Javadoc class comment with at least an
`@author` tag identifying you, and preferably at least a paragraph on what the class is
@@ -650,7 +733,7 @@ added after the original pull request but before a merge.
* A few unit tests would help a lot as well -- someone has to do it.
* If no-one else is using your branch, please rebase it against the current master (or
other target branch in the main project).
* When writing a commit message please follow http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).


@@ -1,13 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-docs</artifactId>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC1</version>
<version>3.0.0.M1</version>
</parent>
<packaging>pom</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>
@@ -15,8 +15,11 @@
<properties>
<docs.main>spring-cloud-stream-binder-kafka</docs.main>
<main.basedir>${basedir}/..</main.basedir>
<spring-doc-resources.version>0.1.1.RELEASE</spring-doc-resources.version>
<spring-asciidoctor-extensions.version>0.1.0.RELEASE
</spring-asciidoctor-extensions.version>
<asciidoctorj-pdf.version>1.5.0-alpha.16</asciidoctorj-pdf.version>
</properties>
<profiles>
<profile>
<id>docs</id>
@@ -25,21 +28,294 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>${maven-dependency-plugin.version}</version>
<inherited>false</inherited>
</plugin>
<plugin>
<groupId>com.agilejava.docbkx</groupId>
<artifactId>docbkx-maven-plugin</artifactId>
<executions>
<execution>
<id>unpack-docs</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>org.springframework.cloud
</groupId>
<artifactId>spring-cloud-build-docs
</artifactId>
<version>${spring-cloud-build.version}
</version>
<classifier>sources</classifier>
<type>jar</type>
<overWrite>false</overWrite>
<outputDirectory>${docs.resources.dir}
</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
<execution>
<id>unpack-docs-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>io.spring.docresources</groupId>
<artifactId>spring-doc-resources</artifactId>
<version>${spring-doc-resources.version}</version>
<type>zip</type>
<overWrite>true</overWrite>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<inherited>false</inherited>
<artifactId>maven-resources-plugin</artifactId>
<executions>
<execution>
<id>copy-asciidoc-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
<resources>
<resource>
<directory>src/main/asciidoc</directory>
<filtering>false</filtering>
<excludes>
<exclude>ghpages.sh</exclude>
</excludes>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>${asciidoctor-maven-plugin.version}</version>
<inherited>false</inherited>
<dependencies>
<dependency>
<groupId>io.spring.asciidoctor</groupId>
<artifactId>spring-asciidoctor-extensions</artifactId>
<version>${spring-asciidoctor-extensions.version}</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj-pdf</artifactId>
<version>${asciidoctorj-pdf.version}</version>
</dependency>
</dependencies>
<configuration>
<sourceDirectory>${project.build.directory}/refdocs/</sourceDirectory>
<sourceDocumentName>${docs.main}.adoc</sourceDocumentName>
<attributes>
<spring-cloud-stream-version>${project.version}</spring-cloud-stream-version>
<!-- <docs-url>https://cloud.spring.io/</docs-url> -->
<!-- <docs-version></docs-version> -->
<docs-version>${project.version}/</docs-version>
<docs-url>https://cloud.spring.io/spring-cloud-static/</docs-url>
</attributes>
</configuration>
<executions>
<execution>
<id>generate-html-documentation</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<backend>html5</backend>
<sourceHighlighter>highlight.js</sourceHighlighter>
<doctype>book</doctype>
<attributes>
// these attributes are required to use the doc resources
<docinfo>shared</docinfo>
<stylesdir>css/</stylesdir>
<stylesheet>spring.css</stylesheet>
<linkcss>true</linkcss>
<icons>font</icons>
<highlightjsdir>js/highlight</highlightjsdir>
<highlightjs-theme>atom-one-dark-reasonable</highlightjs-theme>
<allow-uri-read>true</allow-uri-read>
<nofooter />
<toc>left</toc>
<toc-levels>4</toc-levels>
<spring-cloud-version>${project.version}</spring-cloud-version>
<sectlinks>true</sectlinks>
</attributes>
<outputFile>${docs.main}.html</outputFile>
</configuration>
</execution>
<execution>
<id>generate-docbook</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
<execution>
<id>generate-index</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>${maven-antrun-plugin.version}</version>
<dependencies>
<dependency>
<groupId>ant-contrib</groupId>
<artifactId>ant-contrib</artifactId>
<version>1.0b3</version>
<exclusions>
<exclusion>
<groupId>ant</groupId>
<artifactId>ant</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.ant</groupId>
<artifactId>ant-nodeps</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>org.tigris.antelope</groupId>
<artifactId>antelopetasks</artifactId>
<version>3.2.10</version>
</dependency>
<dependency>
<groupId>org.jruby</groupId>
<artifactId>jruby-complete</artifactId>
<version>1.7.17</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj</artifactId>
<version>1.5.8</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>readme</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<java classname="org.jruby.Main" failonerror="yes">
<arg
value="${docs.resources.dir}/ruby/generate_readme.sh" />
<arg value="-o" />
<arg value="${main.basedir}/README.adoc" />
</java>
</target>
</configuration>
</execution>
<execution>
<id>assert-no-unresolved-links</id>
<phase>prepare-package</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<fileset id="unresolved.file"
dir="${basedir}/target/generated-docs/" includes="**/*.html">
<contains text="Unresolved" />
</fileset>
<fail message="[Unresolved] Found...failing">
<condition>
<resourcecount when="greater" count="0"
refid="unresolved.file" />
</condition>
</fail>
</target>
</configuration>
</execution>
<execution>
<id>setup-maven-properties</id>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<exportAntProperties>true</exportAntProperties>
<target>
<taskdef
resource="net/sf/antcontrib/antcontrib.properties" />
<taskdef name="stringutil"
classname="ise.antelope.tasks.StringUtilTask" />
<var name="version-type" value="${project.version}" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp=".*\.(.*)"
replace="\1" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(M)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(RC)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="BUILD-(.*)"
replace="SNAPSHOT" />
<stringutil string="${version-type}"
property="spring-cloud-repo">
<lowercase />
</stringutil>
<var name="github-tag" value="v${project.version}" />
<propertyregex property="github-tag"
override="true" input="${github-tag}" regexp=".*SNAPSHOT"
replace="master" />
</target>
</configuration>
</execution>
<execution>
<id>copy-css</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>generate-documentation-index</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>copy-generated-html</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>


@@ -34,7 +34,7 @@ source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
http://compose.docker.io/[Docker Compose] to run the middleware servers
https://compose.docker.io/[Docker Compose] to run the middleware servers
in Docker containers.
=== Documentation
@@ -43,13 +43,13 @@ There is a "full" profile that will generate documentation.
=== Working with the code
If you don't have an IDE preference we would recommend that you use
http://www.springsource.com/developer/sts[Spring Tools Suite] or
http://eclipse.org[Eclipse] when working with the code. We use the
http://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
https://www.springsource.com/developer/sts[Spring Tools Suite] or
https://eclipse.org[Eclipse] when working with the code. We use the
https://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
should also work without issue.
==== Importing into eclipse with m2eclipse
We recommend the http://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
We recommend the https://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
marketplace".


@@ -24,7 +24,7 @@ added after the original pull request but before a merge.
`eclipse-code-formatter.xml` file from the
https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
Cloud Build] project. If using IntelliJ, you can use the
http://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
Plugin] to import the same file.
* Make sure all new `.java` files have a simple Javadoc class comment with at least an
`@author` tag identifying you, and preferably at least a paragraph on what the class is
@@ -37,6 +37,6 @@ added after the original pull request but before a merge.
* A few unit tests would help a lot as well -- someone has to do it.
* If no-one else is using your branch, please rebase it against the current master (or
other target branch in the main project).
* When writing a commit message please follow http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).


@@ -1,5 +1,5 @@
[[kafka-dlq-processing]]
== Dead-Letter Topic Processing
=== Dead-Letter Topic Processing
Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
@@ -25,10 +25,8 @@ spring.cloud.stream.bindings.input.group=so8400replay
spring.cloud.stream.bindings.input.destination=error.so8400out.so8400
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.output.producer.partitioned=true
spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot
spring.cloud.stream.bindings.parkingLot.producer.partitioned=true
spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest


@@ -40,7 +40,7 @@ function check_if_anything_to_sync() {
function retrieve_current_branch() {
# Code getting the name of the current branch. For master we want to publish as we did until now
# http://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch
# https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch
# If there is a branch already passed will reuse it - otherwise will try to find it
CURRENT_BRANCH=${BRANCH}
if [[ -z "${CURRENT_BRANCH}" ]] ; then
@@ -147,7 +147,7 @@ function copy_docs_for_current_version() {
COMMIT_CHANGES="yes"
else
echo -e "Current branch is [${CURRENT_BRANCH}]"
# http://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin
# https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin
if [[ ",${WHITELISTED_BRANCHES_VALUE}," = *",${CURRENT_BRANCH},"* ]] ; then
mkdir -p ${ROOT_FOLDER}/${CURRENT_BRANCH}
echo -e "Branch [${CURRENT_BRANCH}] is whitelisted! Will copy the current docs to the [${CURRENT_BRANCH}] folder"


@@ -1,4 +1,7 @@
== Usage
== Kafka Streams Binder
=== Usage
To use the Kafka Streams binder, you just need to add it to your Spring Cloud Stream application, using the following
Maven coordinates:
@@ -11,13 +14,14 @@ Maven coordinates:
</dependency>
----
== Kafka Streams Binder Overview
=== Overview
Spring Cloud Stream's Apache Kafka support also includes a binder implementation designed explicitly for Apache Kafka
Streams binding. With this native integration, a Spring Cloud Stream "processor" application can directly use the
https://kafka.apache.org/documentation/streams/developer-guide[Apache Kafka Streams] APIs in the core business logic.
Kafka Streams binder implementation builds on the foundation provided by the http://docs.spring.io/spring-kafka/reference/html/_reference.html#kafka-streams[Kafka Streams in Spring Kafka]
Kafka Streams binder implementation builds on the foundation provided by the https://docs.spring.io/spring-kafka/reference/html/_reference.html#kafka-streams[Kafka Streams in Spring Kafka]
project.
The Kafka Streams binder provides binding capabilities for the three major types in Kafka Streams: `KStream`, `KTable` and `GlobalKTable`.
@@ -32,7 +36,7 @@ As noted early-on, Kafka Streams support in Spring Cloud Stream is strictly only
It follows a model in which the messages are read from an inbound topic, business processing is applied, and the transformed messages
are written to an outbound topic. It can also be used in processor applications with no outbound destination.
=== Streams DSL
==== Streams DSL
This application consumes data from a Kafka topic (e.g., `words`), computes the word count for each unique word in a 5-second
time window, and sends the computed results to a downstream topic (e.g., `counts`) for further processing.
@@ -40,7 +44,7 @@ time window, and the computed results are sent to a downstream topic (e.g., `cou
[source]
----
@SpringBootApplication
@EnableBinding(KStreamProcessor.class)
@EnableBinding(KafkaStreamsProcessor.class)
public class WordCountProcessorApplication {
@StreamListener("input")
@@ -75,13 +79,13 @@ KStream objects. As a developer, you can exclusively focus on the business aspec
required in the processor. Setting up the Streams DSL specific configuration required by the Kafka Streams infrastructure
is automatically handled by the framework.
== Configuration Options
=== Configuration Options
This section contains the configuration options used by the Kafka Streams binder.
For common configuration options and properties pertaining to the binder, refer to the <<binding-properties,core documentation>>.
=== Kafka Streams Properties
==== Kafka Streams Properties
The following properties are available at the binder level and must be prefixed with `spring.cloud.stream.kafka.streams.binder.`
@@ -172,7 +176,7 @@ Default: `earliest`.
Note: Using `resetOffsets` on the consumer has no effect on the Kafka Streams binder.
Unlike the message channel-based binder, the Kafka Streams binder does not seek to the beginning or end on demand.
=== TimeWindow properties:
==== TimeWindow properties:
Windowing is an important concept in stream processing applications. The following properties are available to configure
time-window computations.
@@ -187,14 +191,14 @@ spring.cloud.stream.kafka.streams.timeWindow.advanceBy::
+
Default: `none`.
== Multiple Input Bindings
=== Multiple Input Bindings
For use cases that require multiple incoming `KStream` objects or a combination of `KStream` and `KTable` objects, the Kafka
Streams binder provides support for multiple bindings.
Let's see it in action.
=== Multiple Input Bindings as a Sink
==== Multiple Input Bindings as a Sink
[source]
----
@@ -261,7 +265,7 @@ interface KStreamKTableBinding extends KafkaStreamsProcessor {
----
== Multiple Output Bindings (aka Branching)
=== Multiple Output Bindings (aka Branching)
Kafka Streams allows outbound data to be split into multiple topics based on some predicates. The Kafka Streams binder provides
support for this feature without compromising the programming model exposed through `StreamListener` in the end user application.
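For example, the following sketch branches a stream into two outputs, which are bound, in order, to the bindings listed in `@SendTo` (the predicates and the `output1`/`output2` binding names are illustrative assumptions, not taken from this guide):
[source, java]
----
@StreamListener("input")
@SendTo({"output1", "output2"})
public KStream<Object, String>[] process(KStream<Object, String> input) {
    Predicate<Object, String> isEnglish = (key, value) -> value.contains("english");
    Predicate<Object, String> isFrench = (key, value) -> value.contains("french");
    // branch() splits the stream; each resulting branch is bound, in order,
    // to one of the output bindings declared in @SendTo.
    return input.branch(isEnglish, isFrench);
}
----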
@@ -347,7 +351,7 @@ spring.cloud.stream.bindings.input:
headerMode: raw
----
== Record Value Conversion
=== Record Value Conversion
The Kafka Streams binder can marshal producer/consumer values based on a content type and the converters provided out of the box in Spring Cloud Stream.
@@ -359,7 +363,7 @@ would like to continue using that for inbound and outbound conversions.
Both options are supported in the Kafka Streams binder implementation. See below for more details.
==== Outbound serialization
===== Outbound serialization
If native encoding is disabled (which is the default), then the framework will convert the message using the contentType
set by the user (otherwise, the default `application/json` will be applied). It will ignore any SerDe set on the outbound
@@ -448,7 +452,7 @@ spring.cloud.stream.bindings.output2.contentType: application/java-serialzied-ob
spring.cloud.stream.bindings.output3.contentType: application/octet-stream
----
==== Inbound Deserialization
===== Inbound Deserialization
Similar rules apply to data deserialization on the inbound.
@@ -503,7 +507,7 @@ As in the case of KStream branching on the outbound, the benefit of setting valu
multiple input bindings (multiple `KStream` objects) and they all require separate value SerDes, then you can configure
them individually. If you use the common configuration approach, then this feature won't be applicable.
== Error Handling
=== Error Handling
Apache Kafka Streams provides the capability for natively handling exceptions from deserialization errors.
For details on this support, please see https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers[this]
@@ -544,7 +548,7 @@ that if there are multiple `StreamListener` methods in the same application, thi
* The exception handling for deserialization works consistently with native deserialization and framework-provided message
conversion.
=== Handling Non-Deserialization Exceptions
==== Handling Non-Deserialization Exceptions
For general error handling in the Kafka Streams binder, it is up to the end user application to handle application-level errors.
As a side effect of providing a DLQ for deserialization exception handlers, the Kafka Streams binder provides a way to get
@@ -594,7 +598,7 @@ public KStream<?, WordCount> process(KStream<Object, String> input) {
}
----
== State Store
=== State Store
A state store is created automatically by Kafka Streams when the DSL is used.
When the processor API is used, you need to register a state store manually. To do so, you can use the `KafkaStreamsStateStore` annotation.
@@ -626,7 +630,7 @@ Processor<Object, Product>() {
}
----
== Interactive Queries
=== Interactive Queries
As part of the public Kafka Streams binder API, we expose a class called `InteractiveQueryService`.
You can access this as a Spring bean in your application. An easy way to get access to this bean from your application is to "autowire" the bean.
@@ -671,7 +675,7 @@ else {
}
----
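For example, the following sketch queries a key-value store through `InteractiveQueryService` (a minimal sketch; the store name `word-counts` and the key/value types are illustrative, not taken from this guide):
[source, java]
----
@Autowired
private InteractiveQueryService interactiveQueryService;

public Long wordCount(String word) {
    // Resolve the (hypothetical) "word-counts" store as a read-only key-value store.
    ReadOnlyKeyValueStore<String, Long> store =
            interactiveQueryService.getQueryableStore("word-counts",
                    QueryableStoreTypes.<String, Long>keyValueStore());
    return store.get(word);
}
----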
== Accessing the underlying KafkaStreams object
=== Accessing the underlying KafkaStreams object
`StreamsBuilderFactoryBean` from spring-kafka, which is responsible for constructing the `KafkaStreams` object, can be accessed programmatically.
Each `StreamsBuilderFactoryBean` is registered as `stream-builder` appended with the `StreamListener` method name.
@@ -685,8 +689,274 @@ StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-b
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
----
== State Cleanup
=== State Cleanup
By default, the `KafkaStreams.cleanup()` method is called when the binding is stopped.
See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
To modify this behavior, simply add a single `CleanupConfig` `@Bean` (configured to clean up on start, stop, or neither) to the application context; the bean will be detected and wired into the factory bean.
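For example, the following sketch (assuming Spring Kafka's `CleanupConfig(cleanupOnStart, cleanupOnStop)` constructor) cleans up local state on start only:
[source, java]
----
@Bean
public CleanupConfig cleanupConfig() {
    // Clean up local state on start, but leave it in place on stop.
    return new CleanupConfig(true, false);
}
----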
=== Health Indicator
The health indicator requires the dependency `spring-boot-starter-actuator`. For Maven, use:
[source,xml]
----
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
----
Spring Cloud Stream Binder Kafka Streams provides a health indicator to check the state of the underlying Kafka threads.
Spring Cloud Stream defines a property `management.health.binders.enabled` to enable the health indicator. See the
https://docs.spring.io/spring-cloud-stream/docs/current/reference/htmlsingle/#_health_indicator[Spring Cloud Stream documentation].
The health indicator provides the following details for each Kafka thread:
* Thread name
* Thread state: `CREATED`, `RUNNING`, `PARTITIONS_REVOKED`, `PARTITIONS_ASSIGNED`, `PENDING_SHUTDOWN` or `DEAD`
* Active tasks: task ID and partitions
* Standby tasks: task ID and partitions
By default, only the global status is visible (`UP` or `DOWN`). To show the details, the property `management.endpoint.health.show-details` must be set to `ALWAYS` or `WHEN_AUTHORIZED`.
For more details about the health information, see the
https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-endpoints.html#production-ready-health[Spring Boot Actuator documentation].
NOTE: The status of the health indicator is `UP` if all the Kafka threads registered are in the `RUNNING` state.
=== Functional Kafka Streams Applications
Starting with 2.2.0.RELEASE, the Kafka Streams binder supports writing Kafka Streams applications by simply implementing the `java.util.function.Function` or `java.util.function.Consumer` interfaces.
In this section, we will see the details of how the functional support works in the binder.
The above `StreamListener`-based model can be converted as follows.
[source]
----
@SpringBootApplication
@EnableBinding(KafkaStreamsProcessor.class)
public class WordCountProcessorApplication {

    @Bean
    public Function<KStream<?, String>, KStream<?, WordCount>> process() {
        return input -> input
                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
                .groupBy((key, value) -> value)
                .windowedBy(TimeWindows.of(5000))
                .count(Materialized.as("WordCounts-multi"))
                .toStream()
                .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));
    }

    public static void main(String[] args) {
        SpringApplication.run(WordCountProcessorApplication.class, args);
    }
}
----
The input will be received from the input binding defined in the `KafkaStreamsProcessor` interface and the output will be sent to the output binding.
In this case, the input is a stream of `String` objects and the output is a stream of `WordCount` objects.
If the processor does not send any data on the outbound, then this becomes a plain `Consumer` bean, as shown below.
In this example, we simply receive data as a stream of `String` objects (`KStream<?, String>`) and possibly perform terminal operations on that data without sending any output.
[source]
----
@Bean
public Consumer<KStream<?, String>> process() {
return input ->
....
}
----
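As a concrete illustration, a minimal sketch that only logs each incoming value (`foreach` is a terminal operation; the logging logic is illustrative):
[source, java]
----
@Bean
public Consumer<KStream<?, String>> process() {
    // A terminal operation: the stream is consumed, nothing is sent downstream.
    return input -> input.foreach((key, value) -> System.out.println(value));
}
----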
Applications are free to define custom bindings and use them instead of the out-of-the-box `KafkaStreamsProcessor` interface.
==== Functions with multiple input bindings
With `StreamListener`, we define multiple `Input` bindings and then use them as inputs in the method.
With the functional approach, we still need to define those bindings in the binding interface. However, they cannot be used in the same way as in a `StreamListener` method.
We use curried functions to represent multiple input destinations in the same processor.
For instance, if a function has two inputs, the application defines two partial functions in the function bean method. Let's see some examples.
[source]
----
@Bean
public Function<KStream<String, Long>,
Function<KTable<String, String>, KStream<String, Long>>> process() {
return userClicksStream ->
(userRegionsTable ->
(userClicksStream
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ?
"UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce((firstClicks, secondClicks) -> firstClicks + secondClicks)
.toStream()));
}
----
In the above function bean, there are two inputs and one output.
The function returned from the method takes a `KStream` as its input, but its output is another function,
which takes a `KTable` as its input. The output of this second function is a `KStream`, which becomes the output of the processor.
Another way to look at this is as follows:
Function 1: takes a `KStream<String, Long>` as input and returns the output of Function 2.
Function 2: takes a `KTable<String, String>` as input and returns a `KStream<String, Long>`, which becomes the output of the processor.
Both inputs are available as references in the method body, and the application can perform various operations on them.
In this example, we use function currying on two partial functions.
One thing to keep in mind is that the input bindings must follow a natural sort order when you have multiple input bindings; otherwise, the binder won't know which binding to bind to the various function inputs.
Here is the corresponding binding interface for the above processor.
[source]
----
interface KStreamKTableProcessor {
@Input("input-1")
KStream<?, ?> input1();
@Input("input-2")
KTable<?, ?> input2();
@Output("output")
KStream<?, ?> output();
}
----
If you look at the two inputs, there is a natural sort order: `input-1` goes to the first partial function and `input-2` goes to the second partial function.
Here is another example that shows multiple inputs with GlobalKTable.
[source]
----
@Bean
public Function<KStream<Long, Order>,
Function<GlobalKTable<Long, Customer>,
Function<GlobalKTable<Long, Product>, KStream<Long, EnrichedOrder>>>> process() {
return orderStream -> (
customers -> (
products -> (
orderStream.join(customers,
(orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order))
.join(products,
(orderId, customerOrder) -> customerOrder
.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
})
)
)
);
}
----
Here we have three inputs. The first function takes a `KStream`, and its output is another `Function` that takes a `GlobalKTable` as its input and yet another function as its output.
That last function takes a second `GlobalKTable` as its input and produces a `KStream` as its output, which is used as the processor's output.
Here is a sequential way to conceptualize this:
Function 1: takes a `KStream<Long, Order>` as input and returns the output of Function 2.
Function 2: takes a `GlobalKTable<Long, Customer>` as input and returns the output of Function 3.
Function 3: takes a `GlobalKTable<Long, Product>` as input and returns a `KStream<Long, EnrichedOrder>`, which becomes the output of the processor.
In this example, we have three curried functions. Behind the scenes, the binder calls the `apply` method on those functions in the order in which they appear.
Here is the corresponding binding interface for this application.
[source]
----
interface CustomGlobalKTableProcessor {
@Input("input-1")
KStream<?, ?> input1();
@Input("input-2")
GlobalKTable<?, ?> input2();
@Input("input-3")
GlobalKTable<?, ?> input3();
@Output("output")
KStream<?, ?> output();
}
----
Here also, the input bindings follow a natural order.
==== Multiple functions in the same application
Multiple functions can be defined in the same application.
When doing this, the binder first sorts the function bean names naturally and then applies input and output bindings to them in that order.
Consider the following two function beans in the same application.
[source]
----
@Bean
public Function<KStream<?, String>, KStream<?, WordCount>> process1() {
}
@Bean
public Function<KStream<?, String>, KStream<?, WordCount>> process2() {
}
----
Consider also the following binding interface.
[source]
----
interface Bindings {
@Input("input-1")
KStream<?, ?> input1();
@Input("input-2")
KStream<?, ?> input2();
@Output("output-1")
KStream<?, ?> output1();
@Output("output-2")
KStream<?, ?> output2();
}
----
The binder will first take the method `process1` and use input binding `input-1` and output binding `output-1`.
Similarly, for the method `process2`, it will use input binding `input-2` and output binding `output-2`.
==== Using custom state stores in functional applications
You can define custom state stores as beans in your application, and the binder will detect them and add them to the Kafka Streams builder.
Note that, for regular `StreamListener`-based processors, you still need to use the `KafkaStreamsStateStore` annotation for custom state stores.
Here is an example of using custom state stores with the functional style described in this section.
[source]
----
@Bean
public StoreBuilder myStore() {
return Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("my-store"), Serdes.Long(),
Serdes.Long());
}
@Bean
public StoreBuilder otherStore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("other-store",
1L, 3, 3L, false), Serdes.Long(),
Serdes.Long());
}
----
These state stores can then be accessed by the application directly.
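For instance, a function bean could reference the store by name through the low-level processor API. The following is a minimal sketch; the key/value types and the processing logic are illustrative only:
[source, java]
----
@Bean
public Consumer<KStream<Long, Long>> process() {
    return input -> input.process(() -> new Processor<Long, Long>() {

        private KeyValueStore<Long, Long> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            // "my-store" is the state store registered by the myStore() bean above.
            this.store = (KeyValueStore<Long, Long>) context.getStateStore("my-store");
        }

        @Override
        public void process(Long key, Long value) {
            this.store.put(key, value);
        }

        @Override
        public void close() {
        }
    }, "my-store");
}
----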


@@ -5,7 +5,9 @@ It contains information about its design, usage, and configuration options, as w
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Usage
== Apache Kafka Binder
=== Usage
To use the Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
@@ -27,7 +29,7 @@ Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown
</dependency>
----
== Apache Kafka Binder Overview
=== Overview
The following image shows a simplified diagram of how the Apache Kafka binder operates:
@@ -43,13 +45,13 @@ This client can communicate with older brokers (see the Kafka documentation), bu
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
== Configuration Options
=== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to the binder, see the <<binding-properties,core documentation>>.
=== Kafka Binder Properties
==== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
@@ -110,7 +112,7 @@ If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.topic.create.enable` setting of the broker and does not influence it.
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
@@ -138,28 +140,22 @@ Use this, for example, if you wish to customize the trusted packages in a `Defau
Default: none.
[[kafka-consumer-properties]]
=== Kafka Consumer Properties
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
@@ -194,6 +190,7 @@ If not set (the default), it effectively has the same value as `enableDlq`, auto
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
+
Default: `false`.
startOffset::
@@ -251,30 +248,39 @@ Note, the time taken to detect new topics that match the pattern is controlled b
This can be configured using the `configuration` property above.
+
Default: `false`
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
[[kafka-producer-properties]]
=== Kafka Producer Properties
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See `NewTopic` javadocs in the `kafka-clients` jar.
+
Default: none.
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
The replication factor to use when provisioning new topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
@@ -307,6 +313,21 @@ configuration::
Map with a key/value pair containing generic Kafka producer properties.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -314,11 +335,18 @@ If a topic already exists with a smaller partition count and `autoAddPartitions`
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
=== Usage examples
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy` and `lz4`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
==== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
@@ -346,10 +374,10 @@ public class ManuallyAcknowdledgingConsumer {
}
----
==== Example: Security Configuration
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
@@ -361,11 +389,11 @@ spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the http://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
===== Using JAAS Configuration Files
====== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
@@ -378,7 +406,7 @@ The following example shows how to launch a Spring Cloud Stream application with
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
===== Using Spring Boot Properties
====== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
@@ -435,7 +463,7 @@ Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
==== Example: Pausing and Resuming the Consumer
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
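For example (a minimal sketch; the topic name, partition, and resume logic are placeholders):
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
    System.out.println(in);
    // Pause the partition; resume later via a scheduled task or an
    // application event that calls consumer.resume(...).
    consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
}
----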
@@ -475,7 +503,7 @@ public class Application {
----
[[kafka-error-channels]]
== Error Channels
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
@@ -489,7 +517,7 @@ There is no automatic handling of producer exceptions (such as sending to a <<ka
You can consume these exceptions with your own Spring Integration flow.
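For example, a `@ServiceActivator` can be subscribed to a consumer error channel, as in the following sketch (assuming the `<destination>.<group>.errors` channel naming convention used by Spring Cloud Stream; `myDest` and `myGroup` are placeholders):
[source, java]
----
@ServiceActivator(inputChannel = "myDest.myGroup.errors")
public void handleError(ErrorMessage message) {
    // The payload is the exception; the failed message is available
    // via message.getOriginalMessage().
    System.out.println("Handling error: " + message.getPayload().getMessage());
}
----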
[[kafka-metrics]]
== Kafka Metrics
=== Kafka Metrics
The Kafka binder module exposes the following metrics:
@@ -498,7 +526,7 @@ The metrics provided are based on the Micrometer metrics library. The metric con
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
[[kafka-tombstones]]
== Tombstone Records (null record values)
=== Tombstone Records (null record values)
When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
@@ -514,3 +542,54 @@ public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
}
----
====
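A complete listener might look like the following sketch (`Customer` is a hypothetical payload type; `@Payload(required = false)` marks the parameter as optional so that a `null` value can be delivered):
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
        @Payload(required = false) Customer customer) {
    if (customer == null) {
        // A tombstone record: the key was deleted upstream.
        System.out.println("deleted: " + new String(key));
    }
    else {
        System.out.println(customer);
    }
}
----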
[[rebalance-listener]]
=== Using a KafkaBindingRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
----
public interface KafkaBindingRebalanceListener {
/**
* Invoked by the container before any pending offsets are committed.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
*/
default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
Collection<TopicPartition> partitions) {
}
/**
* Invoked by the container after any pending offsets are committed.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
*/
default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
}
/**
* Invoked when partitions are initially assigned or after a rebalance.
* Applications might only want to perform seek operations on an initial assignment.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
* @param initial true if this is the initial assignment.
*/
default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer, Collection<TopicPartition> partitions,
boolean initial) {
}
}
----
====
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
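For example, a listener that seeks to the beginning of each partition only on the initial assignment (a minimal sketch of one possible implementation):
[source, java]
----
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            if (initial) {
                // Only seek on the very first assignment, not on every rebalance.
                consumer.seekToBeginning(partitions);
            }
        }

    };
}
----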


@@ -1,4 +1,4 @@
== Partitioning with the Kafka Binder
=== Partitioning with the Kafka Binder
Apache Kafka supports topic partitioning natively.
@@ -49,7 +49,6 @@ spring:
output:
destination: partitioned.topic
producer:
partitioned: true
partition-key-expression: headers['partitionKey']
partition-count: 12
----


@@ -10,7 +10,7 @@
[[spring-cloud-stream-binder-kafka-reference]]
= Spring Cloud Stream Kafka Binder Reference Guide
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek, Gary Russell
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek, Gary Russell, Arnaud Jardiné, Soby Chacko
:doctype: book
:toc:
:toclevels: 4
@@ -21,16 +21,22 @@ Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinat
:spring-cloud-stream-binder-kafka-repo: snapshot
:github-tag: master
:spring-cloud-stream-binder-kafka-docs-version: current
:spring-cloud-stream-binder-kafka-docs: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:spring-cloud-stream-binder-kafka-docs: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: http://raw.github.com/{github-repo}/{github-tag}
:github-code: http://github.com/{github-repo}/tree/{github-tag}
:github-wiki: http://github.com/{github-repo}/wiki
:github-master-code: http://github.com/{github-repo}/tree/master
:github-raw: https://raw.github.com/{github-repo}/{github-tag}
:github-code: https://github.com/{github-repo}/tree/{github-tag}
:github-wiki: https://github.com/{github-repo}/wiki
:github-master-code: https://github.com/{github-repo}/tree/master
:sc-ext: java
// ======================================================================================
*{spring-cloud-stream-version}*
[#index-link]
{docs-url}spring-cloud-stream/{docs-version}home.html
= Reference Guide
include::overview.adoc[]
@@ -38,6 +44,8 @@ include::dlq.adoc[]
include::partitions.adoc[]
include::kafka-streams.adoc[]
= Appendices
[appendix]
include::building.adoc[]

mvnw (vendored)

@@ -8,7 +8,7 @@
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
@@ -54,38 +54,16 @@ case "`uname`" in
CYGWIN*) cygwin=true ;;
MINGW*) mingw=true;;
Darwin*) darwin=true
#
# Look for the Apple JDKs first to preserve the existing behaviour, and then look
# for the new JDKs provided by Oracle.
#
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Java/JavaVirtualMachines/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L "/Library/Java/JavaVirtualMachines/CurrentJDK" ] ; then
#
# Oracle JDKs
#
export JAVA_HOME=/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
if [ -z "$JAVA_HOME" ] && [ -x "/usr/libexec/java_home" ]; then
#
# Apple JDKs
#
export JAVA_HOME=`/usr/libexec/java_home`
fi
;;
# Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home
# See https://developer.apple.com/library/mac/qa/qa1170/_index.html
if [ -z "$JAVA_HOME" ]; then
if [ -x "/usr/libexec/java_home" ]; then
export JAVA_HOME="`/usr/libexec/java_home`"
else
export JAVA_HOME="/Library/Java/Home"
fi
fi
;;
esac
if [ -z "$JAVA_HOME" ] ; then
@@ -130,7 +108,7 @@ if $cygwin ; then
CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi
# For Migwn, ensure paths are in UNIX format before anything is touched
# For Mingw, ensure paths are in UNIX format before anything is touched
if $mingw ; then
[ -n "$M2_HOME" ] &&
M2_HOME="`(cd "$M2_HOME"; pwd)`"
@@ -184,27 +162,28 @@ fi
CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --path --windows "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
fi
# traverses directory structure from process work directory to filesystem root
# first directory with .mvn subdirectory is considered project base directory
find_maven_basedir() {
local basedir=$(pwd)
local wdir=$(pwd)
if [ -z "$1" ]
then
echo "Path not specified to find_maven_basedir"
return 1
fi
basedir="$1"
wdir="$1"
while [ "$wdir" != '/' ] ; do
if [ -d "$wdir"/.mvn ] ; then
basedir=$wdir
break
fi
wdir=$(cd "$wdir/.."; pwd)
# workaround for JBEAP-8937 (on Solaris 10/Sparc)
if [ -d "${wdir}" ]; then
wdir=`cd "$wdir/.."; pwd`
fi
# end of workaround
done
echo "${basedir}"
}
@@ -216,30 +195,92 @@ concat_lines() {
fi
}
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-$(find_maven_basedir)}
BASE_DIR=`find_maven_basedir "$(pwd)"`
if [ -z "$BASE_DIR" ]; then
exit 1;
fi
##########################################################################################
# Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
# This allows using the maven wrapper in projects that prohibit checking in binary data.
##########################################################################################
if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found .mvn/wrapper/maven-wrapper.jar"
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
fi
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
while IFS="=" read key value; do
case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
esac
done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties"
if [ "$MVNW_VERBOSE" = true ]; then
echo "Downloading from: $jarUrl"
fi
wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
if command -v wget > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found wget ... using wget"
fi
wget "$jarUrl" -O "$wrapperJarPath"
elif command -v curl > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found curl ... using curl"
fi
curl -o "$wrapperJarPath" "$jarUrl"
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Falling back to using Java to download"
fi
javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
if [ -e "$javaClass" ]; then
if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Compiling MavenWrapperDownloader.java ..."
fi
# Compiling the Java class
("$JAVA_HOME/bin/javac" "$javaClass")
fi
if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
# Running the downloader
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Running MavenWrapperDownloader.java ..."
fi
("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR")
fi
fi
fi
fi
##########################################################################################
# End of extension
##########################################################################################
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"}
if [ "$MVNW_VERBOSE" = true ]; then
echo $MAVEN_PROJECTBASEDIR
fi
MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
# Provide a "standardized" way to retrieve the CLI args that will
# work with both Windows and non-Windows executions.
MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
export MAVEN_CMD_LINE_ARGS
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --path --windows "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
[ -n "$MAVEN_PROJECTBASEDIR" ] &&
MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
fi
WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
echo "Running version check"
VERSION=$( sed '\!<parent!,\!</parent!d' `dirname $0`/pom.xml | grep '<version' | head -1 | sed -e 's/.*<version>//' -e 's!</version>.*$!!' )
echo "The found version is [${VERSION}]"
if echo $VERSION | egrep -q 'M|RC'; then
echo Activating \"milestone\" profile for version=\"$VERSION\"
echo $MAVEN_ARGS | grep -q milestone || MAVEN_ARGS="$MAVEN_ARGS -Pmilestone"
else
echo Deactivating \"milestone\" profile for version=\"$VERSION\"
echo $MAVEN_ARGS | grep -q milestone && MAVEN_ARGS=$(echo $MAVEN_ARGS | sed -e 's/-Pmilestone//')
fi
exec "$JAVACMD" \
$MAVEN_OPTS \
-classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
"-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
${WRAPPER_LAUNCHER} ${MAVEN_ARGS} "$@"
${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@"

mvnw.cmd (vendored; file mode changed from normal to executable)

@@ -1,145 +1,161 @@
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM http://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven2 Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a key stroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM enable echoing my setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
@setlocal
set ERROR_CODE=0
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
@REM ==== END VALIDATION ====
:init
set MAVEN_CMD_LINE_ARGS=%*
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
:endDetectBaseDir
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="".\.mvn\wrapper\maven-wrapper.jar""
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CMD_LINE_ARGS%
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM https://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven2 Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a key stroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM set title of command window
title %0
@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
@setlocal
set ERROR_CODE=0
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
@REM ==== END VALIDATION ====
:init
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fall back to the current working directory if not found.
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
:endDetectBaseDir
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
FOR /F "tokens=1,2 delims==" %%A IN (%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties) DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
echo Found %WRAPPER_JAR%
) else (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
powershell -Command "(New-Object Net.WebClient).DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"
echo Finished downloading %WRAPPER_JAR%
)
@REM End of extension
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%
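The extension block above lets the wrapper bootstrap itself in repositories that do not commit maven-wrapper.jar: if the jar is missing, it is fetched via PowerShell from the wrapperUrl in .mvn/wrapper/maven-wrapper.properties, or from the Maven Central default. A rough Java rendering of that fallback, as an illustrative sketch only:

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class WrapperDownload {

    public static void main(String[] args) throws Exception {
        // Default URL and target path taken from the script above;
        // wrapperUrl in maven-wrapper.properties can override the URL.
        URL url = new URL("https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar");
        Path jar = Paths.get(".mvn", "wrapper", "maven-wrapper.jar");
        if (Files.notExists(jar)) {
            Files.createDirectories(jar.getParent());
            try (InputStream in = url.openStream()) {
                Files.copy(in, jar, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}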

pom.xml

@@ -1,21 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC1</version>
<version>3.0.0.M1</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>2.1.0.RC1</version>
<version>2.2.0.M2</version>
<relativePath />
</parent>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.2.0.RELEASE</spring-kafka.version>
<spring-kafka.version>2.2.5.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.1.0.RELEASE</spring-integration-kafka.version>
<kafka.version>2.0.0</kafka.version>
<spring-cloud-stream.version>2.1.0.RC1</spring-cloud-stream.version>
<spring-cloud-stream.version>3.0.0.M1</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
</properties>
<modules>
<module>spring-cloud-stream-binder-kafka</module>
@@ -108,27 +111,6 @@
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
<scope>test</scope>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>jline</groupId>
<artifactId>jline</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
@@ -177,7 +159,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -188,7 +170,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -196,7 +178,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/release</url>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -206,7 +188,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -217,7 +199,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -225,7 +207,7 @@
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/libs-release-local</url>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -233,4 +215,12 @@
</pluginRepositories>
</profile>
</profiles>
<reporting>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
</plugin>
</plugins>
</reporting>
</project>

pom.xml (spring-cloud-starter-stream-kafka)

@@ -1,17 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC1</version>
<version>3.0.0.M1</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>
<url>http://projects.spring.io/spring-cloud</url>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
<url>https://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>

pom.xml (spring-cloud-stream-binder-kafka-core)

@@ -1,18 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC1</version>
<version>3.0.0.M1</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>
<url>http://projects.spring.io/spring-cloud</url>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
<url>https://www.spring.io</url>
</organization>
<dependencies>

JaasLoginModuleConfiguration.java

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -37,10 +37,10 @@ public class JaasLoginModuleConfiguration {
private KafkaJaasLoginModuleInitializer.ControlFlag controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.REQUIRED;
private Map<String,String> options = new HashMap<>();
private Map<String, String> options = new HashMap<>();
public String getLoginModule() {
return loginModule;
return this.loginModule;
}
public void setLoginModule(String loginModule) {
@@ -49,19 +49,21 @@ public class JaasLoginModuleConfiguration {
}
public KafkaJaasLoginModuleInitializer.ControlFlag getControlFlag() {
return controlFlag;
return this.controlFlag;
}
public void setControlFlag(String controlFlag) {
Assert.notNull(controlFlag, "cannot be null");
this.controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.valueOf(controlFlag.toUpperCase());
this.controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag
.valueOf(controlFlag.toUpperCase());
}
public Map<String, String> getOptions() {
return options;
return this.options;
}
public void setOptions(Map<String, String> options) {
this.options = options;
}
}
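The polish above adds explicit this. references and reflows setControlFlag(String), which stays case-insensitive because the value is upper-cased before the enum lookup. A small usage sketch (assuming the class lives in the same ...binder.kafka.properties package as its siblings in this diff; the demo class is ours):

import org.springframework.cloud.stream.binder.kafka.properties.JaasLoginModuleConfiguration;

public class JaasConfigDemo {

    public static void main(String[] args) {
        JaasLoginModuleConfiguration jaas = new JaasLoginModuleConfiguration();
        // Lower-case input is fine; valueOf() receives "REQUISITE".
        jaas.setControlFlag("requisite");
        jaas.getOptions().put("useKeyTab", "true");
        System.out.println(jaas.getControlFlag());
    }
}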

KafkaAdminProperties.java

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,8 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
@@ -25,38 +23,17 @@ import java.util.Map;
*
* @author Gary Russell
* @since 2.0
*
* @deprecated in favor of {@link KafkaTopicProperties}
*/
public class KafkaAdminProperties {
private Short replicationFactor;
private Map<Integer, List<Integer>> replicasAssignments = new HashMap<>();
private Map<String, String> configuration = new HashMap<>();
public Short getReplicationFactor() {
return this.replicationFactor;
}
public void setReplicationFactor(Short replicationFactor) {
this.replicationFactor = replicationFactor;
}
public Map<Integer, List<Integer>> getReplicasAssignments() {
return this.replicasAssignments;
}
public void setReplicasAssignments(Map<Integer, List<Integer>> replicasAssignments) {
this.replicasAssignments = replicasAssignments;
}
@Deprecated
public class KafkaAdminProperties extends KafkaTopicProperties {
public Map<String, String> getConfiguration() {
return this.configuration;
return getProperties();
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
setProperties(configuration);
}
}
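After this change, KafkaAdminProperties is a deprecated shim over KafkaTopicProperties: the old configuration map is simply the inherited properties map. A quick sketch of the write-through behavior (the demo class is ours):

import java.util.Collections;

import org.springframework.cloud.stream.binder.kafka.properties.KafkaAdminProperties;

public class AdminShimDemo {

    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        KafkaAdminProperties admin = new KafkaAdminProperties();
        admin.setConfiguration(Collections.singletonMap("retention.ms", "86400000"));
        // Deprecated 'configuration' and new 'properties' are the same map.
        System.out.println(admin.getConfiguration()); // {retention.ms=86400000}
        System.out.println(admin.getProperties());    // {retention.ms=86400000}
    }
}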

KafkaBinderConfigurationProperties.java

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -25,6 +25,8 @@ import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
@@ -40,18 +42,24 @@ import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Configuration properties for the Kafka binder. The properties in this class are
* prefixed with <b>spring.cloud.stream.kafka.binder</b>.
*
* @author David Turanski
* @author Ilayaperumal Gopinathan
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
* @author Rafal Zukowski
* @author Aldo Sinanaj
*/
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.binder")
public class KafkaBinderConfigurationProperties {
private static final String DEFAULT_KAFKA_CONNECTION_STRING = "localhost:9092";
private final Log logger = LogFactory.getLog(getClass());
private final Transaction transaction = new Transaction();
private final KafkaProperties kafkaProperties;
@@ -123,18 +131,18 @@ public class KafkaBinderConfigurationProperties {
private JaasLoginModuleConfiguration jaas;
/**
* The bean name of a custom header mapper to use instead of a {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
* The bean name of a custom header mapper to use instead of a
* {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
*/
private String headerMapperBeanName;
public KafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
Assert.notNull(kafkaProperties, "'kafkaProperties' cannot be null");
this.kafkaProperties = kafkaProperties;
}
public KafkaProperties getKafkaProperties() {
return kafkaProperties;
return this.kafkaProperties;
}
public Transaction getTransaction() {
@@ -167,7 +175,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @return the window.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -178,7 +186,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @return the count.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -189,7 +197,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @return the timeout.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -249,7 +257,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @param offsetUpdateTimeWindow the window.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -260,7 +268,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @param offsetUpdateCount the count.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -271,7 +279,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @param offsetUpdateShutdownTimeout the timeout.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -324,9 +332,11 @@ public class KafkaBinderConfigurationProperties {
}
/**
* Converts an array of host values to a comma-separated String.
*
* It will append the default port value, if not already specified.
* Converts an array of host values to a comma-separated String. It will append the
* default port value, if not already specified.
* @param hosts host string
* @param defaultPort port
* @return formatted connection string
*/
private String toConnectionString(String[] hosts, String defaultPort) {
String[] fullyFormattedHosts = new String[hosts.length];
@@ -344,7 +354,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @return the wait.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -355,7 +365,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @param maxWait the wait.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -367,10 +377,6 @@ public class KafkaBinderConfigurationProperties {
return this.requiredAcks;
}
public void setRequiredAcks(int requiredAcks) {
this.requiredAcks = String.valueOf(requiredAcks);
}
public void setRequiredAcks(String requiredAcks) {
this.requiredAcks = requiredAcks;
}
@@ -386,7 +392,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @return the size.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -397,7 +403,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @param fetchSize the size.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -424,7 +430,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @return the queue size.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -435,7 +441,7 @@ public class KafkaBinderConfigurationProperties {
/**
* No longer used.
* @param queueSize the queue size.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@@ -463,7 +469,7 @@ public class KafkaBinderConfigurationProperties {
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @return the size.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
@@ -475,7 +481,7 @@ public class KafkaBinderConfigurationProperties {
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @param socketBufferSize the size.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
@@ -484,7 +490,7 @@ public class KafkaBinderConfigurationProperties {
}
public Map<String, String> getConfiguration() {
return configuration;
return this.configuration;
}
public void setConfiguration(Map<String, String> configuration) {
@@ -519,15 +525,19 @@ public class KafkaBinderConfigurationProperties {
Map<String, Object> consumerConfiguration = new HashMap<>();
consumerConfiguration.putAll(this.kafkaProperties.buildConsumerProperties());
// Copy configured binder properties that apply to consumers
for (Map.Entry<String, String> configurationEntry : this.configuration.entrySet()) {
for (Map.Entry<String, String> configurationEntry : this.configuration
.entrySet()) {
if (ConsumerConfig.configNames().contains(configurationEntry.getKey())) {
consumerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
consumerConfiguration.put(configurationEntry.getKey(),
configurationEntry.getValue());
}
}
consumerConfiguration.putAll(this.consumerProperties);
filterStreamManagedConfiguration(consumerConfiguration);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(consumerConfiguration, ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
return getConfigurationWithBootstrapServer(consumerConfiguration,
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
/**
@@ -540,18 +550,41 @@ public class KafkaBinderConfigurationProperties {
Map<String, Object> producerConfiguration = new HashMap<>();
producerConfiguration.putAll(this.kafkaProperties.buildProducerProperties());
// Copy configured binder properties that apply to producers
for (Map.Entry<String, String> configurationEntry : configuration.entrySet()) {
for (Map.Entry<String, String> configurationEntry : this.configuration
.entrySet()) {
if (ProducerConfig.configNames().contains(configurationEntry.getKey())) {
producerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
producerConfiguration.put(configurationEntry.getKey(),
configurationEntry.getValue());
}
}
producerConfiguration.putAll(this.producerProperties);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(producerConfiguration, ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
return getConfigurationWithBootstrapServer(producerConfiguration,
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
private Map<String, Object> getConfigurationWithBootstrapServer(Map<String, Object> configuration, String bootstrapServersConfig) {
private void filterStreamManagedConfiguration(Map<String, Object> configuration) {
if (configuration.containsKey(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG)
&& configuration.get(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG).equals(true)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG) +
ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG + "=true is not supported by the Kafka binder");
configuration.remove(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
if (configuration.containsKey(ConsumerConfig.GROUP_ID_CONFIG)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.GROUP_ID_CONFIG) +
"Use spring.cloud.stream.default.group or spring.cloud.stream.binding.<name>.group to specify " +
"the group instead of " + ConsumerConfig.GROUP_ID_CONFIG);
configuration.remove(ConsumerConfig.GROUP_ID_CONFIG);
}
}
private String constructIgnoredConfigMessage(String config) {
return String.format("Ignoring provided value(s) for '%s'. ", config);
}
private Map<String, Object> getConfigurationWithBootstrapServer(
Map<String, Object> configuration, String bootstrapServersConfig) {
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig))) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
@@ -561,7 +594,8 @@ public class KafkaBinderConfigurationProperties {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) configuration
.get(bootstrapServersConfig);
if (bootStrapServers.size() == 1 && bootStrapServers.get(0).equals("localhost:9092")) {
if (bootStrapServers.size() == 1
&& bootStrapServers.get(0).equals("localhost:9092")) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
}
@@ -585,6 +619,9 @@ public class KafkaBinderConfigurationProperties {
this.headerMapperBeanName = headerMapperBeanName;
}
/**
* Domain class that models transaction capabilities in Kafka.
*/
public static class Transaction {
private final CombinedProducerProperties producer = new CombinedProducerProperties();
@@ -606,9 +643,10 @@ public class KafkaBinderConfigurationProperties {
}
/**
* A combination of {@link ProducerProperties} and {@link KafkaProducerProperties}
* so that common and kafka-specific properties can be set for the transactional
* A combination of {@link ProducerProperties} and {@link KafkaProducerProperties} so
* that common and kafka-specific properties can be set for the transactional
* producer.
*
* @since 2.1
*/
public static class CombinedProducerProperties {
@@ -633,8 +671,10 @@ public class KafkaBinderConfigurationProperties {
return this.producerProperties.getPartitionSelectorExpression();
}
public void setPartitionSelectorExpression(Expression partitionSelectorExpression) {
this.producerProperties.setPartitionSelectorExpression(partitionSelectorExpression);
public void setPartitionSelectorExpression(
Expression partitionSelectorExpression) {
this.producerProperties
.setPartitionSelectorExpression(partitionSelectorExpression);
}
public @Min(value = 1, message = "Partition count should be greater than zero.") int getPartitionCount() {
@@ -653,11 +693,13 @@ public class KafkaBinderConfigurationProperties {
this.producerProperties.setRequiredGroups(requiredGroups);
}
public @AssertTrue(message = "Partition key expression and partition key extractor class properties are mutually exclusive.") boolean isValidPartitionKeyProperty() {
public @AssertTrue(message = "Partition key expression and partition key extractor class properties "
+ "are mutually exclusive.") boolean isValidPartitionKeyProperty() {
return this.producerProperties.isValidPartitionKeyProperty();
}
public @AssertTrue(message = "Partition selector class and partition selector expression properties are mutually exclusive.") boolean isValidPartitionSelectorProperty() {
public @AssertTrue(message = "Partition selector class and partition selector expression "
+ "properties are mutually exclusive.") boolean isValidPartitionSelectorProperty() {
return this.producerProperties.isValidPartitionSelectorProperty();
}
@@ -690,7 +732,8 @@ public class KafkaBinderConfigurationProperties {
}
public void setPartitionKeyExtractorName(String partitionKeyExtractorName) {
this.producerProperties.setPartitionKeyExtractorName(partitionKeyExtractorName);
this.producerProperties
.setPartitionKeyExtractorName(partitionKeyExtractorName);
}
public String getPartitionSelectorName() {
@@ -757,17 +800,28 @@ public class KafkaBinderConfigurationProperties {
this.kafkaProducerProperties.setConfiguration(configuration);
}
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
return this.kafkaProducerProperties.getAdmin();
}
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.kafkaProducerProperties.setAdmin(admin);
}
public KafkaTopicProperties getTopic() {
return this.kafkaProducerProperties.getTopic();
}
public void setTopic(KafkaTopicProperties topic) {
this.kafkaProducerProperties.setTopic(topic);
}
public KafkaProducerProperties getExtension() {
return this.kafkaProducerProperties;
}
}
}
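The new filterStreamManagedConfiguration method strips, with a warning, two consumer settings the binder manages itself: enable.auto.commit=true (the binder controls offset commits) and group.id (which must come from the stream group properties instead). A standalone sketch mirroring that filtering (the helper names are ours):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerConfigFilterDemo {

    // Mirrors filterStreamManagedConfiguration(..) from the diff above.
    static void filter(Map<String, Object> config) {
        if (Boolean.TRUE.equals(config.get(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG))) {
            config.remove(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG); // binder manages commits
        }
        if (config.containsKey(ConsumerConfig.GROUP_ID_CONFIG)) {
            config.remove(ConsumerConfig.GROUP_ID_CONFIG); // use the stream group properties
        }
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 250);
        filter(config);
        System.out.println(config); // only max.poll.records remains
    }
}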

KafkaBindingProperties.java

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,17 +19,19 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Container object for Kafka specific extended producer and consumer binding properties.
*
* @author Marius Bogoevici
* @author Oleg Zhurakousky
*/
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider{
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider {
private KafkaConsumerProperties consumer = new KafkaConsumerProperties();
private KafkaProducerProperties producer = new KafkaProducerProperties();
public KafkaConsumerProperties getConsumer() {
return consumer;
return this.consumer;
}
public void setConsumer(KafkaConsumerProperties consumer) {
@@ -37,10 +39,11 @@ public class KafkaBindingProperties implements BinderSpecificPropertiesProvider{
}
public KafkaProducerProperties getProducer() {
return producer;
return this.producer;
}
public void setProducer(KafkaProducerProperties producer) {
this.producer = producer;
}
}
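KafkaBindingProperties is the per-binding container pairing the consumer and producer extensions. A trivial usage sketch (assuming the usual setters alongside the getters shown above; the demo class is ours):

import org.springframework.cloud.stream.binder.kafka.properties.KafkaBindingProperties;

public class BindingPropertiesDemo {

    public static void main(String[] args) {
        KafkaBindingProperties binding = new KafkaBindingProperties();
        binding.getConsumer().setAckEachRecord(true);   // per-binding consumer tweak
        binding.getProducer().setBufferSize(32 * 1024); // per-binding producer tweak
        System.out.println(binding.getConsumer().isAckEachRecord());
    }
}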

KafkaConsumerProperties.java

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,12 +19,16 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
/**
* Extended consumer properties for Kafka binder.
*
* @author Marius Bogoevici
* @author Ilayaperumal Gopinathan
* @author Soby Chacko
* @author Gary Russell
* @author Aldo Sinanaj
*
* <p>
* Thanks to Laszlo Szabo for providing the initial patch for generic property support.
@@ -32,8 +36,18 @@ import java.util.Map;
*/
public class KafkaConsumerProperties {
/**
* Enumeration for starting consumer offset.
*/
public enum StartOffset {
/**
* Starting from earliest offset.
*/
earliest(-2L),
/**
* Starting from latest offset.
*/
latest(-1L);
private final long referencePoint;
@@ -45,13 +59,31 @@ public class KafkaConsumerProperties {
public long getReferencePoint() {
return this.referencePoint;
}
}
/**
* Standard headers for the message.
*/
public enum StandardHeaders {
/**
* No headers.
*/
none,
/**
* Message header representing ID.
*/
id,
/**
* Message header representing timestamp.
*/
timestamp,
/**
* Indicating both ID and timestamp headers.
*/
both
}
private boolean ackEachRecord;
@@ -86,7 +118,7 @@ public class KafkaConsumerProperties {
private Map<String, String> configuration = new HashMap<>();
private KafkaAdminProperties admin = new KafkaAdminProperties();
private KafkaTopicProperties topic = new KafkaTopicProperties();
public boolean isAckEachRecord() {
return this.ackEachRecord;
@@ -139,7 +171,7 @@ public class KafkaConsumerProperties {
/**
* No longer used.
* @return the interval.
* @deprecated
* @deprecated No longer used by the binder
*/
@Deprecated
public int getRecoveryInterval() {
@@ -149,7 +181,7 @@ public class KafkaConsumerProperties {
/**
* No longer used.
* @param recoveryInterval the interval.
* @deprecated
* @deprecated No longer needed by the binder
*/
@Deprecated
public void setRecoveryInterval(int recoveryInterval) {
@@ -173,7 +205,7 @@ public class KafkaConsumerProperties {
}
public String getDlqName() {
return dlqName;
return this.dlqName;
}
public void setDlqName(String dlqName) {
@@ -181,7 +213,7 @@ public class KafkaConsumerProperties {
}
public String[] getTrustedPackages() {
return trustedPackages;
return this.trustedPackages;
}
public void setTrustedPackages(String[] trustedPackages) {
@@ -189,12 +221,13 @@ public class KafkaConsumerProperties {
}
public KafkaProducerProperties getDlqProducerProperties() {
return dlqProducerProperties;
return this.dlqProducerProperties;
}
public void setDlqProducerProperties(KafkaProducerProperties dlqProducerProperties) {
this.dlqProducerProperties = dlqProducerProperties;
}
public StandardHeaders getStandardHeaders() {
return this.standardHeaders;
}
@@ -227,12 +260,35 @@ public class KafkaConsumerProperties {
this.destinationIsPattern = destinationIsPattern;
}
/**
* No longer used; get properties such as this via {@link #getTopic()}.
* @return Kafka admin properties
* @deprecated No longer used
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
return this.admin;
// Temporary workaround to copy the topic properties to the admin one.
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
return kafkaAdminProperties;
}
@Deprecated
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.admin = admin;
this.topic = admin;
}
public KafkaTopicProperties getTopic() {
return this.topic;
}
public void setTopic(KafkaTopicProperties topic) {
this.topic = topic;
}
}
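The deprecated admin property now round-trips through topic: getAdmin() builds a fresh KafkaAdminProperties from the topic settings, while setAdmin(...) aliases the topic reference outright (possible since KafkaAdminProperties extends KafkaTopicProperties). A sketch of that equivalence (the demo class is ours):

import org.springframework.cloud.stream.binder.kafka.properties.KafkaAdminProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;

public class ConsumerAdminBridgeDemo {

    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        KafkaConsumerProperties consumer = new KafkaConsumerProperties();

        KafkaAdminProperties admin = new KafkaAdminProperties();
        admin.setReplicationFactor((short) 3);
        consumer.setAdmin(admin); // 'topic' now points at this instance

        System.out.println(consumer.getTopic().getReplicationFactor()); // 3
        System.out.println(consumer.getAdmin().getReplicationFactor()); // copied back out
    }
}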

KafkaExtendedBindingProperties.java

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,19 +16,24 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.AbstractExtendedBindingProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Kafka specific extended binding properties class that extends from
* {@link AbstractExtendedBindingProperties}.
*
* @author Marius Bogoevici
* @author Gary Russell
* @author Soby Chacko
* @author Oleg Zhurakousky
*/
@ConfigurationProperties("spring.cloud.stream.kafka")
public class KafkaExtendedBindingProperties
extends AbstractExtendedBindingProperties<KafkaConsumerProperties, KafkaProducerProperties, KafkaBindingProperties> {
public class KafkaExtendedBindingProperties extends
AbstractExtendedBindingProperties<KafkaConsumerProperties, KafkaProducerProperties, KafkaBindingProperties> {
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.default";
@@ -37,8 +42,14 @@ public class KafkaExtendedBindingProperties
return DEFAULTS_PREFIX;
}
@Override
public Map<String, KafkaBindingProperties> getBindings() {
return this.doGetBindings();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaBindingProperties.class;
}
}
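The getBindings() override exposes the per-binding map; together with DEFAULTS_PREFIX it is what lets spring.cloud.stream.kafka.default.* apply to every binding. A small sketch (assuming doGetBindings() returns the mutable map keyed by binding name, as in AbstractExtendedBindingProperties; the demo class is ours):

import org.springframework.cloud.stream.binder.kafka.properties.KafkaBindingProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;

public class ExtendedBindingsDemo {

    public static void main(String[] args) {
        KafkaExtendedBindingProperties ext = new KafkaExtendedBindingProperties();
        KafkaBindingProperties input = new KafkaBindingProperties();
        input.getConsumer().setEnableDlq(true);
        ext.getBindings().put("input", input);
        System.out.println(ext.getBindings().keySet()); // [input]
    }
}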

KafkaProducerProperties.java

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -21,12 +21,16 @@ import java.util.Map;
import javax.validation.constraints.NotNull;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.expression.Expression;
/**
* Extended producer properties for Kafka binder.
*
* @author Marius Bogoevici
* @author Henryk Konsek
* @author Gary Russell
* @author Aldo Sinanaj
*/
public class KafkaProducerProperties {
@@ -44,7 +48,7 @@ public class KafkaProducerProperties {
private Map<String, String> configuration = new HashMap<>();
private KafkaAdminProperties admin = new KafkaAdminProperties();
private KafkaTopicProperties topic = new KafkaTopicProperties();
public int getBufferSize() {
return this.bufferSize;
@@ -80,7 +84,7 @@ public class KafkaProducerProperties {
}
public Expression getMessageKeyExpression() {
return messageKeyExpression;
return this.messageKeyExpression;
}
public void setMessageKeyExpression(Expression messageKeyExpression) {
@@ -103,18 +107,68 @@ public class KafkaProducerProperties {
this.configuration = configuration;
}
/**
* No longer used; get properties such as this via {@link #getTopic()}.
* @return Kafka admin properties
* @deprecated No longer used
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
return this.admin;
// Temporary workaround to copy the topic properties to the admin one.
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
return kafkaAdminProperties;
}
@Deprecated
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.admin = admin;
this.topic = admin;
}
public KafkaTopicProperties getTopic() {
return this.topic;
}
public void setTopic(KafkaTopicProperties topic) {
this.topic = topic;
}
/**
* Enumeration for compression types.
*/
public enum CompressionType {
/**
* No compression.
*/
none,
/**
* gzip based compression.
*/
gzip,
snappy
/**
* snappy based compression.
*/
snappy,
/**
* lz4 compression.
*/
lz4,
// /** // TODO: uncomment and fix docs when kafka-clients 2.1.0 or newer is the
// default
// * zstd compression
// */
// zstd
}
}
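The CompressionType enum now documents each constant and adds lz4; zstd stays commented out until kafka-clients 2.1+ is the baseline. A usage sketch combining it with the new per-binding topic properties (assuming the standard setCompressionType setter next to the getters shown; the demo class is ours):

import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;

public class ProducerPropertiesDemo {

    public static void main(String[] args) {
        KafkaProducerProperties producer = new KafkaProducerProperties();
        producer.setCompressionType(KafkaProducerProperties.CompressionType.lz4);
        // New home for topic-level settings, replacing the deprecated 'admin':
        producer.getTopic().setReplicationFactor((short) 2);
        producer.getTopic().getProperties().put("cleanup.policy", "compact");
        System.out.println(producer.getTopic().getProperties());
    }
}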

KafkaTopicProperties.java

@@ -0,0 +1,62 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Properties for configuring topics.
*
* @author Aldo Sinanaj
* @since 2.2
*
*/
public class KafkaTopicProperties {
private Short replicationFactor;
private Map<Integer, List<Integer>> replicasAssignments = new HashMap<>();
private Map<String, String> properties = new HashMap<>();
public Short getReplicationFactor() {
return replicationFactor;
}
public void setReplicationFactor(Short replicationFactor) {
this.replicationFactor = replicationFactor;
}
public Map<Integer, List<Integer>> getReplicasAssignments() {
return replicasAssignments;
}
public void setReplicasAssignments(Map<Integer, List<Integer>> replicasAssignments) {
this.replicasAssignments = replicasAssignments;
}
public Map<String, String> getProperties() {
return properties;
}
public void setProperties(Map<String, String> properties) {
this.properties = properties;
}
}
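KafkaTopicProperties is the new home for provisioning settings; the KafkaTopicProvisioner diff below shows how they drive topic creation. A condensed sketch of that mapping onto a kafka-clients NewTopic (the helper and the fallback replication factor of 1 are ours):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.NewTopic;

import org.springframework.cloud.stream.binder.kafka.properties.KafkaTopicProperties;

public class TopicPropertiesDemo {

    // Condensed from createTopicAndPartitions(..): explicit replica
    // assignments win; otherwise partition count plus replication factor.
    static NewTopic toNewTopic(String name, int partitions, KafkaTopicProperties props) {
        NewTopic topic;
        if (!props.getReplicasAssignments().isEmpty()) {
            topic = new NewTopic(name, props.getReplicasAssignments());
        }
        else {
            short rf = props.getReplicationFactor() != null
                    ? props.getReplicationFactor() : (short) 1;
            topic = new NewTopic(name, partitions, rf);
        }
        if (!props.getProperties().isEmpty()) {
            topic.configs(props.getProperties());
        }
        return topic;
    }

    public static void main(String[] args) {
        KafkaTopicProperties props = new KafkaTopicProperties();
        props.setReplicationFactor((short) 1);
        Map<String, String> cfg = new HashMap<>();
        cfg.put("retention.ms", "604800000");
        props.setProperties(cfg);
        System.out.println(toNewTopic("orders", 3, props).configs());
    }
}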

KafkaTopicProvisioner.java

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -39,16 +39,18 @@ import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.BinderException;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaAdminProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaTopicProperties;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
@@ -59,20 +61,25 @@ import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Kafka implementation for {@link ProvisioningProvider}
* Kafka implementation for {@link ProvisioningProvider}.
*
* @author Soby Chacko
* @author Gary Russell
* @author Ilayaperumal Gopinathan
* @author Simon Flandergan
* @author Oleg Zhurakousky
* @author Aldo Sinanaj
*/
public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsumerProperties<KafkaConsumerProperties>,
ExtendedProducerProperties<KafkaProducerProperties>>, InitializingBean {
public class KafkaTopicProvisioner implements
// @checkstyle:off
ProvisioningProvider<ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>>,
// @checkstyle:on
InitializingBean {
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
@@ -86,15 +93,18 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
private RetryOperations metadataRetryOperations;
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
public KafkaTopicProvisioner(
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
this.adminClientProperties = kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaBinderConfigurationProperties;
normalalizeBootPropsWithBinder(adminClientProperties, kafkaProperties, kafkaBinderConfigurationProperties);
normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
kafkaBinderConfigurationProperties);
}
/**
* Mutator for metadata retry operations.
* @param metadataRetryOperations the retry configuration
*/
public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
@@ -128,18 +138,22 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
KafkaTopicUtils.validateTopicName(name);
try (AdminClient adminClient = AdminClient.create(this.adminClientProperties)) {
createTopic(adminClient, name, properties.getPartitionCount(), false, properties.getExtension().getAdmin());
createTopic(adminClient, name, properties.getPartitionCount(), false,
properties.getExtension().getTopic());
int partitions = 0;
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult.all();
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
Map<String, TopicDescription> topicDescriptions = null;
try {
topicDescriptions = all.get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception e) {
throw new ProvisioningException("Problems encountered with partitions finding", e);
catch (Exception ex) {
throw new ProvisioningException(
"Problems encountered with partitions finding", ex);
}
TopicDescription topicDescription = topicDescriptions.get(name);
partitions = topicDescription.partitions().size();
@@ -149,7 +163,8 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
@Override
public ConsumerDestination provisionConsumerDestination(final String name, final String group,
public ConsumerDestination provisionConsumerDestination(final String name,
final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (!properties.isMultiplex()) {
return doProvisionConsumerDestination(name, group, properties);
@@ -163,7 +178,8 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
}
private ConsumerDestination doProvisionConsumerDestination(final String name, final String group,
private ConsumerDestination doProvisionConsumerDestination(final String name,
final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (properties.getExtension().isDestinationIsPattern()) {
@@ -185,22 +201,28 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
int partitionCount = properties.getInstanceCount() * properties.getConcurrency();
ConsumerDestination consumerDestination = new KafkaConsumerDestination(name);
try (AdminClient adminClient = createAdminClient()) {
createTopic(adminClient, name, partitionCount, properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getAdmin());
createTopic(adminClient, name, partitionCount,
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult.all();
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
try {
Map<String, TopicDescription> topicDescriptions = all.get(operationTimeout, TimeUnit.SECONDS);
Map<String, TopicDescription> topicDescriptions = all
.get(this.operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(name);
int partitions = topicDescription.partitions().size();
consumerDestination = createDlqIfNeedBe(adminClient, name, group, properties, anonymous, partitions);
consumerDestination = createDlqIfNeedBe(adminClient, name, group,
properties, anonymous, partitions);
if (consumerDestination == null) {
consumerDestination = new KafkaConsumerDestination(name, partitions);
consumerDestination = new KafkaConsumerDestination(name,
partitions);
}
}
catch (Exception e) {
throw new ProvisioningException("provisioning exception", e);
catch (Exception ex) {
throw new ProvisioningException("provisioning exception", ex);
}
}
}
@@ -212,22 +234,24 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
/**
* In general, binder properties supersede boot kafka properties.
* The one exception is the bootstrap servers. In that case, we should only override
* the boot properties if (there is a binder property AND it is a non-default value)
* OR (if there is no boot property); this is needed because the binder property
* never returns a null value.
* In general, binder properties supersede boot kafka properties. The one exception is
* the bootstrap servers. In that case, we should only override the boot properties if
* (there is a binder property AND it is a non-default value) OR (if there is no boot
* property); this is needed because the binder property never returns a null value.
* @param adminProps the admin properties to normalize.
* @param bootProps the boot kafka properties.
* @param binderProps the binder kafka properties.
*/
private void normalalizeBootPropsWithBinder(Map<String, Object> adminProps, KafkaProperties bootProps,
KafkaBinderConfigurationProperties binderProps) {
private void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
KafkaProperties bootProps, KafkaBinderConfigurationProperties binderProps) {
// First deal with the outlier
String kafkaConnectionString = binderProps.getKafkaConnectionString();
if (ObjectUtils.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
|| !kafkaConnectionString.equals(binderProps.getDefaultKafkaConnectionString())) {
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConnectionString);
if (ObjectUtils
.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
|| !kafkaConnectionString
.equals(binderProps.getDefaultKafkaConnectionString())) {
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
// Now override any boot values with binder values
Map<String, String> binderProperties = binderProps.getConfiguration();
@@ -240,21 +264,24 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
if (adminConfigNames.contains(key)) {
Object replaced = adminProps.put(key, value);
if (replaced != null && this.logger.isDebugEnabled()) {
logger.debug("Overrode boot property: [" + key + "], from: [" + replaced + "] to: [" + value + "]");
this.logger.debug("Overrode boot property: [" + key + "], from: ["
+ replaced + "] to: [" + value + "]");
}
}
});
}
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name, String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name,
String group, ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
if (properties.getExtension().isEnableDlq() && !anonymous) {
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName()) ?
properties.getExtension().getDlqName() : "error." + name + "." + group;
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName())
? properties.getExtension().getDlqName()
: "error." + name + "." + group;
try {
createTopicAndPartitions(adminClient, dlqTopic, partitions,
properties.getExtension().isAutoRebalanceEnabled(), properties.getExtension().getAdmin());
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
}
catch (Throwable throwable) {
if (throwable instanceof Error) {
@@ -269,27 +296,34 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
return null;
}
private void createTopic(AdminClient adminClient, String name, int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaAdminProperties properties) {
private void createTopic(AdminClient adminClient, String name, int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaTopicProperties properties) {
try {
createTopicIfNecessary(adminClient, name, partitionCount, tolerateLowerPartitionsOnBroker, properties);
createTopicIfNecessary(adminClient, name, partitionCount,
tolerateLowerPartitionsOnBroker, properties);
}
// TODO: Remove catching Throwable. See this thread:
// TODO:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
throw new ProvisioningException("provisioning exception", throwable);
// TODO:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
throw new ProvisioningException("Provisioning exception", throwable);
}
}
}
private void createTopicIfNecessary(AdminClient adminClient, final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaAdminProperties properties) throws Throwable {
private void createTopicIfNecessary(AdminClient adminClient, final String topicName,
final int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaTopicProperties properties) throws Throwable {
if (this.configurationProperties.isAutoCreateTopics()) {
createTopicAndPartitions(adminClient, topicName, partitionCount, tolerateLowerPartitionsOnBroker,
properties);
createTopicAndPartitions(adminClient, topicName, partitionCount,
tolerateLowerPartitionsOnBroker, properties);
}
else {
this.logger.info("Auto creation of topics is disabled.");
@@ -299,85 +333,108 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
/**
* Creates a Kafka topic if needed, or tries to increase its partition count to the
* desired number.
* @param adminClient
* @param adminProperties
* @param adminClient kafka admin client
* @param topicName topic name
* @param partitionCount partition count
* @param tolerateLowerPartitionsOnBroker whether a lower partition count on the broker is
* tolerated or not
* @param topicProperties kafka topic properties
* @throws Throwable from topic creation
*/
private void createTopicAndPartitions(AdminClient adminClient, final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaAdminProperties adminProperties) throws Throwable {
private void createTopicAndPartitions(AdminClient adminClient, final String topicName,
final int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaTopicProperties topicProperties) throws Throwable {
ListTopicsResult listTopicsResult = adminClient.listTopics();
KafkaFuture<Set<String>> namesFutures = listTopicsResult.names();
Set<String> names = namesFutures.get(operationTimeout, TimeUnit.SECONDS);
Set<String> names = namesFutures.get(this.operationTimeout, TimeUnit.SECONDS);
if (names.contains(topicName)) {
// only consider minPartitionCount for resizing if autoAddPartitions is true
int effectivePartitionCount = this.configurationProperties.isAutoAddPartitions()
? Math.max(this.configurationProperties.getMinPartitionCount(), partitionCount)
: partitionCount;
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(topicName));
KafkaFuture<Map<String, TopicDescription>> topicDescriptionsFuture = describeTopicsResult.all();
Map<String, TopicDescription> topicDescriptions = topicDescriptionsFuture.get(operationTimeout, TimeUnit.SECONDS);
int effectivePartitionCount = this.configurationProperties
.isAutoAddPartitions()
? Math.max(
this.configurationProperties.getMinPartitionCount(),
partitionCount)
: partitionCount;
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
KafkaFuture<Map<String, TopicDescription>> topicDescriptionsFuture = describeTopicsResult
.all();
Map<String, TopicDescription> topicDescriptions = topicDescriptionsFuture
.get(this.operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(topicName);
int partitionSize = topicDescription.partitions().size();
if (partitionSize < effectivePartitionCount) {
if (this.configurationProperties.isAutoAddPartitions()) {
CreatePartitionsResult partitions = adminClient.createPartitions(
Collections.singletonMap(topicName, NewPartitions.increaseTo(effectivePartitionCount)));
partitions.all().get(operationTimeout, TimeUnit.SECONDS);
CreatePartitionsResult partitions = adminClient
.createPartitions(Collections.singletonMap(topicName,
NewPartitions.increaseTo(effectivePartitionCount)));
partitions.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
else if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead. There will be "
+ (effectivePartitionCount - partitionSize)
+ " idle consumers");
}
else {
throw new ProvisioningException(
"The number of expected partitions was: " + partitionCount
+ ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead. "
+ "Consider either increasing the partition count of the topic or enabling "
+ "`autoAddPartitions`");
}
}
}
else {
// always consider minPartitionCount for topic creation
final int effectivePartitionCount = Math.max(
this.configurationProperties.getMinPartitionCount(), partitionCount);
this.metadataRetryOperations.execute((context) -> {
NewTopic newTopic;
Map<Integer, List<Integer>> replicasAssignments = topicProperties
.getReplicasAssignments();
if (replicasAssignments != null && replicasAssignments.size() > 0) {
newTopic = new NewTopic(topicName,
topicProperties.getReplicasAssignments());
}
else {
newTopic = new NewTopic(topicName, effectivePartitionCount,
topicProperties.getReplicationFactor() != null
? topicProperties.getReplicationFactor()
: this.configurationProperties
.getReplicationFactor());
}
if (topicProperties.getProperties().size() > 0) {
newTopic.configs(topicProperties.getProperties());
}
CreateTopicsResult createTopicsResult = adminClient
.createTopics(Collections.singletonList(newTopic));
try {
createTopicsResult.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception ex) {
if (ex instanceof ExecutionException) {
if (ex.getCause() instanceof TopicExistsException) {
if (this.logger.isWarnEnabled()) {
this.logger.warn("Attempt to create topic: " + topicName
+ ". Topic already exists.");
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
return null;
@@ -386,32 +443,71 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
}
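For context, the resize branch above reduces to a couple of Kafka admin calls. A minimal standalone sketch of growing a topic's partition count (the broker address, topic name, and target count are assumed for illustration, not taken from this changeset):

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitionsSketch {
	public static void main(String[] args) throws Exception {
		Properties props = new Properties();
		// Assumed broker address, for illustration only.
		props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
		try (AdminClient admin = AdminClient.create(props)) {
			// Grow an existing topic to six partitions, mirroring what the
			// provisioner does when autoAddPartitions is enabled and the broker
			// reports fewer partitions than requested.
			admin.createPartitions(Collections.singletonMap("my-topic",
					NewPartitions.increaseTo(6))).all().get(30, TimeUnit.SECONDS);
		}
	}
}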
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable, final String topicName) {
try {
return this.metadataRetryOperations.execute((context) -> {
Collection<PartitionInfo> partitions = Collections.emptyList();
try {
// This call may return null or throw an exception.
partitions = callable.call();
}
catch (Exception ex) {
// The above call can potentially throw exceptions such as a timeout. If we
// can determine that the exception was due to an unknown topic on the
// broker, simply rethrow it.
if (ex instanceof UnknownTopicOrPartitionException) {
throw ex;
}
this.logger.error("Failed to obtain partition information", ex);
}
// In some cases, the above partition query may not throw an UnknownTopic..Exception for various reasons.
// For that reason, we force another query to ensure that the topic is present on the server.
if (CollectionUtils.isEmpty(partitions)) {
try (AdminClient adminClient = AdminClient
.create(this.adminClientProperties)) {
final DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
describeTopicsResult.all().get();
}
catch (ExecutionException ex) {
if (ex.getCause() instanceof UnknownTopicOrPartitionException) {
throw (UnknownTopicOrPartitionException) ex.getCause();
}
else {
logger.warn("No partitions have been retrieved for the topic "
+ "(" + topicName
+ "). This will affect the health check.");
}
}
}
// do a sanity check on the partition set
int partitionSize = CollectionUtils.isEmpty(partitions) ? 0 : partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead. There will be "
+ (partitionCount - partitionSize) + " idle consumers");
}
else {
throw new IllegalStateException(
"The number of expected partitions was: " + partitionCount
+ ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead");
}
}
return partitions;
});
}
catch (Exception ex) {
this.logger.error("Cannot initialize Binder", ex);
throw new BinderException("Cannot initialize binder:", ex);
}
}
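The forced second query above leans on the fact that describeTopics() completes exceptionally, with an UnknownTopicOrPartitionException as the cause, when the topic is absent. A condensed sketch of just that probe (adminProps is an assumed Properties instance carrying bootstrap.servers; the topic name is hypothetical):

try (AdminClient admin = AdminClient.create(adminProps)) {
	admin.describeTopics(Collections.singletonList("my-topic")).all().get();
}
catch (ExecutionException ex) {
	if (ex.getCause() instanceof UnknownTopicOrPartitionException) {
		// The topic does not exist on the broker.
	}
}
catch (InterruptedException ex) {
	Thread.currentThread().interrupt();
}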
@@ -428,21 +524,20 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
@Override
public String getName() {
return this.producerDestinationName;
}
@Override
public String getNameForPartition(int partition) {
return this.producerDestinationName;
}
@Override
public String toString() {
return "KafkaProducerDestination{" + "producerDestinationName='"
+ producerDestinationName + '\'' + ", partitions=" + partitions + '}';
}
}
private static final class KafkaConsumerDestination implements ConsumerDestination {
@@ -461,7 +556,8 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
this(consumerDestinationName, partitions, null);
}
KafkaConsumerDestination(String consumerDestinationName, Integer partitions,
String dlqName) {
this.consumerDestinationName = consumerDestinationName;
this.partitions = partitions;
this.dlqName = dlqName;
@@ -474,11 +570,11 @@ public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsu
@Override
public String toString() {
return "KafkaConsumerDestination{" + "consumerDestinationName='"
+ consumerDestinationName + '\'' + ", partitions=" + partitions
+ ", dlqName='" + dlqName + '\'' + '}';
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,6 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.utils;
import java.io.UnsupportedEncodingException;
/**
* Utility methods related to Kafka topics.
*
* @author Soby Chacko
*/
public final class KafkaTopicUtils {
@@ -28,22 +30,25 @@ public final class KafkaTopicUtils {
}
/**
* Validate topic name. Allowed chars are ASCII alphanumerics, '.', '_' and '-'.
* @param topicName name of the topic
*/
public static void validateTopicName(String topicName) {
try {
byte[] utf8 = topicName.getBytes("UTF-8");
for (byte b : utf8) {
if (!((b >= 'a') && (b <= 'z') || (b >= 'A') && (b <= 'Z')
|| (b >= '0') && (b <= '9') || (b == '.') || (b == '-')
|| (b == '_'))) {
throw new IllegalArgumentException(
"Topic name can only have ASCII alphanumerics, '.', '_' and '-', but was: '"
+ topicName + "'");
}
}
}
catch (UnsupportedEncodingException ex) {
throw new AssertionError(ex); // Can't happen
}
}
}
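A quick illustration of the validator's contract, with hypothetical topic names:

// Passes: ASCII alphanumerics, '.' and '-' are all allowed.
KafkaTopicUtils.validateTopicName("orders.v1-events");
// Throws IllegalArgumentException: '/' is not an allowed character.
KafkaTopicUtils.validateTopicName("orders/v1");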

View File

@@ -0,0 +1,108 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaBinderConfigurationPropertiesTest {
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setGroupId("group1");
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setEnableAutoCommit(true);
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConsumerProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConsumerProps() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -35,7 +35,6 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.fail;
/**
* @author Gary Russell
* @since 2.0
@@ -47,18 +46,27 @@ public class KafkaTopicProvisionerTests {
@Test
public void bootPropertiesOverriddenExceptServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
"PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:1234"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG,
"SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG,
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:9092");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder.configs", Map.class);
assertThat(
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
.isEqualTo("localhost:1234");
adminClient.close();
}
@@ -66,33 +74,44 @@ public class KafkaTopicProvisionerTests {
@Test
public void bootPropertiesOverriddenIncludingServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
"PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:9092"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG,
"SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG,
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:1234");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder.configs", Map.class);
assertThat(
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
.isEqualTo("localhost:1234");
adminClient.close();
}
@Test
public void brokersInvalid() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
"localhost:1234");
try {
new KafkaTopicProvisioner(binderConfig, bootConfig);
fail("Expected illegal state");
}
catch (IllegalStateException e) {
assertThat(e.getMessage()).isEqualTo(
"Set binder bootstrap servers via the 'brokers' property, not 'configuration'");
}
}

View File

@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M1</version>
</parent>
<properties>
@@ -22,6 +22,11 @@
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
@@ -102,14 +107,14 @@
<version>${avro.version}</version>
<executions>
<execution>
<phase>generate-test-sources</phase>
<goals>
<goal>schema</goal>
<goal>protocol</goal>
<goal>idl-protocol</goal>
</goals>
<configuration>
<testOutputDirectory>${project.basedir}/target/generated-test-sources</testOutputDirectory>
<testSourceDirectory>${project.basedir}/src/test/resources/avro</testSourceDirectory>
</configuration>
</execution>
</executions>

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -36,27 +36,35 @@ import org.springframework.util.StringUtils;
/**
* An {@link AbstractBinder} implementation for {@link GlobalKTable}.
*
* <p>
* Provides only consumer binding for the bound {@link GlobalKTable}. Output bindings are
* not allowed on this binder.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBinder extends
// @checkstyle:off
AbstractBinder<GlobalKTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements
ExtendedPropertiesBinder<GlobalKTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
// @checkstyle:on
public GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
@@ -64,31 +72,39 @@ public class GlobalKTableBinder extends
@Override
@SuppressWarnings("unchecked")
protected Binding<GlobalKTable<Object, Object>> doBindConsumer(String name,
String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<GlobalKTable<Object, Object>> doBindProducer(String name,
GlobalKTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException(
"No producer level binding is allowed for GlobalKTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(
String channelName) {
throw new UnsupportedOperationException(
"No producer binding is allowed and therefore no properties");
}
@Override
@@ -98,7 +114,8 @@ public class GlobalKTableBinder extends
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedPropertiesEntryClass();
}
}
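Because doBindProducer throws unconditionally, a bound GlobalKTable can only sit on the input side. A hypothetical annotation-based binding interface (names assumed) that this binder would serve:

import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.annotation.Input;

public interface InventoryBinding {

	// Consumer-only: declaring an output GlobalKTable binding would fail at bind time.
	@Input("inventory")
	GlobalKTable<String, String> inventory();

}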

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,32 +19,58 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for GlobalKTable binder.
*
* @author Soby Chacko
* @since 2.1.0
*/
@Configuration
@Import(KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar.class)
public class GlobalKTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public GlobalKTableBinder GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
return new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -27,14 +27,16 @@ import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for
* {@link GlobalKTable}
*
* Only input bindings are created, as output bindings on GlobalKTable are not allowed.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBoundElementFactory
extends AbstractBindingTargetFactory<GlobalKTable> {
private final BindingServiceProperties bindingServiceProperties;
@@ -45,12 +47,17 @@ public class GlobalKTableBoundElementFactory extends AbstractBindingTargetFactor
@Override
public GlobalKTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
// @checkstyle:off
GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler wrapper = new GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler();
// @checkstyle:on
ProxyFactory proxyFactory = new ProxyFactory(
GlobalKTableBoundElementFactory.GlobalKTableWrapper.class,
GlobalKTable.class);
proxyFactory.addAdvice(wrapper);
return (GlobalKTable) proxyFactory.getProxy();
@@ -58,14 +65,21 @@ public class GlobalKTableBoundElementFactory extends AbstractBindingTargetFactor
@Override
public GlobalKTable createOutput(String name) {
throw new UnsupportedOperationException(
"Outbound operations are not allowed on target type GlobalKTable");
}
/**
* Wrapper for GlobalKTable proxy.
*/
public interface GlobalKTableWrapper {
void wrap(GlobalKTable<Object, Object> delegate);
}
private static class GlobalKTableWrapperHandler implements
GlobalKTableBoundElementFactory.GlobalKTableWrapper, MethodInterceptor {
private GlobalKTable<Object, Object> delegate;
@@ -77,17 +91,25 @@ public class GlobalKTableBoundElementFactory extends AbstractBindingTargetFactor
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass()
.equals(GlobalKTable.class)) {
Assert.notNull(this.delegate,
"Trying to prepareConsumerBinding " + methodInvocation.getMethod()
+ " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate,
methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass()
.equals(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this,
methodInvocation.getArguments());
}
else {
throw new IllegalStateException(
"Only GlobalKTable method invocations are permitted");
}
}
}
}
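The factory hands out a proxy first and lets the binding process supply the real GlobalKTable later. A sketch of that two-step handshake (variable names are assumed):

// 1. The binding target is created eagerly as a proxy.
GlobalKTable<Object, Object> target = factory.createInput("inventory");
// 2. Later, the binder wraps the real table built by the Kafka Streams topology.
((GlobalKTableBoundElementFactory.GlobalKTableWrapper) target).wrap(realGlobalKTable);
// From this point, GlobalKTable calls on the proxy are forwarded to realGlobalKTable.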

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -30,10 +30,10 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.util.StringUtils;
/**
* Services pertinent to the interactive query capabilities of Kafka Streams. This class
* provides services such as querying for a particular store, which instance is hosting a
* particular store etc. This is part of the public API of the kafka streams binder and
* users can inject this service in their applications to make use of it.
*
* @author Soby Chacko
* @author Renwei Han
@@ -42,52 +42,54 @@ import org.springframework.util.StringUtils;
public class InteractiveQueryService {
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
/**
* Constructor for InteractiveQueryService.
* @param kafkaStreamsRegistry holding {@link KafkaStreamsRegistry}
* @param binderConfigurationProperties kafka Streams binder configuration properties
*/
public InteractiveQueryService(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.binderConfigurationProperties = binderConfigurationProperties;
}
/**
* Retrieve and return a queryable store by name created in the application.
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
*/
public <T> T getQueryableStore(String storeName, QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
}
}
catch (InvalidStateStoreException ignored) {
// pass through
}
}
return null;
}
/**
* Gets the current {@link HostInfo} that the calling kafka streams application is
* running on.
*
* Note that the end user applications must provide `application.server` as a
* configuration property when calling this method. If this is not available, then
* null is returned.
* @return the current {@link HostInfo}
*/
public HostInfo getCurrentHostInfo() {
Map<String, String> configuration = this.binderConfigurationProperties
.getConfiguration();
if (configuration.containsKey("application.server")) {
String applicationServer = configuration.get("application.server");
@@ -99,26 +101,27 @@ public class InteractiveQueryService {
}
/**
* Gets the {@link HostInfo} where the provided store and key are hosted on. This may
* not be the current host that is running the application. Kafka Streams will look
* through all the consumer instances under the same application id and retrieve the
* proper host.
*
* Note that the end user applications must provide `application.server` as a
* configuration property for all the application instances when calling this method.
* If this is not available, then null may be returned.
* @param <K> generic type for key
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return the {@link HostInfo} where the key for the provided store is hosted
* currently
*/
public <K> HostInfo getHostInfo(String store, K key, Serializer<K> serializer) {
StreamsMetadata streamsMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
.stream()
.map((k) -> Optional.ofNullable(k.metadataForKey(store, key, serializer)))
.filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
return streamsMetadata != null ? streamsMetadata.hostInfo() : null;
}
}
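Since this service is part of the binder's public API, a typical use is exposing store lookups over HTTP. A hypothetical controller (store name, endpoint, and types are assumed) built on getQueryableStore:

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CountsController {

	private final InteractiveQueryService interactiveQueryService;

	public CountsController(InteractiveQueryService interactiveQueryService) {
		this.interactiveQueryService = interactiveQueryService;
	}

	@GetMapping("/counts/{key}")
	public Long count(@PathVariable String key) {
		// "counts-store" is an assumed store name, for illustration only.
		ReadOnlyKeyValueStore<String, Long> store = this.interactiveQueryService
				.getQueryableStore("counts-store",
						QueryableStoreTypes.<String, Long>keyValueStore());
		// getQueryableStore returns null if no stream hosts the store yet.
		return (store != null) ? store.get(key) : null;
	}

}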

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -40,8 +40,8 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.util.StringUtils;
/**
* {@link org.springframework.cloud.stream.binder.Binder} implementation for
* {@link KStream}. This implementation extends from the {@link AbstractBinder} directly.
* <p>
* Provides both producer and consumer bindings for the bound KStream.
*
@@ -49,15 +49,22 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
*/
class KStreamBinder extends
// @checkstyle:off
AbstractBinder<KStream<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements
ExtendedPropertiesBinder<KStream<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private static final Log LOG = LogFactory.getLog(KStreamBinder.class);
private final KafkaTopicProvisioner kafkaTopicProvisioner;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
@@ -69,11 +76,11 @@ class KStreamBinder extends
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
@@ -84,56 +91,77 @@ class KStreamBinder extends
@Override
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget,
// @checkstyle:off
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
this.kafkaStreamsBindingInformationCatalogue
.registerConsumerProperties(inputTarget, properties.getExtension());
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindProducer(String name,
KStream<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
this.kafkaTopicProvisioner.provisionProducerDestination(name,
extendedProducerProperties);
Serde<?> keySerde = this.keyValueSerdeResolver
.getOuboundKeySerde(properties.getExtension());
Serde<?> valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties,
properties.getExtension());
to(properties.isUseNativeEncoding(), name, outboundBindTarget,
(Serde<Object>) keySerde, (Serde<Object>) valueSerde);
return new DefaultBinding<>(name, null, outboundBindTarget, null);
}
@SuppressWarnings("unchecked")
private void to(boolean isNativeEncoding, String name,
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
Serde<Object> valueSerde) {
if (!isNativeEncoding) {
LOG.info("Native encoding is disabled for " + name
+ ". Outbound message conversion done by Spring Cloud Stream.");
this.kafkaStreamsMessageConversionDelegate
.serializeOnOutbound(outboundBindTarget)
.to(name, Produced.with(keySerde, valueSerde));
}
else {
LOG.info("Native encoding is enabled for " + name
+ ". Outbound serialization done at the broker.");
outboundBindTarget.to(name, Produced.with(keySerde, valueSerde));
}
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedProducerProperties(channelName);
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
@@ -144,6 +172,8 @@ class KStreamBinder extends
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedPropertiesEntryClass();
}
}
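The to(..) helper above chooses between framework conversion and native Serdes. With native encoding enabled, the outbound write reduces to the plain Kafka Streams API, roughly as follows (topic name and Serdes are assumed):

// Native encoding: Kafka Streams serializes with the resolved Serdes at write time.
outboundKStream.to("output-topic", Produced.with(Serdes.String(), Serdes.String()));
// With native encoding disabled, the conversion delegate serializes payloads first
// and the same to(..) call receives the already-converted stream.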

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,87 +19,82 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for KStream binder.
*
* @author Marius Bogoevici
* @author Gary Russell
* @author Soby Chacko
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class })
public class KStreamBinderConfiguration {
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(kafkaStreamsBinderConfigurationProperties,
kafkaProperties);
}
@Bean
public KStreamBinder kStreamBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties,
kafkaTopicProvisioner, KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver,
kafkaStreamsDlqDispatchers);
kStreamBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return kStreamBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsMessageConversionDelegate.class.getSimpleName(),
outerContext.getBean(KafkaStreamsMessageConversionDelegate.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
beanFactory.registerSingleton(KeyValueSerdeResolver.class.getSimpleName(),
outerContext.getBean(KeyValueSerdeResolver.class));
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
};
}
}
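Both bridging mechanisms shown above solve the same problem: the binder lives in a child ApplicationContext and must reuse singletons from the parent context (registered under the bean name "outerContext") instead of instantiating duplicates. The MethodInvokingFactoryBean variant can be reduced to the following sketch, meant to run inside an ImportBeanDefinitionRegistrar; SharedService is a hypothetical placeholder for any bean owned by the parent context:

import org.springframework.beans.factory.config.MethodInvokingFactoryBean;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;

static void bridgeFromOuterContext(BeanDefinitionRegistry registry) {
    // The factory bean lazily calls outerContext.getBean(SharedService.class),
    // so the child context resolves the parent's singleton instead of a copy.
    AbstractBeanDefinition sharedServiceBean = BeanDefinitionBuilder
            .genericBeanDefinition(MethodInvokingFactoryBean.class)
            .addPropertyReference("targetObject", "outerContext")
            .addPropertyValue("targetMethod", "getBean")
            .addPropertyValue("arguments", SharedService.class)
            .getBeanDefinition();
    registry.registerBeanDefinition(SharedService.class.getSimpleName(), sharedServiceBean);
}

The BeanFactoryPostProcessor variant achieves the same effect eagerly via registerSingleton; the factory-bean route is required when only a BeanDefinitionRegistry is available, as in the registrar above.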

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -28,10 +28,11 @@ import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link KStream}.
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory}
* for {@link KStream}.
*
* The implementation creates proxies for both input and output binding.
* The actual target will be created downstream through further binding process.
* The implementation creates proxies for both input and output binding. The actual target
* will be created downstream through further binding process.
*
* @author Marius Bogoevici
* @author Soby Chacko
@@ -43,7 +44,7 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
KStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
super(KStream.class);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
@@ -51,8 +52,9 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
@Override
public KStream createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
return createProxyForKStream(name);
}
@@ -64,25 +66,32 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
private KStream createProxyForKStream(String name) {
KStreamWrapperHandler wrapper= new KStreamWrapperHandler();
KStreamWrapperHandler wrapper = new KStreamWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(KStreamWrapper.class, KStream.class);
proxyFactory.addAdvice(wrapper);
KStream proxy = (KStream) proxyFactory.getProxy();
//Add the binding properties to the catalogue for later retrieval during further binding steps downstream.
BindingProperties bindingProperties = bindingServiceProperties.getBindingProperties(name);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(proxy, bindingProperties);
// Add the binding properties to the catalogue for later retrieval during further
// binding steps downstream.
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(name);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(proxy,
bindingProperties);
return proxy;
}
/**
* Wrapper object for KStream proxy.
*/
public interface KStreamWrapper {
void wrap(KStream<Object, Object> delegate);
}
private static class KStreamWrapperHandler implements KStreamWrapper, MethodInterceptor {
private static class KStreamWrapperHandler
implements KStreamWrapper, MethodInterceptor {
private KStream<Object, Object> delegate;
@@ -95,16 +104,23 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KStream.class)) {
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
Assert.notNull(this.delegate,
"Trying to prepareConsumerBinding " + methodInvocation.getMethod()
+ " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate,
methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(KStreamWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
else if (methodInvocation.getMethod().getDeclaringClass()
.equals(KStreamWrapper.class)) {
return methodInvocation.getMethod().invoke(this,
methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KStream method invocations are permitted");
throw new IllegalStateException(
"Only KStream method invocations are permitted");
}
}
}
}
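The wrapper/interceptor machinery above implements late binding: a proxy that implements both KStream and KStreamWrapper is handed to the application before the real KStream exists, and every KStream call is forwarded to a delegate injected later through wrap(..). A minimal self-contained sketch of the same pattern, using a hypothetical Greeter interface in place of KStream:

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.util.Assert;

public class LateBindingProxyDemo {

    interface Greeter { String greet(String name); }

    interface GreeterWrapper { void wrap(Greeter delegate); }

    static class Handler implements GreeterWrapper, MethodInterceptor {

        private Greeter delegate;

        @Override
        public void wrap(Greeter delegate) { this.delegate = delegate; }

        @Override
        public Object invoke(MethodInvocation invocation) throws Throwable {
            if (invocation.getMethod().getDeclaringClass().equals(Greeter.class)) {
                Assert.notNull(this.delegate, "No delegate has been set yet");
                return invocation.getMethod().invoke(this.delegate, invocation.getArguments());
            }
            // GreeterWrapper (and Object) methods are served by the handler itself.
            return invocation.getMethod().invoke(this, invocation.getArguments());
        }
    }

    public static void main(String[] args) {
        ProxyFactory proxyFactory = new ProxyFactory(GreeterWrapper.class, Greeter.class);
        proxyFactory.addAdvice(new Handler());
        Greeter proxy = (Greeter) proxyFactory.getProxy();
        ((GreeterWrapper) proxy).wrap(name -> "hello, " + name); // late binding
        System.out.println(proxy.greet("kafka"));                // -> hello, kafka
    }
}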

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -23,16 +23,21 @@ import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
/**
* {@link StreamListenerParameterAdapter} for KStream.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerParameterAdapter implements StreamListenerParameterAdapter<KStream<?,?>, KStream<?, ?>> {
class KStreamStreamListenerParameterAdapter
implements StreamListenerParameterAdapter<KStream<?, ?>, KStream<?, ?>> {
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue;
KStreamStreamListenerParameterAdapter(KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KStreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.KafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
}
@@ -49,11 +54,14 @@ class KStreamStreamListenerParameterAdapter implements StreamListenerParameterAd
ResolvableType resolvableType = ResolvableType.forMethodParameter(parameter);
final Class<?> valueClass = (resolvableType.getGeneric(1).getRawClass() != null)
? (resolvableType.getGeneric(1).getRawClass()) : Object.class;
if (this.KafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(bindingTarget)) {
if (this.KafkaStreamsBindingInformationCatalogue
.isUseNativeDecoding(bindingTarget)) {
return bindingTarget;
}
else {
return kafkaStreamsMessageConversionDelegate.deserializeOnInbound(valueClass, bindingTarget);
return this.kafkaStreamsMessageConversionDelegate
.deserializeOnInbound(valueClass, bindingTarget);
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -24,19 +24,24 @@ import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
/**
* {@link StreamListenerResultAdapter} for KStream.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamStreamListenerResultAdapter implements StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
class KStreamStreamListenerResultAdapter implements
StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
@Override
public boolean supports(Class<?> resultType, Class<?> boundElement) {
return KStream.class.isAssignableFrom(resultType) && KStream.class.isAssignableFrom(boundElement);
return KStream.class.isAssignableFrom(resultType)
&& KStream.class.isAssignableFrom(boundElement);
}
@Override
@SuppressWarnings("unchecked")
public Closeable adapt(KStream streamListenerResult, KStreamBoundElementFactory.KStreamWrapper boundElement) {
public Closeable adapt(KStream streamListenerResult,
KStreamBoundElementFactory.KStreamWrapper boundElement) {
boundElement.wrap(streamListenerResult);
return new NoOpCloseable();
}
@@ -49,4 +54,5 @@ class KStreamStreamListenerResultAdapter implements StreamListenerResultAdapter<
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -35,16 +35,21 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.util.StringUtils;
/**
* {@link org.springframework.cloud.stream.binder.Binder} implementation for {@link KTable}.
* This implementation extends from the {@link AbstractBinder} directly.
* {@link org.springframework.cloud.stream.binder.Binder} implementation for
* {@link KTable}. This implementation extends from the {@link AbstractBinder} directly.
*
* Provides only consumer binding for the bound KTable as output bindings are not allowed on it.
* Provides only consumer binding for the bound KTable as output bindings are not allowed
* on it.
*
* @author Soby Chacko
*/
class KTableBinder extends
// @checkstyle:off
AbstractBinder<KTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<KTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
implements
ExtendedPropertiesBinder<KTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
@@ -52,10 +57,14 @@ class KTableBinder extends
private Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties, KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
// @checkstyle:on
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
@@ -63,31 +72,43 @@ class KTableBinder extends
@Override
@SuppressWarnings("unchecked")
protected Binding<KTable<Object, Object>> doBindConsumer(String name, String group, KTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
protected Binding<KTable<Object, Object>> doBindConsumer(String name, String group,
KTable<Object, Object> inputTarget,
// @checkstyle:off
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
group = this.binderConfigurationProperties.getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, getApplicationContext(),
kafkaTopicProvisioner,
binderConfigurationProperties, properties, kafkaStreamsDlqDispatchers);
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<KTable<Object, Object>> doBindProducer(String name, KTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException("No producer level binding is allowed for KTable");
protected Binding<KTable<Object, Object>> doBindProducer(String name,
KTable<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
throw new UnsupportedOperationException(
"No producer level binding is allowed for KTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties(channelName);
public KafkaStreamsProducerProperties getExtendedProducerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedProducerProperties(channelName);
}
@Override
@@ -97,6 +118,8 @@ class KTableBinder extends
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
return this.kafkaStreamsExtendedBindingProperties
.getExtendedPropertiesEntryClass();
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -28,39 +28,50 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for KTable binder.
*
* @author Soby Chacko
*/
@SuppressWarnings("ALL")
@Configuration
@Import(KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar.class)
public class KTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
ApplicationContext outerContext = (ApplicationContext) beanFactory.getBean("outerContext");
beanFactory.registerSingleton(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBindingInformationCatalogue.class));
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public KTableBinder kTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KTableBinder kStreamBinder = new KTableBinder(binderConfigurationProperties, kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
public KTableBinder kTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KTableBinder kStreamBinder = new KTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
return kStreamBinder;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -27,7 +27,8 @@ import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link KTable}
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for
* {@link KTable}
*
* Only input bindings are created, as output bindings are not allowed on KTable.
*
@@ -44,12 +45,14 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
@Override
public KTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
KTableBoundElementFactory.KTableWrapperHandler wrapper= new KTableBoundElementFactory.KTableWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(KTableBoundElementFactory.KTableWrapper.class, KTable.class);
KTableBoundElementFactory.KTableWrapperHandler wrapper = new KTableBoundElementFactory.KTableWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(
KTableBoundElementFactory.KTableWrapper.class, KTable.class);
proxyFactory.addAdvice(wrapper);
return (KTable) proxyFactory.getProxy();
@@ -58,14 +61,21 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
@Override
@SuppressWarnings("unchecked")
public KTable createOutput(final String name) {
throw new UnsupportedOperationException("Outbound operations are not allowed on target type KTable");
throw new UnsupportedOperationException(
"Outbound operations are not allowed on target type KTable");
}
/**
* Wrapper for KTable proxy.
*/
public interface KTableWrapper {
void wrap(KTable<Object, Object> delegate);
}
private static class KTableWrapperHandler implements KTableBoundElementFactory.KTableWrapper, MethodInterceptor {
private static class KTableWrapperHandler
implements KTableBoundElementFactory.KTableWrapper, MethodInterceptor {
private KTable<Object, Object> delegate;
@@ -78,16 +88,23 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KTable.class)) {
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
Assert.notNull(this.delegate,
"Trying to prepareConsumerBinding " + methodInvocation.getMethod()
+ " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate,
methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(KTableBoundElementFactory.KTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
else if (methodInvocation.getMethod().getDeclaringClass()
.equals(KTableBoundElementFactory.KTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this,
methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KTable method invocations are permitted");
throw new IllegalStateException(
"Only KTable method invocations are permitted");
}
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -25,17 +25,24 @@ import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Application support configuration for Kafka Streams binder.
*
* @deprecated Features provided by this class can be configured directly in the application itself using Kafka Streams.
* @author Soby Chacko
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
@Deprecated
public class KafkaStreamsApplicationSupportAutoConfiguration {
@Bean
@ConditionalOnProperty("spring.cloud.stream.kafka.streams.timeWindow.length")
public TimeWindows configuredTimeWindow(KafkaStreamsApplicationSupportProperties processorProperties) {
public TimeWindows configuredTimeWindow(
KafkaStreamsApplicationSupportProperties processorProperties) {
return processorProperties.getTimeWindow().getAdvanceBy() > 0
? TimeWindows.of(processorProperties.getTimeWindow().getLength()).advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
? TimeWindows.of(processorProperties.getTimeWindow().getLength())
.advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
: TimeWindows.of(processorProperties.getTimeWindow().getLength());
}
}
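The window bean above is created only when the length property is present. A hedged application.properties sketch (property names inferred from the @ConditionalOnProperty value and the getters used above; millisecond values are illustrative):

# Tumbling windows of 30s; an advanceBy > 0 turns them into hopping windows
spring.cloud.stream.kafka.streams.timeWindow.length=30000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=10000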

View File

@@ -0,0 +1,81 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.Status;
/**
* Health indicator for Kafka Streams.
*
* @author Arnaud Jardiné
*/
class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator {
private final KafkaStreamsRegistry kafkaStreamsRegistry;
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry) {
super("Kafka-streams health check failed");
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@Override
protected void doHealthCheck(Health.Builder builder) throws Exception {
boolean up = true;
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
up &= kStream.state().isRunning();
builder.withDetails(buildDetails(kStream));
}
builder.status(up ? Status.UP : Status.DOWN);
}
private static Map<String, Object> buildDetails(KafkaStreams kStreams) {
final Map<String, Object> details = new HashMap<>();
if (kStreams.state().isRunning()) {
for (ThreadMetadata metadata : kStreams.localThreadsMetadata()) {
details.put("threadName", metadata.threadName());
details.put("threadState", metadata.threadState());
details.put("activeTasks", taskDetails(metadata.activeTasks()));
details.put("standbyTasks", taskDetails(metadata.standbyTasks()));
}
}
return details;
}
private static Map<String, Object> taskDetails(Set<TaskMetadata> taskMetadata) {
final Map<String, Object> details = new HashMap<>();
for (TaskMetadata metadata : taskMetadata) {
details.put("taskId", metadata.taskId());
details.put("partitions",
metadata.topicPartitions().stream().map(
p -> "partition=" + p.partition() + ", topic=" + p.topic())
.collect(Collectors.toList()));
}
return details;
}
}

View File

@@ -0,0 +1,42 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.actuate.autoconfigure.health.ConditionalOnEnabledHealthIndicator;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Configuration class for Kafka-streams binder health indicator beans.
*
* @author Arnaud Jardiné
*/
@Configuration
@ConditionalOnClass(name = "org.springframework.boot.actuate.health.HealthIndicator")
@ConditionalOnEnabledHealthIndicator("binders")
class KafkaStreamsBinderHealthIndicatorConfiguration {
@Bean
@ConditionalOnBean(KafkaStreamsRegistry.class)
KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry);
}
}
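Since the configuration is guarded by @ConditionalOnEnabledHealthIndicator("binders"), the indicator only contributes to /actuator/health when that indicator group is enabled. A hedged application.properties sketch for surfacing the per-thread details built above (threadName, threadState, activeTasks, standbyTasks):

management.health.binders.enabled=true
management.endpoint.health.show-details=always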

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -31,25 +31,36 @@ import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.cloud.stream.binder.BinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.function.FunctionDetectorCondition;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.serde.CompositeNonNativeSerde;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BinderProperties;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Kafka Streams binder configuration.
*
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
@@ -59,72 +70,145 @@ import org.springframework.util.StringUtils;
@AutoConfigureAfter(BindingServiceConfiguration.class)
public class KafkaStreamsBinderSupportAutoConfiguration {
private static final String KSTREAM_BINDER_TYPE = "kstream";
private static final String KTABLE_BINDER_TYPE = "ktable";
private static final String GLOBALKTABLE_BINDER_TYPE = "globalktable";
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(KafkaProperties kafkaProperties) {
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(
KafkaProperties kafkaProperties, ConfigurableEnvironment environment,
BindingServiceProperties properties) {
final Map<String, BinderConfiguration> binderConfigurations = getBinderConfigurations(
properties);
for (Map.Entry<String, BinderConfiguration> entry : binderConfigurations
.entrySet()) {
final BinderConfiguration binderConfiguration = entry.getValue();
final String binderType = binderConfiguration.getBinderType();
if (binderType != null && (binderType.equals(KSTREAM_BINDER_TYPE)
|| binderType.equals(KTABLE_BINDER_TYPE)
|| binderType.equals(GLOBALKTABLE_BINDER_TYPE))) {
Map<String, Object> binderProperties = new HashMap<>();
this.flatten(null, binderConfiguration.getProperties(), binderProperties);
environment.getPropertySources().addFirst(
new MapPropertySource("kafkaStreamsBinderEnv", binderProperties));
}
}
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
// TODO: Lifted from core - good candidate for exposing as a utility method in core.
private static Map<String, BinderConfiguration> getBinderConfigurations(
BindingServiceProperties properties) {
Map<String, BinderConfiguration> binderConfigurations = new HashMap<>();
Map<String, BinderProperties> declaredBinders = properties.getBinders();
for (Map.Entry<String, BinderProperties> binderEntry : declaredBinders
.entrySet()) {
BinderProperties binderProperties = binderEntry.getValue();
binderConfigurations.put(binderEntry.getKey(),
new BinderConfiguration(binderProperties.getType(),
binderProperties.getEnvironment(),
binderProperties.isInheritEnvironment(),
binderProperties.isDefaultCandidate()));
}
return binderConfigurations;
}
// TODO: Lifted from core - good candidate for exposing as a utility method in core.
@SuppressWarnings("unchecked")
private void flatten(String propertyName, Object value,
Map<String, Object> flattenedProperties) {
if (value instanceof Map) {
((Map<Object, Object>) value).forEach((k, v) -> flatten(
(propertyName != null ? propertyName + "." : "") + k, v,
flattenedProperties));
}
else {
flattenedProperties.put(propertyName, value.toString());
}
}
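flatten(..) recursively collapses the nested environment map of a binder configuration into dotted property keys before it is pushed into the Environment as a MapPropertySource. A small illustration of the expected behavior (sketch only, assuming the usual java.util imports):

Map<String, Object> nested = new HashMap<>();
nested.put("binder", Collections.singletonMap("brokers", "broker1:9092"));
Map<String, Object> flat = new HashMap<>();
flatten(null, nested, flat);
// flat now contains a single entry: "binder.brokers" -> "broker1:9092"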
@Bean
public KafkaStreamsConfiguration kafkaStreamsConfiguration(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
Environment environment) {
KafkaProperties kafkaProperties = binderConfigurationProperties.getKafkaProperties();
public KafkaStreamsConfiguration kafkaStreamsConfiguration(
KafkaStreamsBinderConfigurationProperties properties,
Environment environment) {
KafkaProperties kafkaProperties = properties.getKafkaProperties();
Map<String, Object> streamsProperties = kafkaProperties.buildStreamsProperties();
if (kafkaProperties.getStreams().getApplicationId() == null) {
String applicationName = environment.getProperty("spring.application.name");
if (applicationName != null) {
streamsProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationName);
streamsProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
applicationName);
}
}
return new KafkaStreamsConfiguration(streamsProperties);
}
@Bean("streamConfigGlobalProperties")
public Map<String, Object> streamConfigGlobalProperties(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
public Map<String, Object> streamConfigGlobalProperties(
KafkaStreamsBinderConfigurationProperties configProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
Properties properties = kafkaStreamsConfiguration.asProperties();
// Override the Spring Boot bootstrap servers setting, if it is still at its
// default, with the value configured in the binder
if (ObjectUtils.isEmpty(properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG))) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
}
else {
Object bootstrapServerConfig = properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
Object bootstrapServerConfig = properties
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootstrapServerConfig instanceof String) {
@SuppressWarnings("unchecked")
String bootStrapServers = (String) properties
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
}
}
}
String binderProvidedApplicationId = binderConfigurationProperties.getApplicationId();
String binderProvidedApplicationId = configProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
properties.put(StreamsConfig.APPLICATION_ID_CONFIG, binderProvidedApplicationId);
properties.put(StreamsConfig.APPLICATION_ID_CONFIG,
binderProvidedApplicationId);
}
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
Serdes.ByteArraySerde.class.getName());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class.getName());
} else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
}
else if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class.getName());
} else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
}
else if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
SendToDlqAndContinue.class.getName());
}
if (!ObjectUtils.isEmpty(binderConfigurationProperties.getConfiguration())) {
properties.putAll(binderConfigurationProperties.getConfiguration());
if (!ObjectUtils.isEmpty(configProperties.getConfiguration())) {
properties.putAll(configProperties.getConfiguration());
}
return properties.entrySet().stream().collect(
Collectors.toMap(e -> String.valueOf(e.getKey()), Map.Entry::getValue));
Collectors.toMap((e) -> String.valueOf(e.getKey()), Map.Entry::getValue));
}
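The three serdeError branches above map a single binder property onto Kafka Streams' default deserialization exception handler. A hedged application.properties sketch (the prefix comes from the @ConfigurationProperties annotation on binderConfigurationProperties above):

# One of: logAndContinue (LogAndContinueExceptionHandler),
#         logAndFail (LogAndFailExceptionHandler), sendToDlq (SendToDlqAndContinue)
spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq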
@Bean
@@ -134,8 +218,11 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
public KStreamStreamListenerParameterAdapter kstreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kstreamBoundMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamStreamListenerParameterAdapter(kstreamBoundMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue);
KafkaStreamsMessageConversionDelegate kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamStreamListenerParameterAdapter(
kstreamBoundMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue);
}
@Bean
@@ -147,14 +234,30 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
ObjectProvider<CleanupConfig> cleanupConfig) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(bindingServiceProperties,
kafkaStreamsExtendedBindingProperties, keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(
bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters,
cleanupConfig.getIfUnique());
}
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(CompositeMessageConverterFactory compositeMessageConverterFactory,
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
ObjectProvider<CleanupConfig> cleanupConfig,
FunctionCatalog functionCatalog, BindableProxyFactory bindableProxyFactory) {
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
cleanupConfig.getIfUnique(), functionCatalog, bindableProxyFactory);
}
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(
CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
@@ -163,25 +266,29 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
}
@Bean
public CompositeNonNativeSerde compositeNonNativeSerde(CompositeMessageConverterFactory compositeMessageConverterFactory) {
public CompositeNonNativeSerde compositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
return new CompositeNonNativeSerde(compositeMessageConverterFactory);
}
@Bean
public KStreamBoundElementFactory kStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
public KStreamBoundElementFactory kStreamBoundElementFactory(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamBoundElementFactory(bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue);
}
@Bean
public KTableBoundElementFactory kTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
public KTableBoundElementFactory kTableBoundElementFactory(
BindingServiceProperties bindingServiceProperties) {
return new KTableBoundElementFactory(bindingServiceProperties);
}
@Bean
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
return new GlobalKTableBoundElementFactory(bindingServiceProperties);
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(
BindingServiceProperties properties) {
return new GlobalKTableBoundElementFactory(properties);
}
@Bean
@@ -196,20 +303,25 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
@SuppressWarnings("unchecked")
public KeyValueSerdeResolver keyValueSerdeResolver(@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties) {
return new KeyValueSerdeResolver((Map<String, Object>) streamConfigGlobalProperties, kafkaStreamsBinderConfigurationProperties);
@ConditionalOnMissingBean
public KeyValueSerdeResolver keyValueSerdeResolver(
@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties properties) {
return new KeyValueSerdeResolver(
(Map<String, Object>) streamConfigGlobalProperties, properties);
}
@Bean
public QueryableStoreRegistry queryableStoreTypeRegistry(KafkaStreamsRegistry kafkaStreamsRegistry) {
public QueryableStoreRegistry queryableStoreTypeRegistry(
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new QueryableStoreRegistry(kafkaStreamsRegistry);
}
@Bean
public InteractiveQueryService interactiveQueryServices(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new InteractiveQueryService(kafkaStreamsRegistry, binderConfigurationProperties);
public InteractiveQueryService interactiveQueryServices(
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties properties) {
return new InteractiveQueryService(kafkaStreamsRegistry, properties);
}
@Bean
@@ -218,9 +330,10 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
}
@Bean
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new StreamsBuilderFactoryManager(kafkaStreamsBindingInformationCatalogue, kafkaStreamsRegistry);
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue catalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry);
}
@Bean("kafkaStreamsDlqDispatchers")

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -18,86 +18,69 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.config.MethodInvokingFactoryBean;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.ImportBeanDefinitionRegistrar;
import org.springframework.core.type.AnnotationMetadata;
import org.springframework.util.StringUtils;
/**
* Common methods used by various Kafka Streams types across the binders.
*
* @author Soby Chacko
*/
class KafkaStreamsBinderUtils {
final class KafkaStreamsBinderUtils {
static void prepareConsumerBinding(String name, String group, ApplicationContext context,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
private KafkaStreamsBinderUtils() {
}
static void prepareConsumerBinding(String name, String group,
ApplicationContext context, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
if (binderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
String[] inputTopics = StringUtils.commaDelimitedListToStringArray(name);
for (String inputTopic : inputTopics) {
kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group, extendedConsumerProperties);
kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group,
extendedConsumerProperties);
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName()) ?
new KafkaStreamsDlqDispatch(extendedConsumerProperties.getExtension().getDlqName(), binderConfigurationProperties,
extendedConsumerProperties.getExtension()) : null;
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils
.isEmpty(extendedConsumerProperties.getExtension().getDlqName())
? new KafkaStreamsDlqDispatch(
extendedConsumerProperties.getExtension()
.getDlqName(),
binderConfigurationProperties,
extendedConsumerProperties.getExtension())
: null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName())) {
if (StringUtils.isEmpty(
extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName, binderConfigurationProperties,
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName,
binderConfigurationProperties,
extendedConsumerProperties.getExtension());
}
SendToDlqAndContinue sendToDlqAndContinue = context.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
SendToDlqAndContinue sendToDlqAndContinue = context
.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic,
kafkaStreamsDlqDispatch);
kafkaStreamsDlqDispatchers.put(inputTopic, kafkaStreamsDlqDispatch);
}
}
}
static class KafkaStreamsMissingBeansRegistrar implements ImportBeanDefinitionRegistrar {
private static final String BEAN_NAME = "outerContext";
@Override
public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata,
BeanDefinitionRegistry registry) {
if (registry.containsBeanDefinition(BEAN_NAME)) {
AbstractBeanDefinition configBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsBinderConfigurationProperties.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), configBean);
AbstractBeanDefinition catalogueBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsBindingInformationCatalogue.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), catalogueBean);
}
}
}
}
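prepareConsumerBinding(..) wires the DLQ path: when serdeError is sendToDlq it force-enables the DLQ, provisions each comma-separated input topic, and derives the DLQ topic name as error.<topic>.<group> unless an explicit dlqName is configured. A hedged application.properties sketch (the binding name "input" is illustrative):

spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq
spring.cloud.stream.bindings.input.group=my-group
# Optional override; otherwise failed records go to error.<topic>.my-group
spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName=custom-dlq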

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -30,10 +30,11 @@ import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* A catalogue that provides binding information for Kafka Streams target types such as KStream.
* It also keeps a catalogue for the underlying {@link StreamsBuilderFactoryBean} and
* {@link StreamsConfig} associated with various {@link org.springframework.cloud.stream.annotation.StreamListener}
* methods in the {@link org.springframework.context.ApplicationContext}.
* A catalogue that provides binding information for Kafka Streams target types such as
* KStream. It also keeps a catalogue for the underlying {@link StreamsBuilderFactoryBean}
* and {@link StreamsConfig} associated with various
* {@link org.springframework.cloud.stream.annotation.StreamListener} methods in the
* {@link org.springframework.context.ApplicationContext}.
*
* @author Soby Chacko
*/
@@ -46,24 +47,22 @@ class KafkaStreamsBindingInformationCatalogue {
private final Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = new HashSet<>();
/**
* For a given bound {@link KStream}, retrieve its corresponding destination
* on the broker.
*
* @param bindingTarget KStream binding target
* For a given bound {@link KStream}, retrieve its corresponding destination on the
* broker.
* @param bindingTarget binding target for KStream
* @return destination topic on Kafka
*/
String getDestination(KStream<?,?> bindingTarget) {
String getDestination(KStream<?, ?> bindingTarget) {
BindingProperties bindingProperties = this.bindingProperties.get(bindingTarget);
return bindingProperties.getDestination();
}
/**
* Is native decoding enabled on this {@link KStream}.
*
* @param bindingTarget KStream binding target
* @param bindingTarget binding target for KStream
* @return true if native decoding is enabled, false otherwise.
*/
boolean isUseNativeDecoding(KStream<?,?> bindingTarget) {
boolean isUseNativeDecoding(KStream<?, ?> bindingTarget) {
BindingProperties bindingProperties = this.bindingProperties.get(bindingTarget);
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
@@ -72,56 +71,55 @@ class KafkaStreamsBindingInformationCatalogue {
}
/**
* Is DLQ enabled for this {@link KStream}
*
* @param bindingTarget KStream binding target
* Is DLQ enabled for this {@link KStream}.
* @param bindingTarget binding target for KStream
* @return true if DLQ is enabled, false otherwise.
*/
boolean isDlqEnabled(KStream<?,?> bindingTarget) {
return consumerProperties.get(bindingTarget).isEnableDlq();
boolean isDlqEnabled(KStream<?, ?> bindingTarget) {
return this.consumerProperties.get(bindingTarget).isEnableDlq();
}
/**
* Retrieve the content type associated with a given {@link KStream}
*
* @param bindingTarget KStream binding target
* Retrieve the content type associated with a given {@link KStream}.
* @param bindingTarget binding target for KStream
* @return the associated content type.
*/
String getContentType(KStream<?,?> bindingTarget) {
String getContentType(KStream<?, ?> bindingTarget) {
BindingProperties bindingProperties = this.bindingProperties.get(bindingTarget);
return bindingProperties.getContentType();
}
/**
* Register a cache for bound KStream -> {@link BindingProperties}
*
* @param bindingTarget KStream binding target
* Register a cache for bound KStream -> {@link BindingProperties}.
* @param bindingTarget binding target for KStream
* @param bindingProperties {@link BindingProperties} for this KStream
*/
void registerBindingProperties(KStream<?,?> bindingTarget, BindingProperties bindingProperties) {
void registerBindingProperties(KStream<?, ?> bindingTarget,
BindingProperties bindingProperties) {
this.bindingProperties.put(bindingTarget, bindingProperties);
}
/**
* Register a cache for bounded KStream -> {@link KafkaStreamsConsumerProperties}
*
* @param bindingTarget KStream binding target
* @param kafkaStreamsConsumerProperties Consumer properties for this KStream
* Register a cache for bound KStream -> {@link KafkaStreamsConsumerProperties}.
* @param bindingTarget binding target for KStream
* @param kafkaStreamsConsumerProperties consumer properties for this KStream
*/
void registerConsumerProperties(KStream<?,?> bindingTarget, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
void registerConsumerProperties(KStream<?, ?> bindingTarget,
KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
this.consumerProperties.put(bindingTarget, kafkaStreamsConsumerProperties);
}
/**
* Adds a mapping for KStream -> {@link StreamsBuilderFactoryBean}
*
* @param streamsBuilderFactoryBean provides the {@link StreamsBuilderFactoryBean} mapped to the KStream
* Adds a mapping for KStream -> {@link StreamsBuilderFactoryBean}.
* @param streamsBuilderFactoryBean provides the {@link StreamsBuilderFactoryBean}
* mapped to the KStream
*/
void addStreamBuilderFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
this.streamsBuilderFactoryBeans.add(streamsBuilderFactoryBean);
}
Set<StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeans() {
return streamsBuilderFactoryBeans;
return this.streamsBuilderFactoryBeans;
}
}
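The catalogue above is, in essence, an identity-keyed lookup table: the bound KStream proxies serve as map keys (KStream does not override equals, so lookups are by reference), and per-binding metadata is retrieved later by whatever component holds the same reference. A rough, self-contained sketch of that pattern (not the binder's actual class; names here are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the registry pattern: binding targets are map keys so that
    // per-binding metadata can be looked up later by reference.
    public class BindingCatalogueSketch {

        private final Map<Object, String> destinations = new HashMap<>();

        void registerDestination(Object bindingTarget, String destination) {
            this.destinations.put(bindingTarget, destination);
        }

        String getDestination(Object bindingTarget) {
            return this.destinations.get(bindingTarget);
        }

        public static void main(String[] args) {
            BindingCatalogueSketch catalogue = new BindingCatalogueSketch();
            Object boundStream = new Object(); // stands in for a bound KStream proxy
            catalogue.registerDestination(boundStream, "orders-topic");
            System.out.println(catalogue.getDestination(boundStream)); // prints: orders-topic
        }
    }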

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,11 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
/**
* @author Soby Chacko
* @author Rafal Zukowski
* @author Gary Russell
*/
import java.util.HashMap;
import java.util.Map;
@@ -42,19 +37,27 @@ import org.springframework.util.ObjectUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
/**
* Send records in error to a DLQ.
*
* @author Soby Chacko
* @author Rafal Zukowski
* @author Gary Russell
*/
class KafkaStreamsDlqDispatch {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaTemplate<byte[],byte[]> kafkaTemplate;
private final KafkaTemplate<byte[], byte[]> kafkaTemplate;
private final String dlqName;
KafkaStreamsDlqDispatch(String dlqName,
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaConsumerProperties kafkaConsumerProperties) {
ProducerFactory<byte[],byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(kafkaConsumerProperties.getDlqProducerProperties()),
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaConsumerProperties kafkaConsumerProperties) {
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
kafkaConsumerProperties.getDlqProducerProperties()),
kafkaBinderConfigurationProperties);
this.kafkaTemplate = new KafkaTemplate<>(producerFactory);
@@ -63,55 +66,58 @@ class KafkaStreamsDlqDispatch {
@SuppressWarnings("unchecked")
public void sendToDlq(byte[] key, byte[] value, int partition) {
ProducerRecord<byte[],byte[]> producerRecord = new ProducerRecord<>(this.dlqName, partition,
key, value, null);
ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(this.dlqName,
partition, key, value, null);
StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key))).append("'")
.append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(value)))
.append("'").append(" received from ")
.append(partition);
ListenableFuture<SendResult<byte[],byte[]>> sentDlq = null;
.append(toDisplayString(ObjectUtils.nullSafeToString(value))).append("'")
.append(" received from ").append(partittion);
ListenableFuture<SendResult<byte[], byte[]>> sentDlq = null;
try {
sentDlq = this.kafkaTemplate.send(producerRecord);
sentDlq.addCallback(new ListenableFutureCallback<SendResult<byte[],byte[]>>() {
sentDlq.addCallback(
new ListenableFutureCallback<SendResult<byte[], byte[]>>() {
@Override
public void onFailure(Throwable ex) {
KafkaStreamsDlqDispatch.this.logger.error(
"Error sending to DLQ " + sb.toString(), ex);
}
@Override
public void onFailure(Throwable ex) {
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}
@Override
public void onSuccess(SendResult<byte[],byte[]> result) {
if (KafkaStreamsDlqDispatch.this.logger.isDebugEnabled()) {
KafkaStreamsDlqDispatch.this.logger.debug(
"Sent to DLQ " + sb.toString());
}
}
});
@Override
public void onSuccess(SendResult<byte[], byte[]> result) {
if (KafkaStreamsDlqDispatch.this.logger.isDebugEnabled()) {
KafkaStreamsDlqDispatch.this.logger
.debug("Sent to DLQ " + sb.toString());
}
}
});
}
catch (Exception ex) {
if (sentDlq == null) {
KafkaStreamsDlqDispatch.this.logger.error(
"Error sending to DLQ " + sb.toString(), ex);
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}
}
}
private DefaultKafkaProducerFactory<byte[],byte[]> getProducerFactory(ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
private DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties.mergedProducerConfiguration();
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configurationProperties.getKafkaConnectionString());
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
@@ -128,9 +134,10 @@ class KafkaStreamsDlqDispatch {
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
//Always send as byte[] on dlq (the same byte[] that the consumer received)
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
@@ -141,4 +148,5 @@ class KafkaStreamsDlqDispatch {
}
return original.substring(0, 50) + "...";
}
}
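For orientation, here is a minimal, self-contained sketch of the kind of byte[]-only producer template the dispatcher assembles, assuming spring-kafka and kafka-clients on the classpath. It deliberately omits the binder's merged configuration and the batch-size/buffer-memory defaults shown above:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.ByteArraySerializer;

    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;

    public final class DlqTemplateSketch {

        private DlqTemplateSketch() {
        }

        static KafkaTemplate<byte[], byte[]> dlqTemplate(String bootstrapServers) {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ProducerConfig.RETRIES_CONFIG, 0);
            // The DLQ receives the exact byte[] key/value the consumer saw,
            // hence byte[] serializers on both key and value.
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
            return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
        }
    }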

View File

@@ -0,0 +1,506 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.cloud.function.core.FluxedConsumer;
import org.springframework.cloud.function.core.FluxedFunction;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionProcessor.class);
private final BindingServiceProperties bindingServiceProperties;
private final Map<String, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final CleanupConfig cleanupConfig;
private final FunctionCatalog functionCatalog;
private final BindableProxyFactory bindableProxyFactory;
private ConfigurableApplicationContext applicationContext;
private Set<String> origInputs = new TreeSet<>();
private Set<String> origOutputs = new TreeSet<>();
public KafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
CleanupConfig cleanupConfig,
FunctionCatalog functionCatalog,
BindableProxyFactory bindableProxyFactory) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.cleanupConfig = cleanupConfig;
this.functionCatalog = functionCatalog;
this.bindableProxyFactory = bindableProxyFactory;
this.origInputs.addAll(this.bindableProxyFactory.getInputs());
this.origOutputs.addAll(this.bindableProxyFactory.getOutputs());
}
private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType) {
int inputCount = 1;
ResolvableType resolvableTypeGeneric = resolvableType.getGeneric(1);
while (resolvableTypeGeneric != null && resolvableTypeGeneric.getRawClass() != null && (resolvableTypeGeneric.getRawClass().equals(Function.class) ||
resolvableTypeGeneric.getRawClass().equals(Consumer.class))) {
inputCount++;
resolvableTypeGeneric = resolvableTypeGeneric.getGeneric(1);
}
final Set<String> inputs = new TreeSet<>(origInputs);
Map<String, ResolvableType> resolvableTypeMap = new LinkedHashMap<>();
final Iterator<String> iterator = inputs.iterator();
final String next = iterator.next();
resolvableTypeMap.put(next, resolvableType.getGeneric(0));
origInputs.remove(next);
for (int i = 1; i < inputCount; i++) {
if (iterator.hasNext()) {
ResolvableType generic = resolvableType.getGeneric(1);
if (generic.getRawClass() != null &&
(generic.getRawClass().equals(Function.class) ||
generic.getRawClass().equals(Consumer.class))) {
final String next1 = iterator.next();
resolvableTypeMap.put(next1, generic.getGeneric(0));
origInputs.remove(next1);
}
}
}
return resolvableTypeMap;
}
@SuppressWarnings({ "unchecked", "rawtypes" })
public void orchestrateFunctionInvoking(ResolvableType resolvableType, String functionName) {
final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType);
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(stringResolvableTypeMap, functionName);
try {
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(Consumer.class)) {
FluxedConsumer fluxedConsumer = functionCatalog.lookup(FluxedConsumer.class, functionName);
Assert.isTrue(fluxedConsumer != null,
"No corresponding consumer beans found in the catalog");
Object target = fluxedConsumer.getTarget();
Consumer<Object> consumer = Consumer.class.isAssignableFrom(target.getClass()) ? (Consumer) target : null;
if (consumer != null) {
consumer.accept(adaptedInboundArguments[0]);
}
}
else {
Function<Object, Object> function = functionCatalog.lookup(Function.class, functionName);
// Unwrap the actual target when the catalog returns a fluxed wrapper;
// otherwise keep the looked-up function rather than overwriting it with null.
if (function instanceof FluxedFunction) {
function = (Function) ((FluxedFunction) function).getTarget();
}
Object result = function.apply(adaptedInboundArguments[0]);
int i = 1;
while (result instanceof Function || result instanceof Consumer) {
if (result instanceof Function) {
result = ((Function) result).apply(adaptedInboundArguments[i]);
}
else {
((Consumer) result).accept(adaptedInboundArguments[i]);
result = null;
}
i++;
}
if (result != null) {
final Set<String> outputs = new TreeSet<>(origOutputs);
final Iterator<String> iterator = outputs.iterator();
if (result.getClass().isArray()) {
final int length = ((Object[]) result).length;
String[] methodAnnotatedOutboundNames = new String[length];
for (int j = 0; j < length; j++) {
if (iterator.hasNext()) {
final String next = iterator.next();
methodAnnotatedOutboundNames[j] = next;
this.origOutputs.remove(next);
}
}
Object[] outboundKStreams = (Object[]) result;
int k = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[k++]);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) outboundKStream);
}
}
else {
if (iterator.hasNext()) {
final String next = iterator.next();
Object targetBean = this.applicationContext.getBean(next);
this.origOutputs.remove(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) result);
}
}
}
}
}
catch (Exception ex) {
throw new BeanInitializationException("Cannot setup StreamListener for foobar", ex);
}
}
@SuppressWarnings({"unchecked"})
private Object[] adaptAndRetrieveInboundArguments(Map<String, ResolvableType> stringResolvableTypeMap,
String functionName) {
Object[] arguments = new Object[stringResolvableTypeMap.size()];
int i = 0;
for (String input : stringResolvableTypeMap.keySet()) {
Class<?> parameterType = stringResolvableTypeMap.get(input).getRawClass();
if (input != null) {
Assert.isInstanceOf(String.class, input, "Annotation value must be a String");
Object targetBean = applicationContext.getBean(input);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(input);
enableNativeDecodingForKTableAlways(parameterType, bindingProperties);
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(functionName)) {
buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext, input);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean =
this.methodStreamsBuilderFactoryBeanMap.get(functionName);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
KafkaStreamsConsumerProperties extendedConsumerProperties =
this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(input);
//get state store spec
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties);
Serde<?> valueSerde = this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), extendedConsumerProperties);
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
LOG.warn("Detected resetOffsets configured on binding " + input + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(input, bindingProperties,
streamsBuilder, keySerde, valueSerde, autoOffsetReset);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper =
(KStreamBoundElementFactory.KStreamWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
final Class<?> valueClass =
(stringResolvableTypeMap.get(input).getGeneric(1).getRawClass() != null)
? (stringResolvableTypeMap.get(input).getGeneric(1).getRawClass()) : Object.class;
if (this.kafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(
(KStream) kStreamWrapper)) {
arguments[i] = stream;
}
else {
arguments[i] = this.kafkaStreamsMessageConversionDelegate.deserializeOnInbound(
valueClass, stream);
}
}
if (arguments[i] == null) {
arguments[i] = stream;
}
Assert.notNull(arguments[i], "problems..");
}
else if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper =
(KTableBoundElementFactory.KTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[i] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[i] = table;
}
i++;
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
}
else {
throw new IllegalStateException(StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;
}
private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAsGlobalKTable(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset) :
streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde,
String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAs(streamsBuilder, bindingDestination, materializedAs, keySerde, valueSerde,
autoOffsetReset) :
streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination,
String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(StreamsBuilder streamsBuilder,
String destination, String storeName,
Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(String storeName,
Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k)
.withValueSerde(v);
}
private KStream<?, ?> getkStream(String inboundName,
BindingProperties bindingProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Pass through.
}
String[] bindingTargets = StringUtils
.commaDelimitedListToStringArray(this.bindingServiceProperties.getBindingDestination(inboundName));
KStream<?, ?> stream =
streamsBuilder.stream(Arrays.asList(bindingTargets),
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
final boolean nativeDecoding = this.bindingServiceProperties.getConsumerProperties(inboundName)
.isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName + ". " +
"Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName + ". " +
"Inbound message conversion done by Spring Cloud Stream.");
}
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType, BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class) || parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
//No framework level message conversion provided for KTable/GlobalKTable, its done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
@SuppressWarnings({"unchecked"})
private void buildStreamsBuilderAndRetrieveConfig(String functionName, ApplicationContext applicationContext,
String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext.getBean("streamConfigGlobalProperties",
Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
//override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName).getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext.getBean(
"kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration =
new KafkaStreamsConfiguration(streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS, kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration, this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition =
BeanDefinitionBuilder.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(), () -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("stream-builder-" +
functionName, streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean("&stream-builder-" +
functionName, StreamsBuilderFactoryBean.class);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderX);
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
}
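The new file above wires plain Function/Consumer beans to Kafka Streams bindings. As a hedged illustration of the programming model it enables (binding names, topic mapping, and the store name are hypothetical; curried Function<KStream, Function<...>> shapes for multiple inputs are also handled by buildTypeMap(..) above):

    import java.util.function.Function;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class UppercaseFunctionSketch {

        // Single-input, single-output function: the processor binds the inbound
        // KStream to the first input binding and wraps the returned KStream
        // into the first output binding.
        @Bean
        public Function<KStream<String, String>, KStream<String, String>> process() {
            return input -> input.mapValues(value -> value.toUpperCase());
        }

        // Any StoreBuilder bean is discovered in getkStream(..) above and added
        // to the topology before the inbound stream is created.
        @Bean
        public StoreBuilder<?> wordStore() {
            return Stores.keyValueStoreBuilder(
                    Stores.persistentKeyValueStore("word-store"),
                    Serdes.String(), Serdes.String());
        }
    }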

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,10 +19,12 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
@@ -38,17 +40,18 @@ import org.springframework.util.Assert;
import org.springframework.util.StringUtils;
/**
* Delegate for handling all framework level message conversions inbound and outbound on {@link KStream}.
* If native encoding is not enabled, then serialization will be performed on outbound messages based
* on a contentType. Similarly, if native decoding is not enabled, deserialization will be performed on
* inbound messages based on a contentType. Based on the contentType, a {@link MessageConverter} will
* be resolved.
* Delegate for handling all framework level message conversions inbound and outbound on
* {@link KStream}. If native encoding is not enabled, then serialization will be
* performed on outbound messages based on a contentType. Similarly, if native decoding is
* not enabled, deserialization will be performed on inbound messages based on a
* contentType. Based on the contentType, a {@link MessageConverter} will be resolved.
*
* @author Soby Chacko
*/
public class KafkaStreamsMessageConversionDelegate {
private final static Log LOG = LogFactory.getLog(KafkaStreamsMessageConversionDelegate.class);
private static final Log LOG = LogFactory
.getLog(KafkaStreamsMessageConversionDelegate.class);
private static final ThreadLocal<KeyValue<Object, Object>> keyValueThreadLocal = new ThreadLocal<>();
@@ -60,10 +63,11 @@ public class KafkaStreamsMessageConversionDelegate {
private final KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties;
KafkaStreamsMessageConversionDelegate(CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
KafkaStreamsMessageConversionDelegate(
CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
this.compositeMessageConverterFactory = compositeMessageConverterFactory;
this.sendToDlqAndContinue = sendToDlqAndContinue;
this.kstreamBindingInformationCatalogue = kstreamBindingInformationCatalogue;
@@ -72,85 +76,141 @@ public class KafkaStreamsMessageConversionDelegate {
/**
* Serialize {@link KStream} records on outbound based on contentType.
*
* @param outboundBindTarget outbound KStream target
* @return serialized KStream
*/
@SuppressWarnings("rawtypes")
public KStream serializeOnOutbound(KStream<?,?> outboundBindTarget) {
String contentType = this.kstreamBindingInformationCatalogue.getContentType(outboundBindTarget);
MessageConverter messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
@SuppressWarnings({"rawtypes", "unchecked"})
public KStream serializeOnOutbound(KStream<?, ?> outboundBindTarget) {
String contentType = this.kstreamBindingInformationCatalogue
.getContentType(outboundBindTarget);
MessageConverter messageConverter = this.compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
return outboundBindTarget.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v :
MessageBuilder.withPayload(v).build();
final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
return
messageConverter.toMessage(message.getPayload(),
messageHeaders).getPayload();
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});
kStreamWithEnrichedHeaders.process(() -> new Processor() {
ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public void process(Object key, Object value) {
if (perRecordContentTypeHolder.contentType != null) {
this.context.headers().remove(MessageHeaders.CONTENT_TYPE);
final Header header;
try {
header = new RecordHeader(MessageHeaders.CONTENT_TYPE,
new ObjectMapper().writeValueAsBytes(perRecordContentTypeHolder.contentType));
this.context.headers().add(header);
}
catch (Exception e) {
if (LOG.isDebugEnabled()) {
LOG.debug("Could not add content type header");
}
}
perRecordContentTypeHolder.unsetContentType();
}
}
@Override
public void close() {
}
});
return kStreamWithEnrichedHeaders;
}
/**
* Deserialize incoming {@link KStream} based on content type.
*
* @param valueClass on KStream value
* @param bindingTarget inbound KStream target
* @return deserialized KStream
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
public KStream deserializeOnInbound(Class<?> valueClass, KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
public KStream deserializeOnInbound(Class<?> valueClass,
KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = this.compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
resolvePerRecordContentType(bindingTarget, perRecordContentTypeHolder);
//Deserialize using a branching strategy
// Deserialize using a branching strategy
KStream<?, ?>[] branch = bindingTarget.branch(
//First filter where the message is converted and return true if everything went well, return false otherwise.
(o, o2) -> {
boolean isValidRecord = false;
// First filter where the message is converted and return true if
// everything went well, return false otherwise.
(o, o2) -> {
boolean isValidRecord = false;
try {
//if the record is a tombstone, ignore and exit from processing further.
if (o2 != null) {
if (o2 instanceof Message || o2 instanceof String || o2 instanceof byte[]) {
Message<?> m1 = null;
if (o2 instanceof Message) {
m1 = perRecordContentTypeHolder.contentType != null
? MessageBuilder.fromMessage((Message<?>) o2).setHeader(MessageHeaders.CONTENT_TYPE, perRecordContentTypeHolder.contentType).build() : (Message<?>)o2;
try {
// if the record is a tombstone, ignore and exit from processing
// further.
if (o2 != null) {
if (o2 instanceof Message || o2 instanceof String
|| o2 instanceof byte[]) {
Message<?> m1 = null;
if (o2 instanceof Message) {
m1 = perRecordContentTypeHolder.contentType != null
? MessageBuilder.fromMessage((Message<?>) o2)
.setHeader(
MessageHeaders.CONTENT_TYPE,
perRecordContentTypeHolder.contentType)
.build()
: (Message<?>) o2;
}
else {
m1 = perRecordContentTypeHolder.contentType != null
? MessageBuilder.withPayload(o2).setHeader(
MessageHeaders.CONTENT_TYPE,
perRecordContentTypeHolder.contentType)
.build()
: MessageBuilder.withPayload(o2).build();
}
convertAndSetMessage(o, valueClass, messageConverter, m1);
}
else {
m1 = perRecordContentTypeHolder.contentType != null ? MessageBuilder.withPayload(o2)
.setHeader(MessageHeaders.CONTENT_TYPE, perRecordContentTypeHolder.contentType).build() : MessageBuilder.withPayload(o2).build();
keyValueThreadLocal.set(new KeyValue<>(o, o2));
}
convertAndSetMessage(o, valueClass, messageConverter, m1);
isValidRecord = true;
}
else {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
LOG.info(
"Received a tombstone record. This will be skipped from further processing.");
}
isValidRecord = true;
}
else {
LOG.info("Received a tombstone record. This will be skipped from further processing.");
catch (Exception e) {
LOG.warn(
"Deserialization has failed. This will be skipped from further processing.",
e);
// pass through
}
}
catch (Exception ignored) {
//pass through
}
return isValidRecord;
},
//second filter that catches any messages for which an exception thrown in the first filter above.
(k, v) -> true
);
//process errors from the second filter in the branch above.
return isValidRecord;
},
// second filter that catches any messages for which an exception thrown
// in the first filter above.
(k, v) -> true);
// process errors from the second filter in the branch above.
processErrorFromDeserialization(bindingTarget, branch[1]);
//first branch above is the branch where the messages are converted, let it go through further processing.
// first branch above is the branch where the messages are converted, let it go
// through further processing.
return branch[0].mapValues((o2) -> {
Object objectValue = keyValueThreadLocal.get().value;
keyValueThreadLocal.remove();
@@ -158,17 +218,9 @@ public class KafkaStreamsMessageConversionDelegate {
});
}
private static class PerRecordContentTypeHolder {
String contentType;
void setContentType(String contentType) {
this.contentType = contentType;
}
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private void resolvePerRecordContentType(KStream<?, ?> outboundBindTarget, PerRecordContentTypeHolder perRecordContentTypeHolder) {
private void resolvePerRecordContentType(KStream<?, ?> outboundBindTarget,
PerRecordContentTypeHolder perRecordContentTypeHolder) {
outboundBindTarget.process(() -> new Processor() {
ProcessorContext context;
@@ -180,12 +232,15 @@ public class KafkaStreamsMessageConversionDelegate {
@Override
public void process(Object key, Object value) {
final Headers headers = context.headers();
final Iterable<Header> contentTypes = headers.headers(MessageHeaders.CONTENT_TYPE);
final Headers headers = this.context.headers();
final Iterable<Header> contentTypes = headers
.headers(MessageHeaders.CONTENT_TYPE);
if (contentTypes != null && contentTypes.iterator().hasNext()) {
final String contentType = new String(contentTypes.iterator().next().value());
//remove leading and trailing quotes
final String cleanContentType = StringUtils.replace(contentType, "\"", "");
final String contentType = new String(
contentTypes.iterator().next().value());
// remove leading and trailing quotes
final String cleanContentType = StringUtils.replace(contentType, "\"",
"");
perRecordContentTypeHolder.setContentType(cleanContentType);
}
}
@@ -197,7 +252,8 @@ public class KafkaStreamsMessageConversionDelegate {
});
}
private void convertAndSetMessage(Object o, Class<?> valueClass, MessageConverter messageConverter, Message<?> msg) {
private void convertAndSetMessage(Object o, Class<?> valueClass,
MessageConverter messageConverter, Message<?> msg) {
Object result = valueClass.isAssignableFrom(msg.getPayload().getClass())
? msg.getPayload() : messageConverter.fromMessage(msg, valueClass);
@@ -207,7 +263,8 @@ public class KafkaStreamsMessageConversionDelegate {
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget, KStream<?, ?> branch) {
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget,
KStream<?, ?> branch) {
branch.process(() -> new Processor() {
ProcessorContext context;
@@ -218,23 +275,35 @@ public class KafkaStreamsMessageConversionDelegate {
@Override
public void process(Object o, Object o2) {
//Only continue if the record was not a tombstone.
// Only continue if the record was not a tombstone.
if (o2 != null) {
if (kstreamBindingInformationCatalogue.isDlqEnabled(bindingTarget)) {
String destination = context.topic();
if (KafkaStreamsMessageConversionDelegate.this.kstreamBindingInformationCatalogue
.isDlqEnabled(bindingTarget)) {
String destination = this.context.topic();
if (o2 instanceof Message) {
Message message = (Message) o2;
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) message.getPayload(), context.partition());
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(destination, (byte[]) o,
(byte[]) message.getPayload(),
this.context.partition());
}
else {
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) o2, context.partition());
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(destination, (byte[]) o, (byte[]) o2,
this.context.partition());
}
}
else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
throw new IllegalStateException("Inbound deserialization failed.");
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
throw new IllegalStateException("Inbound deserialization failed. "
+ "Stopping further processing of records.");
}
else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
//quietly pass through. No action needed, this is similar to log and continue.
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
// quietly passing through. No action needed, this is similar to
// log and continue.
LOG.error(
"Inbound deserialization failed. Skipping this record and continuing.");
}
}
}
@@ -245,4 +314,19 @@ public class KafkaStreamsMessageConversionDelegate {
}
});
}
private static class PerRecordContentTypeHolder {
String contentType;
void setContentType(String contentType) {
this.contentType = contentType;
}
void unsetContentType() {
this.contentType = null;
}
}
}
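One subtlety worth spelling out: because the outbound processor writes the content type with Jackson, the raw header bytes are a JSON string and therefore carry literal quotes, which the inbound resolvePerRecordContentType(..) strips back off. A tiny self-contained sketch of that round trip:

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class ContentTypeHeaderSketch {

        public static void main(String[] args) throws Exception {
            // Outbound: the delegate serializes the content type with Jackson,
            // so the header bytes are a JSON string, quotes included.
            byte[] headerValue = new ObjectMapper().writeValueAsBytes("application/json");
            String raw = new String(headerValue);
            System.out.println(raw); // "application/json"  (with literal quotes)

            // Inbound: the per-record resolver strips the quotes back off.
            System.out.println(raw.replace("\"", "")); // application/json
        }
    }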

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -32,15 +32,15 @@ class KafkaStreamsRegistry {
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();
Set<KafkaStreams> getKafkaStreams() {
return kafkaStreams;
return this.kafkaStreams;
}
/**
* Register the {@link KafkaStreams} object created in the application.
*
* @param kafkaStreams {@link KafkaStreams} object created in the application
*/
void registerKafkaStreams(KafkaStreams kafkaStreams) {
this.kafkaStreams.add(kafkaStreams);
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -17,9 +17,11 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
@@ -78,20 +80,23 @@ import org.springframework.util.StringUtils;
/**
* Kafka Streams specific implementation for {@link StreamListenerSetupMethodOrchestrator}
* that overrides the default mechanisms for invoking StreamListener adapters.
*
* <p>
* The orchestration primarily focuses on the following areas:
*
* 1. Allow multiple KStream output bindings (KStream branching) by allowing more than one output value on {@link SendTo}
* 2. Allow multiple inbound bindings for multiple KStream and/or KTable/GlobalKTable types.
* 3. Each StreamListener method that it orchestrates gets its own {@link StreamsBuilderFactoryBean} and {@link StreamsConfig}
* <p>
* 1. Allow multiple KStream output bindings (KStream branching) by allowing more than
* one output value on {@link SendTo} (see the listener sketch below, after this
* file's diff).
* 2. Allow multiple inbound bindings for multiple KStream and/or KTable/GlobalKTable
* types.
* 3. Each StreamListener method that it orchestrates gets its own
* {@link StreamsBuilderFactoryBean} and {@link StreamsConfig}.
*
* @author Soby Chacko
* @author Lei Chen
* @author Gary Russell
*/
class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListenerSetupMethodOrchestrator, ApplicationContextAware {
class KafkaStreamsStreamListenerSetupMethodOrchestrator
implements StreamListenerSetupMethodOrchestrator, ApplicationContextAware {
private final static Log LOG = LogFactory.getLog(KafkaStreamsStreamListenerSetupMethodOrchestrator.class);
private static final Log LOG = LogFactory
.getLog(KafkaStreamsStreamListenerSetupMethodOrchestrator.class);
private final StreamListenerParameterAdapter streamListenerParameterAdapter;
@@ -107,36 +112,39 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
private final Map<Method, List<String>> registeredStoresPerMethod = new HashMap<>();
private final CleanupConfig cleanupConfig;
private ConfigurableApplicationContext applicationContext;
KafkaStreamsStreamListenerSetupMethodOrchestrator(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
CleanupConfig cleanupConfig) {
KafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties extendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue bindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> listenerResultAdapters,
CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.kafkaStreamsExtendedBindingProperties = extendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsBindingInformationCatalogue = bindingInformationCatalogue;
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = streamListenerResultAdapters;
this.streamListenerResultAdapters = listenerResultAdapters;
this.cleanupConfig = cleanupConfig;
}
@Override
public boolean supports(Method method) {
return methodParameterSupports(method) &&
(methodReturnTypeSupports(method) || Void.TYPE.equals(method.getReturnType()));
return methodParameterSupports(method) && (methodReturnTypeSuppports(method)
|| Void.TYPE.equals(method.getReturnType()));
}
private boolean methodReturnTypeSupports(Method method) {
Class<?> returnType = method.getReturnType();
if (returnType.equals(KStream.class) ||
(returnType.isArray() && returnType.getComponentType().equals(KStream.class))) {
if (returnType.equals(KStream.class) || (returnType.isArray()
&& returnType.getComponentType().equals(KStream.class))) {
return true;
}
return false;
@@ -157,12 +165,14 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
@Override
@SuppressWarnings({"rawtypes", "unchecked"})
public void orchestrateStreamListenerSetupMethod(StreamListener streamListener, Method method, Object bean) {
public void orchestrateStreamListenerSetupMethod(StreamListener streamListener,
Method method, Object bean) {
String[] methodAnnotatedOutboundNames = getOutboundBindingTargetNames(method);
validateStreamListenerMethod(streamListener, method, methodAnnotatedOutboundNames);
validateStreamListenerMethod(streamListener, method,
methodAnnotatedOutboundNames);
String methodAnnotatedInboundName = streamListener.value();
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(method, methodAnnotatedInboundName,
this.applicationContext,
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(method,
methodAnnotatedInboundName, this.applicationContext,
this.streamListenerParameterAdapter);
try {
ReflectionUtils.makeAccessible(method);
@@ -173,9 +183,11 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
Object result = method.invoke(bean, adaptedInboundArguments);
if (result.getClass().isArray()) {
Assert.isTrue(methodAnnotatedOutboundNames.length == ((Object[]) result).length,
Assert.isTrue(
methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
} else {
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
@@ -183,18 +195,24 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
Object[] outboundKStreams = (Object[]) result;
int i = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[i++]);
for (StreamListenerResultAdapter streamListenerResultAdapter : streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream, targetBean);
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[i++]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(
outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream,
targetBean);
break;
}
}
}
} else {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[0]);
for (StreamListenerResultAdapter streamListenerResultAdapter : streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(result.getClass(), targetBean.getClass())) {
}
else {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[0]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(result.getClass(),
targetBean.getClass())) {
streamListenerResultAdapter.adapt(result, targetBean);
break;
}
@@ -202,8 +220,9 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
}
}
}
catch (Exception e) {
throw new BeanInitializationException("Cannot setup StreamListener for " + method, e);
catch (Exception ex) {
throw new BeanInitializationException(
"Cannot setup StreamListener for " + method, ex);
}
}
@@ -211,163 +230,220 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
@SuppressWarnings({"unchecked"})
public Object[] adaptAndRetrieveInboundArguments(Method method, String inboundName,
ApplicationContext applicationContext,
StreamListenerParameterAdapter... streamListenerParameterAdapters) {
StreamListenerParameterAdapter... adapters) {
Object[] arguments = new Object[method.getParameterTypes().length];
for (int parameterIndex = 0; parameterIndex < arguments.length; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method, parameterIndex);
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Class<?> parameterType = methodParameter.getParameterType();
Object targetReferenceValue = null;
if (methodParameter.hasParameterAnnotation(Input.class)) {
targetReferenceValue = AnnotationUtils.getValue(methodParameter.getParameterAnnotation(Input.class));
Input methodAnnotation = methodParameter.getParameterAnnotation(Input.class);
targetReferenceValue = AnnotationUtils
.getValue(methodParameter.getParameterAnnotation(Input.class));
Input methodAnnotation = methodParameter
.getParameterAnnotation(Input.class);
inboundName = methodAnnotation.value();
}
else if (arguments.length == 1 && StringUtils.hasText(inboundName)) {
targetReferenceValue = inboundName;
}
if (targetReferenceValue != null) {
Assert.isInstanceOf(String.class, targetReferenceValue, "Annotation value must be a String");
Object targetBean = applicationContext.getBean((String) targetReferenceValue);
BindingProperties bindingProperties = bindingServiceProperties.getBindingProperties(inboundName);
Assert.isInstanceOf(String.class, targetReferenceValue,
"Annotation value must be a String");
Object targetBean = applicationContext
.getBean((String) targetReferenceValue);
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
enableNativeDecodingForKTableAlways(parameterType, bindingProperties);
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
buildStreamsBuilderAndRetrieveConfig(method, applicationContext, inboundName);
// Retrieve the StreamsConfig created for this method if available.
// Otherwise, create the StreamsBuilderFactory and get the underlying
// config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
buildStreamsBuilderAndRetrieveConfig(method, applicationContext,
inboundName);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = methodStreamsBuilderFactoryBeanMap.get(method);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap
.get(method);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
KafkaStreamsConsumerProperties extendedConsumerProperties = kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(inboundName);
//get state store spec
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
// get state store spec
KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties);
Serde<?> valueSerde = this.keyValueSerdeResolver.getInboundValueSerde(bindingProperties.getConsumer(), extendedConsumerProperties);
Serde<?> keySerde = this.keyValueSerdeResolver
.getInboundKeySerde(extendedConsumerProperties);
Serde<?> valueSerde = this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), extendedConsumerProperties);
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties.getStartOffset();
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties
.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
LOG.warn("Detected resetOffsets configured on binding "
+ inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(inboundName, spec,
bindingProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {
arguments[parameterIndex] = streamListenerParameterAdapter
.adapt(kStreamWrapper, methodParameter);
break;
}
}
if (arguments[parameterIndex] == null
&& parameterType.isAssignableFrom(stream.getClass())) {
arguments[parameterIndex] = stream;
}
Assert.notNull(arguments[parameterIndex],
"Cannot convert argument " + parameterIndex + " of "
+ method + " from " + stream.getClass() + " to "
+ parameterType);
}
else if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties
.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties
.getBindingDestination(inboundName);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde,
valueSerde, materializedAs, bindingDestination,
autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper = (KTableBoundElementFactory.KTableWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties
.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties
.getBindingDestination(inboundName);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder,
keySerde, valueSerde, materializedAs, bindingDestination,
autoOffsetReset);
// @checkstyle:off
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper = (GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
// @checkstyle:on
// wrap the proxy created during the initial target type binding
// with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
}
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
}
else {
throw new IllegalStateException(
StreamListenerErrorMessages.INVALID_DECLARATIVE_METHOD_PARAMETERS);
}
}
return arguments;
}
private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder,
String destination, String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
private StoreBuilder buildStateStore(KafkaStreamsStateStoreProperties spec) {
try {
Serde<?> keySerde = this.keyValueSerdeResolver
.getStateStoreKeySerde(spec.getKeySerdeString());
Serde<?> valueSerde = this.keyValueSerdeResolver
.getStateStoreValueSerde(spec.getValueSerdeString());
StoreBuilder builder;
switch (spec.getType()) {
case KEYVALUE:
builder = Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore(spec.getName()), keySerde,
valueSerde);
break;
case WINDOW:
builder = Stores
.windowStoreBuilder(
Stores.persistentWindowStore(spec.getName(),
spec.getRetention(), 3, spec.getLength(), false),
keySerde, valueSerde);
break;
case SESSION:
builder = Stores.sessionStoreBuilder(Stores.persistentSessionStore(
spec.getName(), spec.getRetention()), keySerde, valueSerde);
break;
default:
throw new UnsupportedOperationException(
"state store type (" + spec.getType() + ") is not supported!");
}
if (spec.isCacheEnabled()) {
builder = builder.withCachingEnabled();
@@ -375,19 +451,19 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
if (spec.isLoggingDisabled()) {
builder = builder.withLoggingDisabled();
}
return builder;
}
catch (Exception ex) {
LOG.error("failed to build state store exception : " + ex);
throw ex;
}
}
private KStream<?, ?> getkStream(String inboundName,
KafkaStreamsStateStoreProperties storeSpec,
BindingProperties bindingProperties, StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde,
Topology.AutoOffsetReset autoOffsetReset) {
if (storeSpec != null) {
StoreBuilder storeBuilder = buildStateStore(storeSpec);
streamsBuilder.addStateStore(storeBuilder);
@@ -395,22 +471,24 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
}
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
this.bindingServiceProperties.getBindingDestination(inboundName));
KStream<?, ?> stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
final boolean nativeDecoding = this.bindingServiceProperties
.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName
+ ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
@@ -425,72 +503,94 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
return stream;
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType,
BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class)
|| parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
// No framework-level message conversion is provided for KTable/GlobalKTable;
// it is done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
@SuppressWarnings({"unchecked"})
private void buildStreamsBuilderAndRetrieveConfig(Method method,
ApplicationContext applicationContext, String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties
.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
applicationId);
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext
.getBean("kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(
streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS,
kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(),
() -> streamsBuilder)
.getRawBeanDefinition();
final String beanNamePostFix = method.getDeclaringClass().getSimpleName() + "-" + method.getName();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + beanNamePostFix, streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean(
"&stream-builder-" + beanNamePostFix, StreamsBuilderFactoryBean.class);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderX);
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
private void validateStreamListenerMethod(StreamListener streamListener,
Method method, String[] methodAnnotatedOutboundNames) {
String methodAnnotatedInboundName = streamListener.value();
if (methodAnnotatedOutboundNames != null) {
for (String s : methodAnnotatedOutboundNames) {
if (StringUtils.hasText(s)) {
Assert.isTrue(isDeclarativeOutput(method, s),
"Method must be declarative");
}
}
}
@@ -498,8 +598,11 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
int methodArgumentsLength = method.getParameterTypes().length;
for (int parameterIndex = 0; parameterIndex < methodArgumentsLength; parameterIndex++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method,
parameterIndex);
Assert.isTrue(
isDeclarativeInput(methodAnnotatedInboundName, methodParameter),
"Method must be declarative");
}
}
}
@@ -508,23 +611,38 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
private boolean isDeclarativeOutput(Method m, String targetBeanName) {
boolean declarative;
Class<?> returnType = m.getReturnType();
if (returnType.isArray()) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
declarative = this.streamListenerResultAdapters.stream()
.anyMatch((slpa) -> slpa.supports(returnType.getComponentType(),
targetBeanClass));
return declarative;
}
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
declarative = this.streamListenerResultAdapters.stream()
.anyMatch((slpa) -> slpa.supports(returnType, targetBeanClass));
return declarative;
}
@SuppressWarnings("unchecked")
private boolean isDeclarativeInput(String targetBeanName,
MethodParameter methodParameter) {
if (!methodParameter.getParameterType().isAssignableFrom(Object.class)
&& this.applicationContext.containsBean(targetBeanName)) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
if (targetBeanClass != null) {
boolean supports = KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
if (!supports) {
supports = KTable.class.isAssignableFrom(targetBeanClass)
&& KTable.class.isAssignableFrom(methodParameter.getParameterType());
if (!supports) {
supports = GlobalKTable.class.isAssignableFrom(targetBeanClass)
&& GlobalKTable.class.isAssignableFrom(methodParameter.getParameterType());
}
}
return supports;
}
}
return false;
}
@@ -532,30 +650,38 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
private static String[] getOutboundBindingTargetNames(Method method) {
SendTo sendTo = AnnotationUtils.findAnnotation(method, SendTo.class);
if (sendTo != null) {
Assert.isTrue(!ObjectUtils.isEmpty(sendTo.value()),
StreamListenerErrorMessages.ATLEAST_ONE_OUTPUT);
Assert.isTrue(sendTo.value().length >= 1,
"At least one outbound destination needs to be provided.");
return sendTo.value();
}
return null;
}
@SuppressWarnings({"unchecked"})
private KafkaStreamsStateStoreProperties buildStateStoreSpec(Method method) {
if (!this.registeredStoresPerMethod.containsKey(method)) {
KafkaStreamsStateStore spec = AnnotationUtils.findAnnotation(method,
KafkaStreamsStateStore.class);
if (spec != null) {
Assert.isTrue(!ObjectUtils.isEmpty(spec.name()), "name cannot be empty");
Assert.isTrue(spec.name().length() >= 1, "name cannot be empty.");
this.registeredStoresPerMethod.put(method, new ArrayList<>());
this.registeredStoresPerMethod.get(method).add(spec.name());
KafkaStreamsStateStoreProperties props = new KafkaStreamsStateStoreProperties();
props.setName(spec.name());
props.setType(spec.type());
props.setLength(spec.lengthMs());
props.setKeySerdeString(spec.keySerde());
props.setRetention(spec.retentionMs());
props.setValueSerdeString(spec.valueSerde());
props.setCacheEnabled(spec.cache());
props.setLoggingDisabled(!spec.logging());
return props;
}
}
return null;
}
}
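For context, this is the style of declarative StreamListener method whose arguments the orchestrator above resolves. A minimal sketch, assuming the KafkaStreamsProcessor bindable interface and illustrative binding/store names:

import java.util.Arrays;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.messaging.handler.annotation.SendTo;

@SpringBootApplication
@EnableBinding(KafkaStreamsProcessor.class)
public class WordCountProcessorApplication {

    // The binder wraps the "input" binding in a KStream proxy; the orchestrator
    // above unwraps it and passes the real KStream into this method.
    @StreamListener("input")
    @SendTo("output")
    public KStream<String, Long> process(KStream<Object, String> input) {
        return input
                .flatMapValues((value) -> Arrays.asList(value.toLowerCase().split("\\W+")))
                .groupBy((key, word) -> word)
                .count(Materialized.as("word-counts")) // store name is illustrative
                .toStream();
    }
}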

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -32,77 +32,81 @@ import org.springframework.util.StringUtils;
/**
* Resolver for key and value Serde.
*
* On the inbound, if native decoding is enabled, then any deserialization on the value is
* handled by Kafka. First, we look for any key/value Serde set on the binding itself, if
* that is not available then look at the common Serde set at the global level. If that
* fails, it falls back to byte[]. If native decoding is disabled, then the binder will do
* the deserialization on value and ignore any Serde set for value and rely on the
* contentType provided. Keys are always deserialized at the broker.
*
*
* Same rules apply on the outbound. If native encoding is enabled, then value
* serialization is done at the broker using any binder level Serde for value, if not
* using common Serde, if not, then byte[]. If native encoding is disabled, then the
* binder will do serialization using a contentType. Keys are always serialized by the
* broker.
*
* For state store, use serdes class specified in
* {@link org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore}
* to create Serde accordingly.
*
* @author Soby Chacko
* @author Lei Chen
*/
public class KeyValueSerdeResolver {
private final Map<String, Object> streamConfigGlobalProperties;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
KeyValueSerdeResolver(Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.streamConfigGlobalProperties = streamConfigGlobalProperties;
this.binderConfigurationProperties = binderConfigurationProperties;
}
/**
* Provide the {@link Serde} for inbound key.
* @param extendedConsumerProperties binding level extended
* {@link KafkaStreamsConsumerProperties}
* @return configured {@link Serde} for the inbound key.
*/
public Serde<?> getInboundKeySerde(
KafkaStreamsConsumerProperties extendedConsumerProperties) {
String keySerdeString = extendedConsumerProperties.getKeySerde();
return getKeySerde(keySerdeString);
}
/**
* Provide the {@link Serde} for inbound value.
* @param consumerProperties {@link ConsumerProperties} on binding
* @param extendedConsumerProperties binding level extended
* {@link KafkaStreamsConsumerProperties}
* @return configured {@link Serde} for the inbound value.
*/
public Serde<?> getInboundValueSerde(ConsumerProperties consumerProperties,
KafkaStreamsConsumerProperties extendedConsumerProperties) {
Serde<?> valueSerde;
String valueSerdeString = extendedConsumerProperties.getValueSerde();
try {
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
valueSerde = getValueSerde(valueSerdeString);
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
/**
* Provide the {@link Serde} for outbound key.
* @param properties binding level extended {@link KafkaStreamsProducerProperties}
* @return configured {@link Serde} for the outbound key.
*/
@@ -111,32 +115,33 @@ class KeyValueSerdeResolver {
}
/**
* Provide the {@link Serde} for outbound value.
* @param producerProperties {@link ProducerProperties} on binding
* @param kafkaStreamsProducerProperties binding level extended
* {@link KafkaStreamsProducerProperties}
* @return configured {@link Serde} for the outbound value.
*/
public Serde<?> getOutboundValueSerde(ProducerProperties producerProperties,
KafkaStreamsProducerProperties kafkaStreamsProducerProperties) {
Serde<?> valueSerde;
try {
if (producerProperties.isUseNativeEncoding()) {
valueSerde = getValueSerde(
kafkaStreamsProducerProperties.getValueSerde());
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
/**
* Provide the {@link Serde} for state store.
* @param keySerdeString serde class used for key
* @return {@link Serde} for the state store key.
*/
@@ -145,8 +150,7 @@ class KeyValueSerdeResolver {
}
/**
* Provide the {@link Serde} for state store value.
* @param valueSerdeString serde class used for value
* @return {@link Serde} for the state store value.
*/
@@ -154,8 +158,8 @@ class KeyValueSerdeResolver {
try {
return getValueSerde(valueSerdeString);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
}
@@ -166,27 +170,37 @@ class KeyValueSerdeResolver {
keySerde = Utils.newInstance(keySerdeString, Serde.class);
}
else {
keySerde = this.binderConfigurationProperties.getConfiguration()
.containsKey("default.key.serde")
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get("default.key.serde"),
Serde.class)
: Serdes.ByteArray();
}
keySerde.configure(this.streamConfigGlobalProperties, true);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return keySerde;
}
private Serde<?> getValueSerde(String valueSerdeString)
throws ClassNotFoundException {
Serde<?> valueSerde;
if (StringUtils.hasText(valueSerdeString)) {
valueSerde = Utils.newInstance(valueSerdeString, Serde.class);
}
else {
valueSerde = this.binderConfigurationProperties.getConfiguration()
.containsKey("default.value.serde")
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get("default.value.serde"),
Serde.class)
: Serdes.ByteArray();
}
return valueSerde;
}
}
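As an illustration of the binding-level resolution described in the Javadoc above, a Serde can be referenced by class name through the keySerde/valueSerde binding properties this resolver reads (for example, spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde). A minimal sketch of such a custom Serde, assuming a hypothetical Product type and spring-kafka's JSON (de)serializers:

import org.apache.kafka.common.serialization.Serdes;

import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;

// Hypothetical Serde that could be supplied by class name in the binding
// properties; the resolver instantiates it via Utils.newInstance(..).
public class ProductSerde extends Serdes.WrapperSerde<Product> {

    public ProductSerde() {
        super(new JsonSerializer<>(), new JsonDeserializer<>(Product.class));
    }
}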

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -21,8 +21,8 @@ import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreType;
/**
* Registry that contains the {@link QueryableStoreType}s created from user
* applications.
*
* @author Soby Chacko
* @author Renwei Han
@@ -39,24 +39,25 @@ public class QueryableStoreRegistry {
/**
* Retrieve and return a queryable store by name created in the application.
*
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
* @deprecated in favor of
* {@link InteractiveQueryService#getQueryableStore(String, QueryableStoreType)}
*/
public <T> T getQueryableStoreType(String storeName,
QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
}
}
catch (InvalidStateStoreException ignored) {
// pass through
}
}
return null;
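A short sketch of the replacement API named in the deprecation note above, assuming a store called "word-counts" registered by the application:

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

public class WordCountQueryService {

    @Autowired
    private InteractiveQueryService interactiveQueryService;

    // Look up the interactive query store by name and read a single key.
    public Long countFor(String word) {
        ReadOnlyKeyValueStore<String, Long> store = this.interactiveQueryService
                .getQueryableStore("word-counts",
                        QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(word);
    }
}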

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -32,64 +32,74 @@ import org.apache.kafka.streams.processor.internals.StreamTask;
import org.springframework.util.ReflectionUtils;
/**
* Custom implementation for {@link DeserializationExceptionHandler} that sends the
* records in error to a DLQ topic, then continues stream processing on new records.
*
* @author Soby Chacko
* @since 2.0.0
*/
public class SendToDlqAndContinue implements DeserializationExceptionHandler {
/**
* Key used for DLQ dispatchers.
*/
public static final String KAFKA_STREAMS_DLQ_DISPATCHERS = "spring.cloud.stream.kafka.streams.dlq.dispatchers";
/**
* DLQ dispatcher per topic in the application context. The key here is not the actual
* DLQ topic but the incoming topic that caused the error.
*/
private Map<String, KafkaStreamsDlqDispatch> dlqDispatchers = new HashMap<>();
/**
* For a given topic, send the key/value record to DLQ topic.
*
* @param topic incoming topic that caused the error
* @param key to send
* @param value to send
* @param partition for the topic where this record should be sent
*/
public void sendToDlq(String topic, byte[] key, byte[] value, int partition) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers.get(topic);
kafkaStreamsDlqDispatch.sendToDlq(key, value, partition);
}
@Override
@SuppressWarnings("unchecked")
public DeserializationHandlerResponse handle(ProcessorContext context,
ConsumerRecord<byte[], byte[]> record, Exception exception) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers
.get(record.topic());
kafkaStreamsDlqDispatch.sendToDlq(record.key(), record.value(),
record.partition());
context.commit();
// The following conditional block should be reconsidered when we have a solution
// for this SO problem:
// https://stackoverflow.com/questions/48470899/kafka-streams-deserialization-handler
// Currently, it seems that when a deserialization error happens, no commits
// happen, and the following code will use reflection to get access to the
// underlying KafkaConsumer.
// It works with Kafka 1.0.0, but there is no guarantee it will work in future
// versions of Kafka, as
// we access private fields by name using reflection, but it is a temporary fix.
if (context instanceof ProcessorContextImpl) {
ProcessorContextImpl processorContextImpl = (ProcessorContextImpl) context;
Field task = ReflectionUtils.findField(ProcessorContextImpl.class, "task");
ReflectionUtils.makeAccessible(task);
Object taskField = ReflectionUtils.getField(task, processorContextImpl);
if (taskField.getClass().isAssignableFrom(StreamTask.class)) {
StreamTask streamTask = (StreamTask) taskField;
Field consumer = ReflectionUtils.findField(StreamTask.class, "consumer");
ReflectionUtils.makeAccessible(consumer);
Object kafkaConsumerField = ReflectionUtils.getField(consumer,
streamTask);
if (kafkaConsumerField.getClass().isAssignableFrom(KafkaConsumer.class)) {
KafkaConsumer kafkaConsumer = (KafkaConsumer) kafkaConsumerField;
final Map<TopicPartition, OffsetAndMetadata> consumedOffsetsAndMetadata = new HashMap<>();
TopicPartition tp = new TopicPartition(record.topic(),
record.partition());
OffsetAndMetadata oam = new OffsetAndMetadata(record.offset() + 1);
consumedOffsetsAndMetadata.put(tp, oam);
kafkaConsumer.commitSync(consumedOffsetsAndMetadata);
@@ -102,10 +112,12 @@ public class SendToDlqAndContinue implements DeserializationExceptionHandler{
@Override
@SuppressWarnings("unchecked")
public void configure(Map<String, ?> configs) {
this.dlqDispatchers = (Map<String, KafkaStreamsDlqDispatch>) configs
.get(KAFKA_STREAMS_DLQ_DISPATCHERS);
}
void addKStreamDlqDispatch(String topic,
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch) {
this.dlqDispatchers.put(topic, kafkaStreamsDlqDispatch);
}
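Outside the binder (which wires this handler up itself when its DLQ support is enabled), Kafka Streams registers deserialization exception handlers through its standard configuration mechanism; a minimal stand-alone sketch with illustrative property values:

import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

import org.springframework.cloud.stream.binder.kafka.streams.SendToDlqAndContinue;

public class DlqHandlerConfigExample {

    public static Properties streamsProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app"); // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        // Route records that fail deserialization to the handler instead of
        // failing the stream thread.
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                SendToDlqAndContinue.class);
        return props;
    }
}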

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -23,13 +23,13 @@ import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Iterate through all {@link StreamsBuilderFactoryBean} in the application context and
* start them. As each one completes starting, register the associated KafkaStreams object
* into {@link QueryableStoreRegistry}.
*
* This {@link SmartLifecycle} class ensures that the bean created from it is started very
* late through the bootstrap process by setting the phase value closer to
* Integer.MAX_VALUE. This is to guarantee that the {@link StreamsBuilderFactoryBean} on a
* {@link org.springframework.cloud.stream.annotation.StreamListener} method with multiple
* bindings is only started after all the binding phases have completed successfully.
*
@@ -38,12 +38,14 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private volatile boolean running;
StreamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@@ -65,34 +67,38 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
public synchronized void start() {
if (!this.running) {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(
streamsBuilderFactoryBean.getKafkaStreams());
}
this.running = true;
}
catch (Exception ex) {
throw new KafkaException("Could not start stream: ", ex);
}
}
}
@Override
public synchronized void stop() {
if (this.running) {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.stop();
}
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
finally {
this.running = false;
}
}
}
@Override

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -24,26 +24,28 @@ import org.springframework.cloud.stream.annotation.Output;
/**
* Bindable interface for {@link KStream} input and output.
*
* This interface can be used as a bindable interface with
* {@link org.springframework.cloud.stream.annotation.EnableBinding} when both input and
* output types are single KStream. In other scenarios where multiple types are required,
* other similar bindable interfaces can be created and used. For example, there are cases
* in which multiple KStreams are required on the outbound in the case of KStream
* branching or multiple input types are required either in the form of multiple KStreams
* and a combination of KStreams and KTables. In those cases, new bindable interfaces
* compatible with the requirements must be created. Here are some examples.
*
* <pre class="code">
* interface KStreamBranchProcessor {
* &#064;Input("input")
* KStream&lt;?, ?&gt; input();
*
* &#064;Output("output-1")
* KStream&lt;?, ?&gt; output1();
*
* &#064;Output("output-2")
* KStream&lt;?, ?&gt; output2();
*
* &#064;Output("output-3")
* KStream&lt;?, ?&gt; output3();
*
* ......
*
@@ -53,13 +55,13 @@ import org.springframework.cloud.stream.annotation.Output;
* <pre class="code">
* interface KStreamKtableProcessor {
* &#064;Input("input-1")
* KStream&lt;?, ?&gt; input1();
*
* &#064;Input("input-2")
* KTable&lt;?, ?&gt; input2();
*
* &#064;Output("output")
* KStream&lt;?, ?&gt; output();
*
* ......
*
@@ -72,14 +74,17 @@ import org.springframework.cloud.stream.annotation.Output;
public interface KafkaStreamsProcessor {
/**
* Input binding.
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input")
KStream<?, ?> input();
/**
* Output binding.
* @return {@link Output} binding for {@link KStream} type.
*/
@Output("output")
KStream<?, ?> output();
}
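A hedged sketch of a StreamListener that could back the branching interface from the Javadoc above (binding names follow the Javadoc example; the split predicates are illustrative):

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

public class BranchingExample {

    // KStream.branch(...) returns one stream per predicate; @SendTo maps them
    // in order to the output bindings declared on KStreamBranchProcessor.
    @StreamListener("input")
    @SendTo({"output-1", "output-2", "output-3"})
    public KStream<Object, String>[] route(KStream<Object, String> input) {
        return input.branch(
                (key, value) -> value.startsWith("a"),
                (key, value) -> value.startsWith("b"),
                (key, value) -> true);
    }
}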

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,7 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams.annotations;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
@@ -24,36 +23,37 @@ import java.lang.annotation.Target;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;
/**
* Interface for Kafka Stream state store.
*
* This interface can be used to inject a state store specification into the KStream
* building process so that the desired store can be built by StreamBuilder and added to
* the topology for later use by processors. This is particularly useful when one needs
* to combine the stream DSL with the low-level processor APIs. In those cases, if a
* writable state store is desired in processors, it needs to be created using this
* annotation. Here is an example.
*
* <pre class="code">
* &#064;StreamListener("input")
* &#064;KafkaStreamsStateStore(name="mystate", type= KafkaStreamsStateStoreProperties.StoreType.WINDOW,
* lengthMs=300000)
* public void process(KStream&lt;Object, Product&gt; input) {
* ......
* }
* </pre>
*
* With that, you should be able to read/write this state store in your
* processor/transformer code.
*
* <pre class="code">
* new Processor&lt;Object, Product&gt;() {
* WindowStore&lt;Object, String&gt; state;
* &#064;Override
* public void init(ProcessorContext processorContext) {
* state = (WindowStore)processorContext.getStateStore("mystate");
* ......
* }
* }
* </pre>
*
* @author Lei Chen
*/
@@ -64,42 +64,52 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
public @interface KafkaStreamsStateStore {
/**
* Provides name of the state store.
* @return name of state store.
*/
String name() default "";
/**
* State store type.
* @return {@link KafkaStreamsStateStoreProperties.StoreType} of state store.
*/
KafkaStreamsStateStoreProperties.StoreType type() default KafkaStreamsStateStoreProperties.StoreType.KEYVALUE;
/**
* Serde used for key.
* @return key serde of state store.
*/
String keySerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Serde used for value.
* @return value serde of state store.
*/
String valueSerde() default "org.apache.kafka.common.serialization.Serdes$StringSerde";
/**
* Length, in milliseconds, of the windowed store's window.
* @return length in milliseconds of the window (for windowed stores).
*/
long lengthMs() default 0;
/**
* Retention period for windowed store windows.
* @return the maximum period of time, in milliseconds, to keep each window in
* this store (for windowed stores).
*/
long retentionMs() default 0;
/**
* Whether caching is enabled or not.
* @return whether caching should be enabled on the created store.
*/
boolean cache() default false;
/**
* Whether logging is enabled or not.
* @return whether logging should be enabled on the created store.
*/
boolean logging() default true;
}
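To complete the Javadoc example above, a sketch of attaching the declared store to a low-level processor by name (Product is a hypothetical type; the write logic is illustrative):

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.WindowStore;

import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;

public class StateStoreExample {

    @StreamListener("input")
    @KafkaStreamsStateStore(name = "mystate",
            type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
    public void process(KStream<Object, Product> input) {
        // Passing the store name here makes "mystate" available to the processor.
        input.process(() -> new Processor<Object, Product>() {

            private WindowStore<Object, Product> state;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                this.state = (WindowStore<Object, Product>) context.getStateStore("mystate");
            }

            @Override
            public void process(Object key, Product value) {
                this.state.put(key, value); // illustrative write into the window store
            }

            @Override
            public void close() {
            }
        }, "mystate");
    }
}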

View File

@@ -0,0 +1,87 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.util.ClassUtils;
/**
* Custom {@link org.springframework.context.annotation.Condition} that detects the
* presence of java.util.function.Function|Consumer beans. Used for Kafka Streams
* function support.
*
* @author Soby Chacko
* @since 2.2.0
*/
public class FunctionDetectorCondition extends SpringBootCondition {
@SuppressWarnings({ "unchecked", "rawtypes" })
@Override
public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
if (context != null && context.getBeanFactory() != null) {
Map functionTypes = context.getBeanFactory().getBeansOfType(Function.class);
functionTypes.putAll(context.getBeanFactory().getBeansOfType(Consumer.class));
final Map<String, Object> kstreamFunctions = pruneFunctionBeansForKafkaStreams(functionTypes, context);
if (!kstreamFunctions.isEmpty()) {
return ConditionOutcome.match("Matched. Function/Consumer beans found");
}
else {
return ConditionOutcome.noMatch("No match. No Function/Consumer beans found");
}
}
return ConditionOutcome.noMatch("No match. No Function/Consumer beans found");
}
private static <T> Map<String, T> pruneFunctionBeansForKafkaStreams(Map<String, T> originalFunctionBeans,
ConditionContext context) {
final Map<String, T> prunedMap = new HashMap<>();
for (String key : originalFunctionBeans.keySet()) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedMap.put(key, originalFunctionBeans.get(key));
}
}
catch (NoSuchMethodException e) {
//ignore
}
}
return prunedMap;
}
}

View File

@@ -0,0 +1,47 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 2.2.0
*/
@Configuration
@EnableConfigurationProperties(StreamFunctionProperties.class)
public class KafkaStreamsFunctionAutoConfiguration {
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessorInvoker kafkaStreamsFunctionProcessorInvoker(
KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
return new KafkaStreamsFunctionProcessorInvoker(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes(),
kafkaStreamsFunctionProcessor);
}
@Bean
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor() {
return new KafkaStreamsFunctionBeanPostProcessor();
}
}

View File

@@ -0,0 +1,78 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Stream;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.core.ResolvableType;
import org.springframework.util.ClassUtils;
/**
*
* @author Soby Chacko
* @since 2.2.0
*
*/
class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
private ConfigurableListableBeanFactory beanFactory;
private Map<String, ResolvableType> resolvableTypeMap = new TreeMap<>();
public Map<String, ResolvableType> getResolvableTypes() {
return this.resolvableTypeMap;
}
@Override
public void afterPropertiesSet() {
String[] functionNames = this.beanFactory.getBeanNamesForType(Function.class);
String[] consumerNames = this.beanFactory.getBeanNamesForType(Consumer.class);
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)).forEach(this::extractResolvableTypes);
}
private void extractResolvableTypes(String key) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
this.beanFactory.getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
resolvableTypeMap.put(key, resolvableType);
}
catch (NoSuchMethodException e) {
//ignore
}
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = (ConfigurableListableBeanFactory) beanFactory;
}
}

View File

@@ -0,0 +1,47 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Map;
import javax.annotation.PostConstruct;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
import org.springframework.core.ResolvableType;
/**
*
* @author Soby Chacko
* @since 2.1.0
*/
class KafkaStreamsFunctionProcessorInvoker {
private final KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor;
private final Map<String, ResolvableType> resolvableTypeMap;
KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
this.kafkaStreamsFunctionProcessor = kafkaStreamsFunctionProcessor;
this.resolvableTypeMap = resolvableTypeMap;
}
@PostConstruct
void invoke() {
resolvableTypeMap.forEach((key, value) ->
this.kafkaStreamsFunctionProcessor.orchestrateFunctionInvoking(value, key));
}
}

View File

@@ -0,0 +1,43 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Type;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.function.context.WrapperDetector;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionWrapperDetector implements WrapperDetector {
@Override
public boolean isWrapper(Type type) {
if (type instanceof Class<?>) {
Class<?> cls = (Class<?>) type;
return KStream.class.isAssignableFrom(cls) ||
KTable.class.isAssignableFrom(cls) ||
GlobalKTable.class.isAssignableFrom(cls);
}
return false;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,26 +19,32 @@ package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
/**
* {@link ConfigurationProperties} that can be used by end user Kafka Stream applications. This class provides
* convenient ways to access the commonly used kafka stream properties from the user application. For example, windowing
* operations are common use cases in stream processing and one can provide window specific properties at runtime and use
* {@link ConfigurationProperties} that can be used by end user Kafka Stream applications.
* This class provides convenient ways to access the commonly used kafka stream properties
* from the user application. For example, windowing operations are common use cases in
* stream processing and one can provide window specific properties at runtime and use
* those properties in the applications using this class.
*
* @deprecated The properties exposed by this class can be used directly on Kafka Streams API in the application.
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
@Deprecated
public class KafkaStreamsApplicationSupportProperties {
private TimeWindow timeWindow;
public TimeWindow getTimeWindow() {
return timeWindow;
return this.timeWindow;
}
public void setTimeWindow(TimeWindow timeWindow) {
this.timeWindow = timeWindow;
}
/**
* Properties required by time windows.
*/
public static class TimeWindow {
private int length;
@@ -46,7 +52,7 @@ public class KafkaStreamsApplicationSupportProperties {
private int advanceBy;
public int getLength() {
return length;
return this.length;
}
public void setLength(int length) {
@@ -54,11 +60,13 @@ public class KafkaStreamsApplicationSupportProperties {
}
public int getAdvanceBy() {
return advanceBy;
return this.advanceBy;
}
public void setAdvanceBy(int advanceBy) {
this.advanceBy = advanceBy;
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -20,25 +20,42 @@ import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
/**
* Kafka Streams binder configuration properties.
*
* @author Soby Chacko
* @author Gary Russell
*/
public class KafkaStreamsBinderConfigurationProperties extends KafkaBinderConfigurationProperties {
public class KafkaStreamsBinderConfigurationProperties
extends KafkaBinderConfigurationProperties {
public KafkaStreamsBinderConfigurationProperties(KafkaProperties kafkaProperties) {
super(kafkaProperties);
}
/**
* Enumeration for various Serde errors.
*/
public enum SerdeError {
/**
* Deserialization error handler with log and continue.
*/
logAndContinue,
/**
* Deserialization error handler with log and fail.
*/
logAndFail,
/**
* Deserialization error handler with DLQ send.
*/
sendToDlq
}
private String applicationId;
public String getApplicationId() {
return applicationId;
return this.applicationId;
}
public void setApplicationId(String applicationId) {
@@ -46,17 +63,19 @@ public class KafkaStreamsBinderConfigurationProperties extends KafkaBinderConfig
}
/**
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use
* when there is a Serde error. {@link KafkaStreamsBinderConfigurationProperties.SerdeError}
* values are used to provide the exception handler on consumer binding.
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use when
* there is a Serde error.
* {@link KafkaStreamsBinderConfigurationProperties.SerdeError} values are used to
* provide the exception handler on consumer binding.
*/
private KafkaStreamsBinderConfigurationProperties.SerdeError serdeError;
public KafkaStreamsBinderConfigurationProperties.SerdeError getSerdeError() {
return serdeError;
return this.serdeError;
}
public void setSerdeError(KafkaStreamsBinderConfigurationProperties.SerdeError serdeError) {
public void setSerdeError(
KafkaStreamsBinderConfigurationProperties.SerdeError serdeError) {
this.serdeError = serdeError;
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,6 +19,9 @@ package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Extended binding properties holder that delegates to Kafka Streams producer and
* consumer properties.
*
* @author Marius Bogoevici
*/
public class KafkaStreamsBindingProperties implements BinderSpecificPropertiesProvider {
@@ -28,7 +31,7 @@ public class KafkaStreamsBindingProperties implements BinderSpecificPropertiesPr
private KafkaStreamsProducerProperties producer = new KafkaStreamsProducerProperties();
public KafkaStreamsConsumerProperties getConsumer() {
return consumer;
return this.consumer;
}
public void setConsumer(KafkaStreamsConsumerProperties consumer) {
@@ -36,10 +39,11 @@ public class KafkaStreamsBindingProperties implements BinderSpecificPropertiesPr
}
public KafkaStreamsProducerProperties getProducer() {
return producer;
return this.producer;
}
public void setProducer(KafkaStreamsProducerProperties producer) {
this.producer = producer;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,6 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
/**
* Extended properties for Kafka Streams consumer.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
@@ -37,12 +39,12 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
private String valueSerde;
/**
* Materialized as a KeyValueStore
* Materialized as a KeyValueStore.
*/
private String materializedAs;
public String getApplicationId() {
return applicationId;
return this.applicationId;
}
public void setApplicationId(String applicationId) {
@@ -50,7 +52,7 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
}
public String getKeySerde() {
return keySerde;
return this.keySerde;
}
public void setKeySerde(String keySerde) {
@@ -58,7 +60,7 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
}
public String getValueSerde() {
return valueSerde;
return this.valueSerde;
}
public void setValueSerde(String valueSerde) {
@@ -66,7 +68,7 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
}
public String getMaterializedAs() {
return materializedAs;
return this.materializedAs;
}
public void setMaterializedAs(String materializedAs) {

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,17 +16,26 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.AbstractExtendedBindingProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Kafka streams specific extended binding properties class that extends from
* {@link AbstractExtendedBindingProperties}.
*
* @author Marius Bogoevici
* @author Oleg Zhurakousky
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
public class KafkaStreamsExtendedBindingProperties
extends AbstractExtendedBindingProperties<KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties, KafkaStreamsBindingProperties> {
// @checkstyle:off
extends
AbstractExtendedBindingProperties<KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties, KafkaStreamsBindingProperties> {
// @checkstyle:on
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.streams.default";
@Override
@@ -34,8 +43,14 @@ public class KafkaStreamsExtendedBindingProperties
return DEFAULTS_PREFIX;
}
@Override
public Map<String, KafkaStreamsBindingProperties> getBindings() {
return this.doGetBindings();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaStreamsBindingProperties.class;
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,6 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
/**
* Extended properties for Kafka Streams producer.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
@@ -35,7 +37,7 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
private String valueSerde;
public String getKeySerde() {
return keySerde;
return this.keySerde;
}
public void setKeySerde(String keySerde) {
@@ -43,10 +45,11 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
}
public String getValueSerde() {
return valueSerde;
return this.valueSerde;
}
public void setValueSerde(String valueSerde) {
this.valueSerde = valueSerde;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,41 +16,51 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
/**
* Properties for Kafka Streams state store.
*
* @author Lei Chen
*/
public class KafkaStreamsStateStoreProperties {
/**
* Enumeration for store type.
*/
public enum StoreType {
/**
* Key value store.
*/
KEYVALUE("keyvalue"),
/**
* Window store.
*/
WINDOW("window"),
SESSION("session")
;
/**
* Session store.
*/
SESSION("session");
private final String type;
/**
* @param type the state store type
*/
StoreType(final String type) {
this.type = type;
}
@Override
public String toString() {
return type;
return this.type;
}
}
/**
* name for this state store
* Name for this state store.
*/
private String name;
/**
* type for this state store
* Type for this state store.
*/
private StoreType type;
@@ -75,18 +85,17 @@ public class KafkaStreamsStateStoreProperties {
private String valueSerdeString;
/**
* Whether enable cache in this state store.
* Whether caching is enabled on this state store.
*/
private boolean cacheEnabled;
/**
* Whether enable logging in this state store.
* Whether logging is enabled on this state store.
*/
private boolean loggingDisabled;
public String getName() {
return name;
return this.name;
}
public void setName(String name) {
@@ -94,7 +103,7 @@ public class KafkaStreamsStateStoreProperties {
}
public StoreType getType() {
return type;
return this.type;
}
public void setType(StoreType type) {
@@ -102,7 +111,7 @@ public class KafkaStreamsStateStoreProperties {
}
public long getLength() {
return length;
return this.length;
}
public void setLength(long length) {
@@ -110,7 +119,7 @@ public class KafkaStreamsStateStoreProperties {
}
public long getRetention() {
return retention;
return this.retention;
}
public void setRetention(long retention) {
@@ -118,7 +127,7 @@ public class KafkaStreamsStateStoreProperties {
}
public String getKeySerdeString() {
return keySerdeString;
return this.keySerdeString;
}
public void setKeySerdeString(String keySerdeString) {
@@ -126,7 +135,7 @@ public class KafkaStreamsStateStoreProperties {
}
public String getValueSerdeString() {
return valueSerdeString;
return this.valueSerdeString;
}
public void setValueSerdeString(String valueSerdeString) {
@@ -134,7 +143,7 @@ public class KafkaStreamsStateStoreProperties {
}
public boolean isCacheEnabled() {
return cacheEnabled;
return this.cacheEnabled;
}
public void setCacheEnabled(boolean cacheEnabled) {
@@ -142,10 +151,11 @@ public class KafkaStreamsStateStoreProperties {
}
public boolean isLoggingDisabled() {
return loggingDisabled;
return this.loggingDisabled;
}
public void setLoggingDisabled(boolean loggingDisabled) {
this.loggingDisabled = loggingDisabled;
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -34,40 +34,45 @@ import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s
* from {@link CompositeMessageConverterFactory}.
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s from
* {@link CompositeMessageConverterFactory}.
*
* The primary motivation for this class is to provide an avro based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based application,
* the applications can deserialize the incoming Kafka Streams records using the built in
* Avro {@link MessageConverter}. However, this same message conversion approach will not work
* downstream in other operations in the topology for Kafka Streams as some of them needs a
* {@link Serde} instance that can talk to the Spring Cloud Stream provided Schema Registry.
* This implementation will solve that problem.
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based
* application, the applications can deserialize the incoming Kafka Streams records using
* the built in Avro {@link MessageConverter}. However, this same message conversion
* approach will not work downstream in other operations in the topology for Kafka Streams
* as some of them needs a {@link Serde} instance that can talk to the Spring Cloud Stream
* provided Schema Registry. This implementation will solve that problem.
*
* Only Avro and JSON based converters are exposed as binder provided {@link Serde} implementations currently.
* Only Avro and JSON based converters are exposed as binder provided {@link Serde}
* implementations currently.
*
* Users of this class must call the {@link CompositeNonNativeSerde#configure(Map, boolean)} method
* to configure the {@link Serde} object. At the very least the configuration map must include a key
* called "valueClass" to indicate the type of the target object for deserialization. If any other
* content type other than JSON is needed (only Avro is available now other than JSON), that needs
* to be included in the configuration map with the key "contentType". For example,
* Users of this class must call the
* {@link CompositeNonNativeSerde#configure(Map, boolean)} method to configure the
* {@link Serde} object. At the very least the configuration map must include a key called
* "valueClass" to indicate the type of the target object for deserialization. If any
* other content type other than JSON is needed (only Avro is available now other than
* JSON), that needs to be included in the configuration map with the key "contentType".
* For example,
*
* <pre class="code">
* Map<String, Object> config = new HashMap<>();
* Map&lt;String, Object&gt; config = new HashMap&lt;&gt;();
* config.put("valueClass", Foo.class);
* config.put("contentType", "application/avro");
* </pre>
*
* Then use the above map when calling the configure method.
*
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams application
* that uses Spring Cloud Stream schema registry for schema evolution.
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams
* application that uses Spring Cloud Stream schema registry for schema evolution.
*
* An instance of this class is provided as a bean by the binder configuration and typically the applications
* can autowire that bean. This is the expected usage pattern of this class.
* An instance of this class is provided as a bean by the binder configuration and
* typically the applications can autowire that bean. This is the expected usage pattern
* of this class.
*
* @param <T> type of the object to marshall
* @author Soby Chacko
* @since 2.1
*/
@@ -77,15 +82,19 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
private static final String AVRO_FORMAT = "avro";
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application", "*+" + AVRO_FORMAT);
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application",
"*+" + AVRO_FORMAT);
private final CompositeNonNativeDeserializer<T> compositeNonNativeDeserializer;
private final CompositeNonNativeSerializer<T> compositeNonNativeSerializer;
public CompositeNonNativeSerde(CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.compositeNonNativeDeserializer = new CompositeNonNativeDeserializer<>(compositeMessageConverterFactory);
this.compositeNonNativeSerializer = new CompositeNonNativeSerializer<>(compositeMessageConverterFactory);
public CompositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.compositeNonNativeDeserializer = new CompositeNonNativeDeserializer<>(
compositeMessageConverterFactory);
this.compositeNonNativeSerializer = new CompositeNonNativeSerializer<>(
compositeMessageConverterFactory);
}
@Override
@@ -96,7 +105,7 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
@Override
public void close() {
//No-op
// No-op
}
@Override
@@ -110,12 +119,12 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
}
private static MimeType resolveMimeType(Map<String, ?> configs) {
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)){
String contentType = (String)configs.get(MessageHeaders.CONTENT_TYPE);
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)) {
String contentType = (String) configs.get(MessageHeaders.CONTENT_TYPE);
if (DEFAULT_AVRO_MIME_TYPE.equals(MimeTypeUtils.parseMimeType(contentType))) {
return DEFAULT_AVRO_MIME_TYPE;
}
else if(contentType.contains("avro")) {
else if (contentType.contains("avro")) {
return MimeTypeUtils.parseMimeType("application/avro");
}
else {
@@ -130,7 +139,7 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
/**
* Custom {@link Deserializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <U> Parameterized target type for deserialization
* @param <U> parameterized target type for deserialization
*/
private static class CompositeNonNativeDeserializer<U> implements Deserializer<U> {
@@ -140,15 +149,19 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
private Class<?> valueClass;
CompositeNonNativeDeserializer(CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
CompositeNonNativeDeserializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER), "Deserializers must provide a configuration for valueClass.");
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER),
"Deserializers must provide a configuration for valueClass.");
final Object valueClass = configs.get(VALUE_CLASS_HEADER);
Assert.isTrue(valueClass instanceof Class, "Deserializers must provide a valid value for valueClass.");
Assert.isTrue(valueClass instanceof Class,
"Deserializers must provide a valid value for valueClass.");
this.valueClass = (Class<?>) valueClass;
this.mimeType = resolveMimeType(configs);
}
@@ -157,30 +170,36 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
@Override
public U deserialize(String topic, byte[] data) {
Message<?> message = MessageBuilder.withPayload(data)
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString()).build();
U messageConverted = (U)messageConverter.fromMessage(message, this.valueClass);
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString())
.build();
U messageConverted = (U) this.messageConverter.fromMessage(message,
this.valueClass);
Assert.notNull(messageConverted, "Deserialization failed.");
return messageConverted;
}
@Override
public void close() {
//No-op
// No-op
}
}
/**
* Custom {@link Serializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <V> Parameterized type for serialization
* @param <V> parameterized type for serialization
*/
private static class CompositeNonNativeSerializer<V> implements Serializer<V> {
private final MessageConverter messageConverter;
private MimeType mimeType;
CompositeNonNativeSerializer(CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
CompositeNonNativeSerializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
}
@Override
@@ -194,14 +213,16 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, this.mimeType.toString());
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = messageConverter.toMessage(message.getPayload(),
messageHeaders).getPayload();
return (byte[])payload;
final Object payload = this.messageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
//No-op
// No-op
}
}
}

View File

@@ -1,5 +1,9 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsApplicationSupportAutoConfiguration
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsApplicationSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration
org.springframework.cloud.function.context.WrapperDetector=\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionWrapperDetector

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -18,7 +18,6 @@ package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.boot.WebApplicationType;
@@ -33,7 +32,6 @@ import org.springframework.kafka.test.rule.KafkaEmbedded;
/**
* @author Soby Chacko
*/
@Ignore("Temporarily disabling the test as builds are getting slower due to this.")
public class KafkaStreamsBinderBootstrapTest {
@ClassRule
@@ -41,24 +39,34 @@ public class KafkaStreamsBinderBootstrapTest {
@Test
public void testKafkaStreamsBinderWithCustomEnvironmentCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.input.destination=foo",
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.default.consumer.application-id"
+ "=testKafkaStreamsBinderWithCustomEnvironmentCanStart",
"--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.bindings.input.binder=kBind1",
"--spring.cloud.stream.binders.kBind1.type=kstream",
"--spring.cloud.stream.binders.kBind1.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kBind1.environment.spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.binders.kBind1.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kBind1.environment.spring"
+ ".cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
applicationContext.close();
}
@Test
public void testKafkaStreamsBinderWithStandardConfigurationCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.default.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
applicationContext.close();
}
@@ -71,10 +79,14 @@ public class KafkaStreamsBinderBootstrapTest {
public void handle(@Input("input") KStream<Object, String> stream) {
}
}
interface StreamSourceProcessor {
@Input("input")
KStream<?, ?> inputStream();
}
}

View File

@@ -0,0 +1,224 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderWordCountBranchesFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "foo", "bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "foo", "bar");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamWordCountWithStringInputAndPojoOutput() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output1.contentType=application/json",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output2.contentType=application/json",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.bindings.output3.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId" +
"=KafkaStreamsBinderWordCountBranchesFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidate(context);
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
assertThat(cr.value().contains("\"word\":\"english\",\"count\":1")).isTrue();
template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value().contains("\"word\":\"french\",\"count\":2")).isTrue();
template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value().contains("\"word\":\"spanish\",\"count\":3")).isTrue();
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class WordCountProcessorApplication {
@Bean
@SuppressWarnings("unchecked")
public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
}
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
}

View File

@@ -0,0 +1,186 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderWordCountFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamWordCountFunction() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
finally {
pf.destroy();
}
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}
}
}

View File

@@ -0,0 +1,158 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.ProcessorSupplier;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsFunctionStateStoreTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testKafkaStreamsFunctionWithMultipleStateStores() throws Exception {
SpringApplication app = new SpringApplication(StateStoreTestApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
Thread.sleep(2000L);
StateStoreTestApplication processorApplication = context
.getBean(StateStoreTestApplication.class);
KeyValueStore<Long, Long> state1 = processorApplication.state1;
assertThat(processorApplication.processed).isTrue();
assertThat(state1 != null).isTrue();
assertThat(state1.name()).isEqualTo("my-store");
WindowStore<Long, Long> state2 = processorApplication.state2;
assertThat(state2 != null).isTrue();
assertThat(state2.name()).isEqualTo("other-store");
assertThat(state2.persistent()).isTrue();
}
finally {
pf.destroy();
}
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
static class StateStoreTestApplication {
KeyValueStore<Long, Long> state1;
WindowStore<Long, Long> state2;
boolean processed;
@Bean
public java.util.function.Consumer<KStream<Object, String>> process() {
return input ->
input.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
state1 = (KeyValueStore<Long, Long>) context.getStateStore("my-store");
state2 = (WindowStore<Long, Long>) context.getStateStore("other-store");
}
@Override
public void process(Object key, String value) {
processed = true;
}
@Override
public void close() {
if (state1 != null) {
state1.close();
}
if (state2 != null) {
state2.close();
}
}
}, "my-store", "other-store");
}
@Bean
public StoreBuilder myStore() {
return Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("my-store"), Serdes.Long(),
Serdes.Long());
}
@Bean
public StoreBuilder otherStore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("other-store",
1L, 3, 3L, false), Serdes.Long(),
Serdes.Long());
}
}
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
}
}

View File

@@ -0,0 +1,352 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class StreamToGlobalKTableFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"enriched-order");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<Long, EnrichedOrder> consumer;
@Test
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.input-x.destination=customers",
"--spring.cloud.stream.bindings.input-y.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-x.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-y.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams.function" +
".StreamToGlobalKTableFunctionTests$OrderSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams.function" +
".StreamToGlobalKTableFunctionTests$CustomerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams.function" +
".StreamToGlobalKTableFunctionTests$ProductSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams." +
"function.StreamToGlobalKTableFunctionTests$EnrichedOrderSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=" +
"StreamToGlobalKTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
CustomerSerde customerSerde = new CustomerSerde();
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
customerSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Customer> pfCustomer =
new DefaultKafkaProducerFactory<>(senderPropsCustomer);
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer, true);
template.setDefaultTopic("customers");
for (long i = 0; i < 5; i++) {
final Customer customer = new Customer();
customer.setName("customer-" + i);
template.sendDefault(i, customer);
}
Map<String, Object> senderPropsProduct = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
ProductSerde productSerde = new ProductSerde();
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, productSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Product> pfProduct =
new DefaultKafkaProducerFactory<>(senderPropsProduct);
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct, true);
productTemplate.setDefaultTopic("products");
for (long i = 0; i < 5; i++) {
final Product product = new Product();
product.setName("product-" + i);
productTemplate.sendDefault(i, product);
}
Map<String, Object> senderPropsOrder = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
OrderSerde orderSerde = new OrderSerde();
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, orderSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(senderPropsOrder);
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
orderTemplate.setDefaultTopic("orders");
for (long i = 0; i < 5; i++) {
final Order order = new Order();
order.setCustomerId(i);
order.setProductId(i);
orderTemplate.sendDefault(i, order);
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
EnrichedOrderSerde enrichedOrderSerde = new EnrichedOrderSerde();
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
enrichedOrderSerde.deserializer().getClass());
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"org.springframework.cloud.stream.binder.kafka.streams." +
"function.StreamToGlobalKTableFunctionTests.EnrichedOrder");
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "enriched-order");
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<Long, EnrichedOrder>> enrichedOrders = new ArrayList<>();
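// Poll for up to 30 seconds, until all five enriched orders have arrived.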
do {
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<Long, EnrichedOrder> record : records) {
enrichedOrders.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < 5 && (System.currentTimeMillis() - start) < 30000);
assertThat(count).isEqualTo(5);
assertThat(enrichedOrders).hasSize(5);
enrichedOrders.sort(Comparator.comparing(o -> o.key));
for (int i = 0; i < 5; i++) {
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders.get(i);
assertThat(enrichedOrderKeyValue.key).isEqualTo((long) i);
EnrichedOrder enrichedOrder = enrichedOrderKeyValue.value;
assertThat(enrichedOrder.getOrder().customerId).isEqualTo(i);
assertThat(enrichedOrder.getOrder().productId).isEqualTo(i);
assertThat(enrichedOrder.getCustomer().name).isEqualTo("customer-" + i);
assertThat(enrichedOrder.getProduct().name).isEqualTo("product-" + i);
}
pfCustomer.destroy();
pfProduct.destroy();
pfOrder.destroy();
consumer.close();
}
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("input-x")
GlobalKTable<?, ?> inputX();
@Input("input-y")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {
@Bean
public Function<KStream<Long, Order>,
Function<GlobalKTable<Long, Customer>,
Function<GlobalKTable<Long, Product>, KStream<Long, EnrichedOrder>>>> process() {
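// Curried form: one function argument per input binding. Orders are first
// joined against the customer GlobalKTable, then against the product
// GlobalKTable to assemble the EnrichedOrder.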
return orderStream -> (
customers -> (
products -> (
orderStream.join(customers,
(orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order))
.join(products,
(orderId, customerOrder) -> customerOrder
.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
})
)
)
);
}
}
static class Order {
long customerId;
long productId;
public long getCustomerId() {
return customerId;
}
public void setCustomerId(long customerId) {
this.customerId = customerId;
}
public long getProductId() {
return productId;
}
public void setProductId(long productId) {
this.productId = productId;
}
}
static class Customer {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class Product {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class EnrichedOrder {
Product product;
Customer customer;
Order order;
public Product getProduct() {
return product;
}
public void setProduct(Product product) {
this.product = product;
}
public Customer getCustomer() {
return customer;
}
public void setCustomer(Customer customer) {
this.customer = customer;
}
public Order getOrder() {
return order;
}
public void setOrder(Order order) {
this.order = order;
}
}
private static class CustomerOrder {
private final Customer customer;
private final Order order;
CustomerOrder(final Customer customer, final Order order) {
this.customer = customer;
this.order = order;
}
long productId() {
return order.getProductId();
}
}
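// Empty JsonSerde subclasses pin the generic target type so that each serde
// can be instantiated reflectively from the keySerde/valueSerde properties.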
public static class OrderSerde extends JsonSerde<Order> {
}
public static class CustomerSerde extends JsonSerde<Customer> {
}
public static class ProductSerde extends JsonSerde<Product> {
}
public static class EnrichedOrderSerde extends JsonSerde<EnrichedOrder> {
}
}


@@ -0,0 +1,409 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class StreamToTableJoinFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1,
true, "output-topic-1", "output-topic-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testStreamToTable() throws Exception {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-1",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process1",
"--spring.cloud.stream.bindings.input-1.destination=user-clicks-1",
"--spring.cloud.stream.bindings.input-2.destination=user-regions-1",
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
"--spring.cloud.stream.bindings.input-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-2.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
new KeyValue<>("alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"),
new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"),
new KeyValue<>("alice", "europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"),
new KeyValue<>("fang", "asia")
);
Map<String, Object> senderProps1 = KafkaTestUtils.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-1");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 2: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L),
new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L),
new KeyValue<>("alice", 40L),
new KeyValue<>("fang", 99L)
);
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-1");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L),
new KeyValue<>("europe", 109L),
new KeyValue<>("asia", 124L)
);
//Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < expectedClicksPerRegion.size() && (System.currentTimeMillis() - start) < 30000);
assertThat(count).isEqualTo(expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest() throws Exception {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-2");
// Produce data first to the input topic to test the startOffset setting on the
// binding (which is set to earliest below).
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L)
);
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-2");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input-1.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-2.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.bindings.input-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-2.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id" +
"=StreamToTableJoinFunctionTests-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
Thread.sleep(1000L);
// Input 2: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
new KeyValue<>("alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"),
new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"),
new KeyValue<>("alice", "europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"),
new KeyValue<>("fang", "asia")
);
Map<String, Object> senderProps1 = KafkaTestUtils.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-2");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks1 = Arrays.asList(
new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L),
new KeyValue<>("fang", 99L)
);
for (KeyValue<String, Long> keyValue : userClicks1) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L),
new KeyValue<>("europe", 56L),
new KeyValue<>("asia", 124L),
// The ten "alice" records (100 clicks each, 1000 in total) were produced
// before the application started. Since startOffset is earliest for this
// binding they are read, but no region exists for alice yet, so the join
// falls back to UNKNOWN.
new KeyValue<>("UNKNOWN", 1000L)
);
//Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < expectedClicksPerRegion.size() && (System.currentTimeMillis() - start) < 30000);
assertThat(count).isEqualTo(expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
/**
* Tuple for a region and its associated number of clicks.
*/
private static final class RegionWithClicks {
private final String region;
private final long clicks;
RegionWithClicks(String region, long clicks) {
if (region == null || region.isEmpty()) {
throw new IllegalArgumentException("region must be set");
}
if (clicks < 0) {
throw new IllegalArgumentException("clicks must not be negative");
}
this.region = region;
this.clicks = clicks;
}
public String getRegion() {
return region;
}
public long getClicks() {
return clicks;
}
}
@EnableBinding(KStreamKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class CountClicksPerRegionApplication {
@Bean
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process1() {
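// Left-join clicks against the regions table; users with no region record
// are counted under the fallback region "UNKNOWN".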
return userClicksStream -> (userRegionsTable -> (userClicksStream
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ?
"UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce((firstClicks, secondClicks) -> firstClicks + secondClicks)
.toStream()));
}
}
interface KStreamKTableProcessor {
/**
* Input binding.
*
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input-1")
KStream<?, ?> input1();
/**
* Input binding.
*
* @return {@link Input} binding for {@link KTable} type.
*/
@Input("input-2")
KTable<?, ?> input2();
/**
* Output binding.
*
* @return {@link Output} binding for {@link KStream} type.
*/
@Output("output")
KStream<?, ?> output();
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -69,27 +69,32 @@ import static org.mockito.Mockito.verify;
public abstract class DeserializationErrorHandlerByKafkaTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts", "error.words.group",
"error.word1.groupx", "error.word2.groupx");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "error.words.group", "error.word1.groupx", "error.word2.groupx");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@SpyBean
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate conversionDelegate;
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers", embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes", embeddedKafka.getZookeeperConnectionString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes",
embeddedKafka.getZookeeperConnectionString());
System.setProperty("server.port","0");
System.setProperty("spring.jmx.enabled","false");
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("fooc", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("fooc", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
}
@@ -99,42 +104,50 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
consumer.close();
}
// @checkstyle:off
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq",
"spring.cloud.stream.bindings.input.group=group",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=" +
"org.apache.kafka.common.serialization.Serdes$IntegerSerde"},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)
public static class DeserializationByKafkaAndDlqTests extends DeserializationErrorHandlerByKafkaTests {
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
// @checkstyle:on
public static class DeserializationByKafkaAndDlqTests
extends DeserializationErrorHandlerByKafkaTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1, "error.words.group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1, "error.words.group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.words.group");
assertThat(cr.value().equals("foobar")).isTrue();
//Ensuring that the deserialization was indeed done by Kafka natively
verify(KafkaStreamsMessageConversionDelegate, never()).deserializeOnInbound(any(Class.class), any(KStream.class));
verify(KafkaStreamsMessageConversionDelegate, never()).serializeOnOutbound(any(KStream.class));
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
}
}
// @checkstyle:off
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
@@ -142,17 +155,18 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
"spring.cloud.stream.kafka.streams.default.consumer.applicationId=deser-kafka-dlq-multi-input",
"spring.cloud.stream.bindings.input.group=groupx",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=" +
"org.apache.kafka.common.serialization.Serdes$IntegerSerde"},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)
public static class DeserializationByKafkaAndDlqTestsWithMultipleInputs extends DeserializationErrorHandlerByKafkaTests {
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
// @checkstyle:on
public static class DeserializationByKafkaAndDlqTestsWithMultipleInputs
extends DeserializationErrorHandlerByKafkaTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("word1");
template.sendDefault("foobar");
@@ -160,22 +174,30 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
template.setDefaultTopic("word2");
template.sendDefault("foobar");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobarx", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobarx",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.word1.groupx", "error.word2.groupx");
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.word1.groupx",
"error.word2.groupx");
//TODO: Investigate why the ordering matters below: i.e. if we consume from error.word1.groupx first, an exception is thrown.
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1, "error.word2.groupx");
// TODO: Investigate why the ordering matters below: i.e.
// if we consume from error.word1.groupx first, an exception is thrown.
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word2.groupx");
assertThat(cr1.value().equals("foobar")).isTrue();
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1, "error.word1.groupx");
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word1.groupx");
assertThat(cr2.value().equals("foobar")).isTrue();
//Ensuring that the deserialization was indeed done by Kafka natively
verify(KafkaStreamsMessageConversionDelegate, never()).deserializeOnInbound(any(Class.class), any(KStream.class));
verify(KafkaStreamsMessageConversionDelegate, never()).serializeOnOutbound(any(KStream.class));
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@@ -192,14 +214,15 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
public KStream<?, String> process(KStream<Object, String> input) {
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(timeWindows)
.count(Materialized.as("foo-WordCounts-x"))
.toStream()
.map((key, value) -> new KeyValue<>(null, "Count for " + key.key() + " : " + value));
.windowedBy(timeWindows).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -63,28 +63,35 @@ import static org.mockito.Mockito.verify;
public abstract class DeserializtionErrorHandlerByBinderTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id", "error.foos.foobar-group",
"error.foos1.fooz-group", "error.foos2.fooz-group");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts-id", "error.foos.foobar-group", "error.foos1.fooz-group",
"error.foos2.fooz-group");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@SpyBean
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate conversionDelegate;
private static Consumer<Integer, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers", embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes", embeddedKafka.getZookeeperConnectionString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes",
embeddedKafka.getZookeeperConnectionString());
System.setProperty("server.port","0");
System.setProperty("spring.jmx.enabled","false");
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foob", "false", embeddedKafka);
//consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, Deserializer.class.getName());
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foob", "false",
embeddedKafka);
// consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
// Deserializer.class.getName());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@@ -94,64 +101,75 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
consumer.close();
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=foos",
@SpringBootTest(properties = { "spring.cloud.stream.bindings.input.destination=foos",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.bindings.output.producer.headerMode=raw",
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deserializationByBinderAndDlqTests",
"spring.cloud.stream.bindings.input.group=foobar-group"},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)
public static class DeserializationByBinderAndDlqTests extends DeserializtionErrorHandlerByBinderTests {
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTests",
"spring.cloud.stream.bindings.input.group=foobar-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqTests
extends DeserializtionErrorHandlerByBinderTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1, "error.foos.foobar-group");
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1,
"error.foos.foobar-group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1, "error.foos.foobar-group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos.foobar-group");
assertThat(cr.value().equals("hello")).isTrue();
//Ensuring that the deserialization was indeed done by the binder
verify(KafkaStreamsMessageConversionDelegate).deserializeOnInbound(any(Class.class), any(KStream.class));
// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=foos1,foos2",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deserializationByBinderAndDlqTestsWithMultipleInputs",
"spring.cloud.stream.bindings.input.group=fooz-group"},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)
public static class DeserializationByBinderAndDlqTestsWithMultipleInputs extends DeserializtionErrorHandlerByBinderTests {
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTestsWithMultipleInputs",
"spring.cloud.stream.bindings.input.group=fooz-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqTestsWithMultipleInputs
extends DeserializtionErrorHandlerByBinderTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos1");
template.sendDefault("hello");
@@ -159,21 +177,28 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
template.setDefaultTopic("foos2");
template.sendDefault("hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar1", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar1",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.foos1.fooz-group", "error.foos2.fooz-group");
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.foos1.fooz-group",
"error.foos2.fooz-group");
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1, "error.foos1.fooz-group");
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos1.fooz-group");
assertThat(cr1.value().equals("hello")).isTrue();
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1, "error.foos2.fooz-group");
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos2.fooz-group");
assertThat(cr2.value().equals("hello")).isTrue();
//Ensuring that the deserialization was indeed done by the binder
verify(KafkaStreamsMessageConversionDelegate).deserializeOnInbound(any(Class.class), any(KStream.class));
// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@@ -183,16 +208,17 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
@StreamListener("input")
@SendTo("output")
public KStream<Integer, Long> process(KStream<Object, Product> input) {
return input
.filter((key, product) -> product.getId() == 123)
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class), new JsonSerde<>(Product.class)))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("id-count-store-x"))
.toStream()
.count(Materialized.as("id-count-store-x")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id, value));
}
}
static class Product {
Integer id;
@@ -204,5 +230,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
public void setId(Integer id) {
this.id = id;
}
}
}


@@ -0,0 +1,319 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.kstream.KStream;
import org.assertj.core.util.Lists;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Arnaud Jardiné
*/
public class KafkaStreamsBinderHealthIndicatorTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"out", "out2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@BeforeClass
public static void setUp() {
System.setProperty("logging.level.org.apache.kafka", "OFF");
}
@Test
public void healthIndicatorUpTest() throws Exception {
try (ConfigurableApplicationContext context = singleStream()) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in", "{\"id\":\"123\"}")),
Status.UP, "out");
}
}
@Test
public void healthIndicatorDownTest() throws Exception {
try (ConfigurableApplicationContext context = singleStream()) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in", "{\"id\":\"124\"}")),
Status.DOWN, "out");
}
}
@Test
public void healthIndicatorUpMultipleKStreamsTest() throws Exception {
try (ConfigurableApplicationContext context = multipleStream()) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in2", "{\"id\":\"123\"}")),
Status.UP, "out", "out2");
}
}
@Test
public void healthIndicatorDownMultipleKStreamsTest() throws Exception {
try (ConfigurableApplicationContext context = multipleStream()) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in2", "{\"id\":\"124\"}")),
Status.DOWN, "out", "out2");
}
}
private static Status getStatusKStream(Map<String, Object> details) {
Health health = (Health) details.get("kstream");
return health != null ? health.getStatus() : Status.DOWN;
}
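// Returns true while the Streams threads are still rebalancing or shutting
// down, so callers keep polling until the health status is stable.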
private static boolean waitFor(Map<String, Object> details) {
Health health = (Health) details.get("kstream");
if (health.getStatus() == Status.UP) {
Map<String, Object> moreDetails = health.getDetails();
Health kStreamHealth = (Health) moreDetails
.get("kafkaStreamsBinderHealthIndicator");
String status = (String) kStreamHealth.getDetails().get("threadState");
return status != null
&& (status.equalsIgnoreCase(KafkaStreams.State.REBALANCING.name())
|| status.equalsIgnoreCase("PARTITIONS_REVOKED")
|| status.equalsIgnoreCase("PARTITIONS_ASSIGNED")
|| status.equalsIgnoreCase(
KafkaStreams.State.PENDING_SHUTDOWN.name()));
}
return false;
}
private void receive(ConfigurableApplicationContext context,
List<ProducerRecord<Integer, String>> records, Status expected,
String... topics) throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id0",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try (Consumer<String, String> consumer = cf.createConsumer()) {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
CountDownLatch latch = new CountDownLatch(records.size());
for (ProducerRecord<Integer, String> record : records) {
ListenableFuture<SendResult<Integer, String>> future = template
.send(record);
future.addCallback(
new ListenableFutureCallback<SendResult<Integer, String>>() {
@Override
public void onFailure(Throwable ex) {
Assert.fail();
}
@Override
public void onSuccess(SendResult<Integer, String> result) {
latch.countDown();
}
});
}
latch.await(5, TimeUnit.SECONDS);
embeddedKafka.consumeFromEmbeddedTopics(consumer, topics);
KafkaTestUtils.getRecords(consumer, 1000);
TimeUnit.SECONDS.sleep(2);
checkHealth(context, expected);
}
finally {
pf.destroy();
}
}
private static void checkHealth(ConfigurableApplicationContext context,
Status expected) throws InterruptedException {
HealthIndicator healthIndicator = context.getBean("bindersHealthIndicator",
HealthIndicator.class);
Health health = healthIndicator.health();
while (waitFor(health.getDetails())) {
TimeUnit.SECONDS.sleep(2);
health = healthIndicator.health();
}
assertThat(health.getStatus()).isEqualTo(expected);
assertThat(getStatusKStream(health.getDetails())).isEqualTo(expected);
}
private ConfigurableApplicationContext singleStream() {
SpringApplication app = new SpringApplication(KStreamApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
return app.run("--server.port=0", "--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=in",
"--spring.cloud.stream.bindings.output.destination=out",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "ApplicationHealthTest-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
}
private ConfigurableApplicationContext multipleStream() {
System.setProperty("logging.level.org.apache.kafka", "OFF");
SpringApplication app = new SpringApplication(AnotherKStreamApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
return app.run("--server.port=0", "--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=in",
"--spring.cloud.stream.bindings.output.destination=out",
"--spring.cloud.stream.bindings.input2.destination=in2",
"--spring.cloud.stream.bindings.output2.destination=out2",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.output2.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "ApplicationHealthTest-xyz",
"--spring.cloud.stream.kafka.streams.bindings.input2.consumer.applicationId="
+ "ApplicationHealthTest2-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class KStreamApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Object, Product> process(KStream<Object, Product> input) {
return input.filter((key, product) -> {
if (product.getId() != 123) {
throw new IllegalArgumentException();
}
return true;
});
}
}
@EnableBinding({ KafkaStreamsProcessor.class, KafkaStreamsProcessorX.class })
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class AnotherKStreamApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Object, Product> process(KStream<Object, Product> input) {
return input.filter((key, product) -> {
if (product.getId() != 123) {
throw new IllegalArgumentException();
}
return true;
});
}
@StreamListener("input2")
@SendTo("output2")
public KStream<Object, Product> process2(KStream<Object, Product> input) {
return input.filter((key, product) -> {
if (product.getId() != 123) {
throw new IllegalArgumentException();
}
return true;
});
}
}
public interface KafkaStreamsProcessorX {
@Input("input2")
KStream<?, ?> input();
@Output("output2")
KStream<?, ?> output();
}
static class Product {
Integer id;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
}
}
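checkHealth(..) above polls the aggregate bindersHealthIndicator until the Streams threads leave transient states such as REBALANCING. A minimal sketch of reading the same nested health details in application code, assuming Spring Boot's standard actuator API (the bean name and detail keys are the ones this test uses):

HealthIndicator binders = context.getBean("bindersHealthIndicator", HealthIndicator.class);
Health health = binders.health();
// details are nested: "kstream" -> "kafkaStreamsBinderHealthIndicator" -> "threadState"
Health kstream = (Health) health.getDetails().get("kstream");
Status status = kstream != null ? kstream.getStatus() : Status.DOWN;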


@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -55,30 +55,34 @@ import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
 * This test case demonstrates a Kafka Streams topology that consumes messages from
 * multiple Kafka topics (destinations).
 *
 * See
 * {@link KafkaStreamsBinderMultipleInputTopicsTest#testKstreamWordCountWithStringInputAndPojoOuput}
 * where the input topic names are specified as comma-separated String values for the
 * property spring.cloud.stream.bindings.input.destination.
 *
 * @author Sarath Shyam
 */
public class KafkaStreamsBinderMultipleInputTopicsTest {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
}
@@ -90,7 +94,8 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
@Test
public void testKstreamWordCountWithStringInputAndPojoOuput() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
@@ -99,45 +104,48 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=WordCountProcessorApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountProcessorApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidate(context);
} finally {
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words1");
template.sendDefault("foobar1");
template.setDefaultTopic("words2");
template.sendDefault("foobar2");
//Sleep a bit so that both the messages are processed before reading from the output topic.
//Else assertions might fail arbitrarily.
// Sleep a bit so that both messages are processed before reading from the
// output topic; otherwise assertions might fail arbitrarily.
Thread.sleep(5000);
ConsumerRecords<String, String> received = KafkaTestUtils.getRecords(consumer);
ConsumerRecords<String, String> received = KafkaTestUtils.getRecords(consumer);
List<String> wordCounts = new ArrayList<>(2);
received.records("counts").forEach((consumerRecord) -> {
wordCounts.add((consumerRecord.value()));
});
received.records("counts")
.forEach((consumerRecord) -> wordCounts.add((consumerRecord.value())));
System.out.println(wordCounts);
assertThat(wordCounts.contains("{\"word\":\"foobar1\",\"count\":1}")).isTrue();
assertThat(wordCounts.contains("{\"word\":\"foobar2\",\"count\":1}")).isTrue();
}
@EnableBinding(KafkaStreamsProcessor.class)
@@ -145,22 +153,22 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
@StreamListener
@SendTo("output")
public KStream<?, WordCount> process(@Input("input") KStream<Object, String> input) {
public KStream<?, WordCount> process(
@Input("input") KStream<Object, String> input) {
input.map((k,v) -> {
input.map((k, v) -> {
System.out.println(k);
System.out.println(v);
return new KeyValue<>(k,v);
return new KeyValue<>(k, v);
});
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.count(Materialized.as("WordCounts"))
.toStream()
.count(Materialized.as("WordCounts")).toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key, value)));
}
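The class Javadoc above notes that a single input binding can consume from several topics when the destination is given as a comma-separated list. A hedged sketch of that configuration, assuming the two topics that receiveAndValidate(..) writes to:

ConfigurableApplicationContext context = app.run(
        // one binding, two source topics (illustrative; value inferred from the test's producers)
        "--spring.cloud.stream.bindings.input.destination=words1,words2");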


@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -57,18 +57,23 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<Integer, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id", "false", embeddedKafka);
//consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, Deserializer.class.getName());
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id",
"false", embeddedKafka);
// consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
// Deserializer.class.getName());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@@ -87,26 +92,36 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidateFoo(context);
} finally {
}
finally {
context.close();
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context) throws Exception {
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("{\"id\":\"123\"}");
ConsumerRecord<Integer, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-id");
ConsumerRecord<Integer, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-id");
assertThat(cr.key()).isEqualTo(123);
ObjectMapper om = new ObjectMapper();
@@ -121,15 +136,15 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
@StreamListener("input")
@SendTo("output")
public KStream<Integer, Long> process(KStream<Object, Product> input) {
return input
.filter((key, product) -> product.getId() == 123)
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class), new JsonSerde<>(Product.class)))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("id-count-store-x"))
.toStream()
.count(Materialized.as("id-count-store-x")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id, value));
}
}
static class Product {
@@ -143,5 +158,7 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
public void setId(Integer id) {
this.id = id;
}
}
}


@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -73,17 +73,21 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderWordCountIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
}
@@ -94,8 +98,10 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
}
@Test
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer()
throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
@@ -105,21 +111,25 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
receiveAndValidate(context);
}
}
@Test
public void testKstreamWordCountWithInputBindingLevelApplicationId() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
public void testKstreamWordCountWithInputBindingLevelApplicationId()
throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
@@ -129,43 +139,55 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.bindings.input.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
receiveAndValidate(context);
//Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-WordCountProcessorApplication-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
ReadOnlyWindowStore<Object, Object> store = kafkaStreams.store("foo-WordCounts", QueryableStoreTypes.windowStore());
ReadOnlyWindowStore<Object, Object> store = kafkaStreams
.store("foo-WordCounts", QueryableStoreTypes.windowStore());
assertThat(store).isNotNull();
Map streamConfigGlobalProperties = context.getBean("streamConfigGlobalProperties", Map.class);
Map streamConfigGlobalProperties = context
.getBean("streamConfigGlobalProperties", Map.class);
//Ensure that concurrency settings are mapped to number of stream task threads in Kafka Streams.
final Integer concurrency = (Integer)streamConfigGlobalProperties.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
// Ensure that concurrency settings are mapped to number of stream task
// threads in Kafka Streams.
final Integer concurrency = (Integer) streamConfigGlobalProperties
.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(2);
sendTombStoneRecordsAndVerifyGracefulHandling();
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean, "cleanupConfig",
CleanupConfig.class);
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean,
"cleanupConfig", CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isTrue();
assertThat(cleanup.cleanupOnStop()).isFalse();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
finally {
@@ -175,14 +197,17 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
private void sendTombStoneRecordsAndVerifyGracefulHandling() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault(null);
ConsumerRecords<String, String> received = consumer.poll(Duration.ofMillis(5000));
//By asserting that the received record is empty, we are ensuring that the tombstone record
//was handled by the binder gracefully.
ConsumerRecords<String, String> received = consumer
.poll(Duration.ofMillis(5000));
// By asserting that the received records are empty, we ensure that the
// tombstone record was handled gracefully by the binder.
assertThat(received.isEmpty()).isTrue();
}
finally {
@@ -200,16 +225,20 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
@StreamListener
@SendTo("output")
public KStream<?, WordCount> process(@Input("input") KStream<Object, String> input) {
public KStream<?, WordCount> process(
@Input("input") KStream<Object, String> input) {
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(timeWindows)
.count(Materialized.as("foo-WordCounts"))
.windowedBy(timeWindows).count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))));
}
@Bean
@@ -267,6 +296,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
public void setEnd(Date end) {
this.end = end;
}
}
}
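The assertions above show the binder mapping the generic consumer.concurrency setting onto Kafka Streams' task-thread count. A sketch of the equivalent plain Kafka Streams configuration (the StreamsConfig constant is real; the surrounding code is illustrative):

Properties streamsProps = new Properties();
// equivalent of --spring.cloud.stream.bindings.input.consumer.concurrency=2
streamsProps.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);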


@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -62,17 +62,21 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsInteractiveQueryIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@@ -91,40 +95,55 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=ProductCountApplication-abc",
"--spring.cloud.stream.kafka.streams.binder.configuration.application.server=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.configuration.application.server"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidateFoo(context);
} finally {
}
finally {
context.close();
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context) throws Exception {
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("{\"id\":\"123\"}");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-id");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-id");
assertThat(cr.value().contains("Count for product with ID 123: 1")).isTrue();
ProductCountApplication.Foo foo = context.getBean(ProductCountApplication.Foo.class);
ProductCountApplication.Foo foo = context
.getBean(ProductCountApplication.Foo.class);
assertThat(foo.getProductStock(123)).isEqualTo(1L);
//perform assertions on HostInfo related methods in InteractiveQueryService
InteractiveQueryService interactiveQueryService = context.getBean(InteractiveQueryService.class);
// perform assertions on HostInfo related methods in InteractiveQueryService
InteractiveQueryService interactiveQueryService = context
.getBean(InteractiveQueryService.class);
HostInfo currentHostInfo = interactiveQueryService.getCurrentHostInfo();
assertThat(currentHostInfo.host() + ":" + currentHostInfo.port()).isEqualTo(embeddedKafka.getBrokersAsString());
assertThat(currentHostInfo.host() + ":" + currentHostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfo = interactiveQueryService.getHostInfo("prod-id-count-store", 123, new IntegerSerializer());
assertThat(hostInfo.host() + ":" + hostInfo.port()).isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfo = interactiveQueryService.getHostInfo("prod-id-count-store",
123, new IntegerSerializer());
assertThat(hostInfo.host() + ":" + hostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
HostInfo hostInfoFoo = interactiveQueryService.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer());
HostInfo hostInfoFoo = interactiveQueryService
.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer());
assertThat(hostInfoFoo).isNull();
}
@@ -137,13 +156,13 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
@SuppressWarnings("deprecation")
public KStream<?, String> process(KStream<Object, Product> input) {
return input
.filter((key, product) -> product.getId() == 123)
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value.id, value))
.groupByKey(Serialized.with(new Serdes.IntegerSerde(), new JsonSerde<>(Product.class)))
.count(Materialized.as("prod-id-count-store"))
.toStream()
.map((key, value) -> new KeyValue<>(null, "Count for product with ID 123: " + value));
.groupByKey(Serialized.with(new Serdes.IntegerSerde(),
new JsonSerde<>(Product.class)))
.count(Materialized.as("prod-id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(null,
"Count for product with ID 123: " + value));
}
@Bean
@@ -152,6 +171,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
static class Foo {
InteractiveQueryService interactiveQueryService;
Foo(InteractiveQueryService interactiveQueryService) {
@@ -159,12 +179,15 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
public Long getProductStock(Integer id) {
ReadOnlyKeyValueStore<Object, Object> keyValueStore =
interactiveQueryService.getQueryableStore("prod-id-count-store", QueryableStoreTypes.keyValueStore());
ReadOnlyKeyValueStore<Object, Object> keyValueStore = interactiveQueryService
.getQueryableStore("prod-id-count-store",
QueryableStoreTypes.keyValueStore());
return (Long) keyValueStore.get(id);
}
}
}
static class Product {
@@ -178,5 +201,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
public void setId(Integer id) {
this.id = id;
}
}
}
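A hedged sketch combining the InteractiveQueryService calls verified above: in a multi-instance deployment, first locate the host that owns the key, then query the local store only when that host is the current instance (the method names are those exercised by the test; the routing logic itself is illustrative):

HostInfo host = interactiveQueryService.getHostInfo("prod-id-count-store", 123, new IntegerSerializer());
if (host.equals(interactiveQueryService.getCurrentHostInfo())) {
    ReadOnlyKeyValueStore<Object, Object> store = interactiveQueryService
            .getQueryableStore("prod-id-count-store", QueryableStoreTypes.keyValueStore());
    Long count = (Long) store.get(123); // a key can only be read on its owning instance
}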


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -69,26 +69,32 @@ import static org.mockito.Mockito.verify;
public abstract class KafkaStreamsNativeEncodingDecodingTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@SpyBean
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate conversionDelegate;
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers", embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes", embeddedKafka.getZookeeperConnectionString());
public static void setUp() {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes",
embeddedKafka.getZookeeperConnectionString());
System.setProperty("server.port","0");
System.setProperty("spring.jmx.enabled","false");
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
}
@@ -101,44 +107,54 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=NativeEncodingDecodingEnabledTests-abc"
},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)
public static class NativeEncodingDecodingEnabledTests extends KafkaStreamsNativeEncodingDecodingTests {
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-abc" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class NativeEncodingDecodingEnabledTests
extends KafkaStreamsNativeEncodingDecodingTests {
@Test
public void test() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value().equals("Count for foobar : 1")).isTrue();
verify(KafkaStreamsMessageConversionDelegate, never()).serializeOnOutbound(any(KStream.class));
verify(KafkaStreamsMessageConversionDelegate, never()).deserializeOnInbound(any(Class.class), any(KStream.class));
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@SpringBootTest(webEnvironment= SpringBootTest.WebEnvironment.NONE,
properties = "spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=NativeEncodingDecodingEnabledTests-xyz")
public static class NativeEncodingDecodingDisabledTests extends KafkaStreamsNativeEncodingDecodingTests {
// @checkstyle:off
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = "spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-xyz")
// @checkstyle:on
public static class NativeEncodingDecodingDisabledTests
extends KafkaStreamsNativeEncodingDecodingTests {
@Test
public void test() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value().equals("Count for foobar : 1")).isTrue();
verify(KafkaStreamsMessageConversionDelegate).serializeOnOutbound(any(KStream.class));
verify(KafkaStreamsMessageConversionDelegate).deserializeOnInbound(any(Class.class), any(KStream.class));
verify(conversionDelegate).serializeOnOutbound(any(KStream.class));
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@@ -155,14 +171,15 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
public KStream<?, String> process(KStream<Object, String> input) {
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(timeWindows)
.count(Materialized.as("foo-WordCounts-x"))
.toStream()
.map((key, value) -> new KeyValue<>(null, "Count for " + key.key() + " : " + value));
.windowedBy(timeWindows).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}
}
}
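As the Mockito verifications above demonstrate, native encoding/decoding bypasses the binder's conversion delegate entirely and leaves (de)serialization to the configured Serdes. A sketch of the two properties that flip that behavior, using the values from the enabled variant of this test:

ConfigurableApplicationContext context = app.run(
        "--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
        "--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true");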


@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,7 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
import org.apache.kafka.streams.kstream.KStream;
@@ -50,9 +49,11 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsStateStoreIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testKstreamStateStore() throws Exception {
@@ -62,31 +63,75 @@ public class KafkaStreamsStateStoreIntegrationTests {
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=foobar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=KafkaStreamsStateStoreIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=KafkaStreamsStateStoreIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
try {
Thread.sleep(2000);
receiveAndValidateFoo(context);
} catch (Exception e) {
}
catch (Exception e) {
throw e;
} finally {
}
finally {
context.close();
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context) throws Exception {
@Test
public void testSameStateStoreIsCreatedOnlyOnceWhenMultipleInputBindingsArePresent() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplicationWithMultipleInputBindings.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=foobar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input1.consumer.applicationId"
+ "=KafkaStreamsStateStoreIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
try {
Thread.sleep(2000);
// We are not particularly interested in querying the state store here; that is
// verified by the other test in this class. This test verifies that the binder
// does not try to create the same store once per input binding, which would
// normally cause an exception. Since no exception is thrown, we know the
// binder handles the shared store appropriately.
// For more info, see this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/551
}
catch (Exception e) {
throw e;
}
finally {
context.close();
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foobar");
template.sendDefault("{\"id\":\"123\"}");
Thread.sleep(1000);
//assertions
ProductCountApplication productCount = context.getBean(ProductCountApplication.class);
// assertions
ProductCountApplication productCount = context
.getBean(ProductCountApplication.class);
WindowStore<Object, String> state = productCount.state;
assertThat(state != null).isTrue();
assertThat(state.name()).isEqualTo("mystate");
@@ -99,33 +144,71 @@ public class KafkaStreamsStateStoreIntegrationTests {
public static class ProductCountApplication {
WindowStore<Object, String> state;
boolean processed;
@StreamListener("input")
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
@SuppressWarnings({"deprecation", "unchecked"})
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(KStream<Object, Product> input) {
input.process(() -> new Processor<Object, Product>() {
@Override
public void init(ProcessorContext processorContext) {
state = (WindowStore) processorContext.getStateStore("mystate");
}
@Override
public void process(Object s, Product product) {
processed = true;
}
@Override
public void close() {
if (state != null) {
state.close();
}
}
}, "mystate");
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class ProductCountApplicationWithMultipleInputBindings {
WindowStore<Object, String> state;
boolean processed;
@StreamListener
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(@Input("input1") KStream<Object, Product> input,
@Input("input2") KStream<Object, Product> input2) {
input.process(() -> new Processor<Object, Product>() {
@Override
public void init(ProcessorContext processorContext) {
state = (WindowStore) processorContext.getStateStore("mystate");
}
@Override
public void process(Object s, Product product) {
processed = true;
}
@Override
public void close() {
if (state != null) {
state.close();
}
}
}, "mystate");
// Simple use of input2; we use it only to trigger the multi-input binding behavior under test.
input2.foreach((key, value) -> { });
}
}
@@ -140,6 +223,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
public void setId(Integer id) {
this.id = id;
}
}
interface KafkaStreamsProcessorX {
@@ -147,4 +231,13 @@ public class KafkaStreamsStateStoreIntegrationTests {
@Input("input")
KStream<?, ?> input();
}
interface KafkaStreamsProcessorY {
@Input("input1")
KStream<?, ?> input1();
@Input("input2")
KStream<?, ?> input2();
}
}
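A minimal sketch of reading back from the window store captured in init(..) above; productKey is a hypothetical key, and the 300000 ms range matches the lengthMs declared on @KafkaStreamsStateStore:

long now = System.currentTimeMillis();
try (WindowStoreIterator<String> it = state.fetch(productKey, now - 300000L, now)) {
    while (it.hasNext()) {
        KeyValue<Long, String> entry = it.next(); // entry.key is the window start timestamp
    }
}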


@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -60,17 +60,21 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@@ -89,19 +93,24 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=ProductCountApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidateFoo(context);
//Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process",
StreamsBuilderFactoryBean.class);
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean, "cleanupConfig",
CleanupConfig.class);
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-ProductCountApplication-process", StreamsBuilderFactoryBean.class);
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean,
"cleanupConfig", CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isFalse();
assertThat(cleanup.cleanupOnStop()).isTrue();
}
@@ -110,13 +119,16 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context) throws Exception {
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("{\"id\":\"123\"}");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-id");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-id");
assertThat(cr.value().contains("Count for product with ID 123: 1")).isTrue();
}
@@ -128,15 +140,16 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
@SendTo("output")
public KStream<Integer, String> process(KStream<Object, Product> input) {
return input
.filter((key, product) -> product.getId() == 123)
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class), new JsonSerde<>(Product.class)))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("id-count-store"))
.toStream()
.map((key, value) -> new KeyValue<>(key.key().id, "Count for product with ID 123: " + value));
.count(Materialized.as("id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id,
"Count for product with ID 123: " + value));
}
}
static class Product {
@@ -150,5 +163,7 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
public void setId(Integer id) {
this.id = id;
}
}
}

View File

@@ -0,0 +1,105 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.stereotype.Component;
import static org.assertj.core.api.Assertions.assertThat;
public class MultiProcessorsWithSameNameTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testBinderStartsSuccessfullyWhenTwoProcessorsWithSameNamesArePresent() {
SpringApplication app = new SpringApplication(
MultiProcessorsWithSameNameTests.WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.input-2.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
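// The generated factory-bean names embed the declaring class (Foo/Bar),
// which is how the binder keeps two @StreamListener methods named
// "process" apart.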
StreamsBuilderFactoryBean streamsBuilderFactoryBean1 = context
.getBean("&stream-builder-Foo-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean1).isNotNull();
StreamsBuilderFactoryBean streamsBuilderFactoryBean2 = context
.getBean("&stream-builder-Bar-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean2).isNotNull();
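// A hypothetical follow-up check could fetch the underlying KafkaStreams
// instance via streamsBuilderFactoryBean2.getKafkaStreams() once the
// factory bean has been started.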
}
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
@Component
static class Foo {
@StreamListener
public void process(@Input("input-1") KStream<Object, String> input) {
}
}
// Second class with a stub processor that has the same name as above ("process")
@Component
static class Bar {
@StreamListener
public void process(@Input("input-2") KStream<Object, String> input) {
}
}
}
interface KafkaStreamsProcessorX {
@Input("input-1")
KStream<?, ?> input1();
@Input("input-2")
KStream<?, ?> input2();
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -66,21 +66,26 @@ import static org.assertj.core.api.Assertions.assertThat;
public class PerRecordAvroContentTypeTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "received-sensors");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"received-sensors");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, byte[]> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("avro-ct-test", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("avro-ct-test",
"false", embeddedKafka);
//Receive the data as byte[]
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
// Receive the data as byte[]
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, byte[]> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, byte[]> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "received-sensors");
}
@@ -102,12 +107,15 @@ public class PerRecordAvroContentTypeTests {
"--spring.cloud.stream.bindings.output.contentType=application/avro",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=per-record-avro-contentType-test",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
//Use a custom avro test serializer
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, TestAvroSerializer.class);
DefaultKafkaProducerFactory<Integer, Sensor> pf = new DefaultKafkaProducerFactory<>(senderProps);
// Use a custom avro test serializer
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
TestAvroSerializer.class);
DefaultKafkaProducerFactory<Integer, Sensor> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, Sensor> template = new KafkaTemplate<>(pf, true);
@@ -117,26 +125,32 @@ public class PerRecordAvroContentTypeTests {
sensor.setAcceleration(random.nextFloat() * 10);
sensor.setVelocity(random.nextFloat() * 100);
sensor.setTemperature(random.nextFloat() * 50);
//Send with avro content type set.
// Send with avro content type set.
Message<?> message = MessageBuilder.withPayload(sensor)
.setHeader("contentType", "application/avro")
.build();
.setHeader("contentType", "application/avro").build();
template.setDefaultTopic("sensors");
template.send(message);
//Serialized byte[] ^^ is received by the binding process and deserialized using the avro converter.
//Then finally, the data will be output to a return topic as byte[] (using the same avro converter).
// Serialized byte[] ^^ is received by the binding process and deserialized
// using the avro converter.
// Then finally, the data will be output to a return topic as byte[]
// (using the same avro converter).
//Receive the byte[] from return topic
ConsumerRecord<String, byte[]> cr = KafkaTestUtils.getSingleRecord(consumer, "received-sensors");
// Receive the byte[] from return topic
ConsumerRecord<String, byte[]> cr = KafkaTestUtils
.getSingleRecord(consumer, "received-sensors");
final byte[] value = cr.value();
//Convert the byte[] received back to avro object and verify that it is the same as the one we sent ^^.
// Convert the byte[] received back to avro object and verify that it is
// the same as the one we sent ^^.
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter();
Message<?> receivedMessage = MessageBuilder.withPayload(value)
.setHeader("contentType", MimeTypeUtils.parseMimeType("application/avro")).build();
Sensor messageConverted = (Sensor)avroSchemaMessageConverter.fromMessage(receivedMessage, Sensor.class);
.setHeader("contentType",
MimeTypeUtils.parseMimeType("application/avro"))
.build();
Sensor messageConverted = (Sensor) avroSchemaMessageConverter
.fromMessage(receivedMessage, Sensor.class);
assertThat(messageConverted).isEqualTo(sensor);
}
finally {
@@ -152,7 +166,8 @@ public class PerRecordAvroContentTypeTests {
@StreamListener
@SendTo("output")
public KStream<?, Sensor> process(@Input("input") KStream<Object, Sensor> input) {
//return the same Sensor object unchanged so that we can do test verifications
// return the same Sensor object unchanged so that we can do test
// verifications
return input.map(KeyValue::new);
}
@@ -161,5 +176,7 @@ public class PerRecordAvroContentTypeTests {
public MessageConverter sensorMessageConverter() throws IOException {
return new AvroSchemaMessageConverter();
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -62,52 +62,18 @@ import static org.assertj.core.api.Assertions.assertThat;
public class StreamToGlobalKTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "enriched-order");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"enriched-order");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<Long, EnrichedOrder> consumer;
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("input-x")
GlobalKTable<?, ?> inputX();
@Input("input-y")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {
@StreamListener
@SendTo("output")
public KStream<Long, EnrichedOrder> process(@Input("input") KStream<Long, Order> ordersStream,
@Input("input-x") GlobalKTable<Long, Customer> customers,
@Input("input-y") GlobalKTable<Long, Product> products) {
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(customers,
(orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order));
return customerOrdersStream.join(products,
(orderId, customerOrder) -> customerOrder
.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
});
}
}
@Test
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
SpringApplication app = new SpringApplication(
StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
@@ -119,27 +85,49 @@ public class StreamToGlobalKTableJoinIntegrationTests {
"--spring.cloud.stream.bindings.input-x.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-y.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$OrderSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$CustomerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$ProductSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$EnrichedOrderSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams.integration"
+ ".StreamToGlobalKTableJoinIntegrationTests$OrderSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams.integration."
+ "StreamToGlobalKTableJoinIntegrationTests$CustomerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams."
+ "integration.StreamToGlobalKTableJoinIntegrationTests$ProductSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams.integration"
+ ".StreamToGlobalKTableJoinIntegrationTests$EnrichedOrderSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=StreamToGlobalKTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToGlobalKTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
Map<String, Object> senderPropsCustomer = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
CustomerSerde customerSerde = new CustomerSerde();
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, customerSerde.serializer().getClass());
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
customerSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(senderPropsCustomer);
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer, true);
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(
senderPropsCustomer);
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer,
true);
template.setDefaultTopic("customers");
for (long i = 0; i < 5; i++) {
final Customer customer = new Customer();
@@ -147,13 +135,18 @@ public class StreamToGlobalKTableJoinIntegrationTests {
template.sendDefault(i, customer);
}
Map<String, Object> senderPropsProduct = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
Map<String, Object> senderPropsProduct = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
ProductSerde productSerde = new ProductSerde();
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, productSerde.serializer().getClass());
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
productSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(senderPropsProduct);
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct, true);
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(
senderPropsProduct);
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct,
true);
productTemplate.setDefaultTopic("products");
for (long i = 0; i < 5; i++) {
@@ -162,12 +155,16 @@ public class StreamToGlobalKTableJoinIntegrationTests {
productTemplate.sendDefault(i, product);
}
Map<String, Object> senderPropsOrder = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
Map<String, Object> senderPropsOrder = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
OrderSerde orderSerde = new OrderSerde();
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, orderSerde.serializer().getClass());
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
orderSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(senderPropsOrder);
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(
senderPropsOrder);
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
orderTemplate.setDefaultTopic("orders");
@@ -178,13 +175,19 @@ public class StreamToGlobalKTableJoinIntegrationTests {
orderTemplate.sendDefault(i, order);
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
EnrichedOrderSerde enrichedOrderSerde = new EnrichedOrderSerde();
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, enrichedOrderSerde.deserializer().getClass());
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
enrichedOrderSerde.deserializer().getClass());
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"org.springframework.cloud.stream.binder.kafka.streams.integration."
+ "StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "enriched-order");
@@ -193,12 +196,14 @@ public class StreamToGlobalKTableJoinIntegrationTests {
long start = System.currentTimeMillis();
List<KeyValue<Long, EnrichedOrder>> enrichedOrders = new ArrayList<>();
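// Poll until all five enriched orders arrive or 30 seconds elapse.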
do {
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils.getRecords(consumer);
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<Long, EnrichedOrder> record : records) {
enrichedOrders.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < 5 && (System.currentTimeMillis() - start) < 30000);
}
while (count < 5 && (System.currentTimeMillis() - start) < 30000);
assertThat(count == 5).isTrue();
assertThat(enrichedOrders.size() == 5).isTrue();
@@ -206,13 +211,16 @@ public class StreamToGlobalKTableJoinIntegrationTests {
enrichedOrders.sort(Comparator.comparing(o -> o.key));
for (int i = 0; i < 5; i++) {
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders.get(i);
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders
.get(i);
assertThat(enrichedOrderKeyValue.key == i).isTrue();
EnrichedOrder enrichedOrder = enrichedOrderKeyValue.value;
assertThat(enrichedOrder.getOrder().customerId == i).isTrue();
assertThat(enrichedOrder.getOrder().productId == i).isTrue();
assertThat(enrichedOrder.getCustomer().name.equals("customer-" + i)).isTrue();
assertThat(enrichedOrder.getProduct().name.equals("product-" + i)).isTrue();
assertThat(enrichedOrder.getCustomer().name.equals("customer-" + i))
.isTrue();
assertThat(enrichedOrder.getProduct().name.equals("product-" + i))
.isTrue();
}
pfCustomer.destroy();
pfProduct.destroy();
@@ -222,9 +230,49 @@ public class StreamToGlobalKTableJoinIntegrationTests {
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("input-x")
GlobalKTable<?, ?> inputX();
@Input("input-y")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {
@StreamListener
@SendTo("output")
public KStream<Long, EnrichedOrder> process(
@Input("input") KStream<Long, Order> ordersStream,
@Input("input-x") GlobalKTable<Long, Customer> customers,
@Input("input-y") GlobalKTable<Long, Product> products) {
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(
customers, (orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order));
return customerOrdersStream.join(products,
(orderId, customerOrder) -> customerOrder.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
});
}
}
static class Order {
long customerId;
long productId;
public long getCustomerId() {
@@ -242,6 +290,7 @@ public class StreamToGlobalKTableJoinIntegrationTests {
public void setProductId(long productId) {
this.productId = productId;
}
}
static class Customer {
@@ -255,6 +304,7 @@ public class StreamToGlobalKTableJoinIntegrationTests {
public void setName(String name) {
this.name = name;
}
}
static class Product {
@@ -268,12 +318,15 @@ public class StreamToGlobalKTableJoinIntegrationTests {
public void setName(String name) {
this.name = name;
}
}
static class EnrichedOrder {
Product product;
Customer customer;
Order order;
public Product getProduct() {
@@ -299,10 +352,13 @@ public class StreamToGlobalKTableJoinIntegrationTests {
public void setOrder(Order order) {
this.order = order;
}
}
private static class CustomerOrder {
private final Customer customer;
private final Order order;
CustomerOrder(final Customer customer, final Order order) {
@@ -313,10 +369,23 @@ public class StreamToGlobalKTableJoinIntegrationTests {
long productId() {
return order.getProductId();
}
}
public static class OrderSerde extends JsonSerde<Order> {
}
public static class CustomerSerde extends JsonSerde<Customer> {
}
public static class ProductSerde extends JsonSerde<Product> {
}
public static class EnrichedOrderSerde extends JsonSerde<EnrichedOrder> {
}
public static class OrderSerde extends JsonSerde<Order> {}
public static class CustomerSerde extends JsonSerde<Customer> {}
public static class ProductSerde extends JsonSerde<Product> {}
public static class EnrichedOrderSerde extends JsonSerde<EnrichedOrder> {}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -65,47 +65,28 @@ import static org.assertj.core.api.Assertions.assertThat;
public class StreamToTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "output-topic-1", "output-topic-2");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"output-topic-1", "output-topic-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class CountClicksPerRegionApplication {
@StreamListener
@SendTo("output")
public KStream<String, Long> process(@Input("input") KStream<String, Long> userClicksStream,
@Input("input-x") KTable<String, String> userRegionsTable) {
return userClicksStream
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ? "UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(), regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce((firstClicks, secondClicks) -> firstClicks + secondClicks)
.toStream();
}
}
interface KafkaStreamsProcessorX extends KafkaStreamsProcessor {
@Input("input-x")
KTable<?, ?> inputX();
}
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testStreamToTable() throws Exception {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-1", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-1",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
@@ -117,35 +98,47 @@ public class StreamToTableJoinIntegrationTests {
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-x.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
new KeyValue<>("alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"),
new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"),
new KeyValue<>("alice", "europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"),
new KeyValue<>("fang", "asia")
);
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(senderProps1);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-1");
@@ -155,21 +148,19 @@ public class StreamToTableJoinIntegrationTests {
// Input 2: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L),
new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L),
new KeyValue<>("alice", 40L),
new KeyValue<>("fang", 99L)
);
new KeyValue<>("alice", 13L), new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L), new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L), new KeyValue<>("eve", 78L),
new KeyValue<>("alice", 40L), new KeyValue<>("fang", 99L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-1");
@@ -178,22 +169,24 @@ public class StreamToTableJoinIntegrationTests {
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L),
new KeyValue<>("europe", 109L),
new KeyValue<>("asia", 124L)
);
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 109L),
new KeyValue<>("asia", 124L));
//Verify that we receive the expected data
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
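// Poll until all expected region counts arrive or 30 seconds elapse.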
do {
ConsumerRecords<String, Long> records = KafkaTestUtils.getRecords(consumer);
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion.add(new KeyValue<>(record.key(), record.value()));
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < expectedClicksPerRegion.size() && (System.currentTimeMillis() - start) < 30000);
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
@@ -204,16 +197,22 @@ public class StreamToTableJoinIntegrationTests {
}
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest() throws Exception {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest()
throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-2");
@@ -221,30 +220,27 @@ public class StreamToTableJoinIntegrationTests {
// binding (which is set to earliest below).
// Input 1: Clicks per user (multiple records allowed per user).
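// Ten "alice" records of 100 clicks each, published before the
// application (and hence its input consumer) starts.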
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L)
);
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(senderProps);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-2");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
//Thread.sleep(10000L);
// Thread.sleep(10000L);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-2",
@@ -255,36 +251,47 @@ public class StreamToTableJoinIntegrationTests {
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=helloxyz-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
Thread.sleep(1000L);
// Input 2: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
new KeyValue<>("alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"),
new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"),
new KeyValue<>("alice", "europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"),
new KeyValue<>("fang", "asia")
);
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(senderProps1);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-2");
@@ -292,45 +299,41 @@ public class StreamToTableJoinIntegrationTests {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks1 = Arrays.asList(
new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L),
new KeyValue<>("fang", 99L)
);
new KeyValue<>("bob", 4L), new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L), new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L), new KeyValue<>("fang", 99L));
for (KeyValue<String, Long> keyValue : userClicks1) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L),
new KeyValue<>("europe", 56L),
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 56L),
new KeyValue<>("asia", 124L),
//1000 clicks from the alice records which were there in the topic before the consumer started.
//Since we set the startOffset to earliest for the topic, it will read them,
//but the join fails to associate them with a valid region, thus UNKNOWN.
new KeyValue<>("UNKNOWN", 1000L)
);
// 1000 clicks from the alice records which were there in the topic
// before the consumer started.
// Since we set the startOffset to earliest for the topic, it will
// read them, but the join fails to associate them with a valid
// region, thus UNKNOWN.
new KeyValue<>("UNKNOWN", 1000L));
//Verify that we receive the expected data
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils.getRecords(consumer);
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion.add(new KeyValue<>(record.key(), record.value()));
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < expectedClicksPerRegion.size() && (System.currentTimeMillis() - start) < 30000);
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
assertThat(count).isEqualTo(expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
@@ -340,12 +343,76 @@ public class StreamToTableJoinIntegrationTests {
}
}
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(
TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
// All we are verifying is that this application didn't throw any errors.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class CountClicksPerRegionApplication {
@StreamListener
@SendTo("output")
public KStream<String, Long> process(
@Input("input") KStream<String, Long> userClicksStream,
@Input("input-x") KTable<String, String> userRegionsTable) {
return userClicksStream
.leftJoin(userRegionsTable,
(clicks, region) -> new RegionWithClicks(
region == null ? "UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(
regionWithClicks.getRegion(), regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce((firstClicks, secondClicks) -> firstClicks + secondClicks)
.toStream();
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class TrivialKTableApp {
@StreamListener("input-y")
public void process(KTable<String, String> inputTable) {
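// Consumes the KTable imperatively (non-declaratively); the binder
// must still materialize it without errors.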
inputTable.toStream().foreach((key, value) -> System.out
.println("key : value " + key + " : " + value));
}
}
interface KafkaStreamsProcessorX extends KafkaStreamsProcessor {
@Input("input-x")
KTable<?, ?> inputX();
}
interface KafkaStreamsProcessorY {
@Input("input-y")
KTable<?, ?> inputY();
}
/**
* Tuple for a region and its associated number of clicks.
*/
private static final class RegionWithClicks {
private final String region;
private final long clicks;
RegionWithClicks(String region, long clicks) {
@@ -368,4 +435,5 @@ public class StreamToTableJoinIntegrationTests {
}
}
}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -62,17 +62,21 @@ import static org.assertj.core.api.Assertions.assertThat;
public class WordCountMultipleBranchesIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts","foo","bar");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "foo", "bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "foo", "bar");
}
@@ -82,6 +86,66 @@ public class WordCountMultipleBranchesIntegrationTests {
consumer.close();
}
@Test
public void testKstreamWordCountWithStringInputAndPojoOuput() throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output1.contentType=application/json",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output2.contentType=application/json",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.bindings.output3.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidate(context);
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value().contains("\"word\":\"english\",\"count\":1")).isTrue();
template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value().contains("\"word\":\"french\",\"count\":2")).isTrue();
template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value().contains("\"word\":\"spanish\",\"count\":3")).isTrue();
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
@@ -91,23 +155,26 @@ public class WordCountMultipleBranchesIntegrationTests {
private TimeWindows timeWindows;
@StreamListener("input")
@SendTo({"output1","output2","output3"})
@SendTo({ "output1", "output2", "output3" })
@SuppressWarnings("unchecked")
public KStream<?, WordCount>[] process(KStream<Object, String> input) {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
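// Route each windowed count to one of the three outputs based on the
// matching predicate.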
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(timeWindows)
.count(Materialized.as("WordCounts-multi"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))))
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value).windowedBy(timeWindows)
.count(Materialized.as("WordCounts-multi")).toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
}
interface KStreamProcessorX {
@@ -123,56 +190,7 @@ public class WordCountMultipleBranchesIntegrationTests {
@Output("output3")
KStream<?, ?> output3();
}
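The hunk header above truncates the binding interface, so only its last output is visible. For orientation, a hedged reconstruction of the full interface shape; the input and first two outputs are assumptions inferred from the @StreamListener("input") and @SendTo declarations in the test, not the original source:
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
// Hedged sketch: the "input", "output1" and "output2" bindings are
// inferred from the test methods above, not shown in this diff.
interface KStreamProcessorXSketch {
    @Input("input")
    KStream<?, ?> input();
    @Output("output1")
    KStream<?, ?> output1();
    @Output("output2")
    KStream<?, ?> output2();
    @Output("output3")
    KStream<?, ?> output3();
}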
@Test
public void testKstreamWordCountWithStringInputAndPojoOuput() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output1.contentType=application/json",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output2.contentType=application/json",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.bindings.output3.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidate(context);
} finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
assertThat(cr.value().contains("\"word\":\"english\",\"count\":1")).isTrue();
template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value().contains("\"word\":\"french\",\"count\":2")).isTrue();
template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value().contains("\"word\":\"spanish\",\"count\":3")).isTrue();
}
static class WordCount {
@@ -223,6 +241,7 @@ public class WordCountMultipleBranchesIntegrationTests {
public void setEnd(Date end) {
this.end = end;
}
}
}
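A note on the branching contract this class exercises: KStream#branch returns an array of streams in the order its predicates are declared, and the binder maps that array positionally onto the @SendTo outputs. A minimal sketch of the same mapping outside the binder (topic names are hypothetical stand-ins for the bound destinations):
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
class BranchOrderSketch {
    @SuppressWarnings("unchecked")
    static void buildTopology(StreamsBuilder builder) {
        KStream<Object, String> words = builder.stream("words");
        // Branch order determines array index; records matching no
        // predicate are silently dropped.
        KStream<Object, String>[] branches = words.branch(
                (k, v) -> v.equals("english"),  // index 0 -> "counts"
                (k, v) -> v.equals("french"),   // index 1 -> "foo"
                (k, v) -> v.equals("spanish")); // index 2 -> "bar"
        branches[0].to("counts");
        branches[1].to("foo");
        branches[2].to("bar");
    }
}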

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -29,13 +29,13 @@ import org.springframework.messaging.support.MessageBuilder;
/**
* Custom avro serializer intended to be used for testing only.
*
* @author Soby Chacko
*
* @param <S> Target type to serialize
* @author Soby Chacko
*/
public class TestAvroSerializer<S> implements Serializer<S> {
public TestAvroSerializer() {}
public TestAvroSerializer() {
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
@@ -49,13 +49,14 @@ public class TestAvroSerializer<S> implements Serializer<S> {
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, "application/avro");
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = avroSchemaMessageConverter.toMessage(message.getPayload(),
messageHeaders).getPayload();
return (byte[])payload;
final Object payload = avroSchemaMessageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
}
}
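Because TestAvroSerializer implements the plain Kafka Serializer interface, it could also be registered directly on a producer, outside the binder. A hedged sketch using standard Kafka client configuration keys (the broker address is a placeholder; the tests use an embedded broker instead):
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
class ProducerConfigSketch {
    static Map<String, Object> producerProps() {
        Map<String, Object> props = new HashMap<>();
        // Placeholder broker address, not taken from the tests above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Standard hook for plugging in a custom Serializer implementation.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                TestAvroSerializer.class);
        return props;
    }
}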

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2018 the original author or authors.
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -42,7 +42,7 @@ public class CompositeNonNativeSerdeTest {
@Test
@SuppressWarnings("unchecked")
public void testCompositeNonNativeSerdeUsingAvroContentType(){
public void testCompositeNonNativeSerdeUsingAvroContentType() {
Random random = new Random();
Sensor sensor = new Sensor();
sensor.setId(UUID.randomUUID().toString() + "-v1");
@@ -52,18 +52,22 @@ public class CompositeNonNativeSerdeTest {
List<MessageConverter> messageConverters = new ArrayList<>();
messageConverters.add(new AvroSchemaMessageConverter());
CompositeMessageConverterFactory compositeMessageConverterFactory =
new CompositeMessageConverterFactory(messageConverters, new ObjectMapper());
CompositeNonNativeSerde compositeNonNativeSerde = new CompositeNonNativeSerde(compositeMessageConverterFactory);
CompositeMessageConverterFactory compositeMessageConverterFactory = new CompositeMessageConverterFactory(
messageConverters, new ObjectMapper());
CompositeNonNativeSerde compositeNonNativeSerde = new CompositeNonNativeSerde(
compositeMessageConverterFactory);
Map<String, Object> configs = new HashMap<>();
configs.put("valueClass", Sensor.class);
configs.put("contentType", "application/avro");
compositeNonNativeSerde.configure(configs, false);
final byte[] serialized = compositeNonNativeSerde.serializer().serialize(null, sensor);
final byte[] serialized = compositeNonNativeSerde.serializer().serialize(null,
sensor);
final Object deserialized = compositeNonNativeSerde.deserializer().deserialize(null, serialized);
final Object deserialized = compositeNonNativeSerde.deserializer()
.deserialize(null, serialized);
assertThat(deserialized).isEqualTo(sensor);
}
}
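The final assertion in this test is an instance of the general Serde round-trip contract: deserializing what was serialized, under the same configuration, must yield an equal object. A generic restatement as a hypothetical helper (not part of the test class):
import java.util.Map;
import org.apache.kafka.common.serialization.Serde;
class SerdeContractSketch {
    // Hypothetical helper expressing the round-trip property the test asserts.
    static <T> boolean roundTrips(Serde<T> serde, Map<String, ?> configs, T value) {
        serde.configure(configs, false); // false = configuring as a value serde
        byte[] bytes = serde.serializer().serialize(null, value);
        T back = serde.deserializer().deserialize(null, bytes);
        return value.equals(back);
    }
}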

View File

@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.RC1</version>
<version>3.0.0.M1</version>
</parent>
<dependencies>

Some files were not shown because too many files have changed in this diff.