Compare commits

..

174 Commits

Author SHA1 Message Date
buildmaster
e7487b9ada Update SNAPSHOT to 3.1.0.M1 2020-04-07 17:18:40 +00:00
Oleg Zhurakousky
064f5b6611 Updated spring-kafka version to 2.4.4 2020-04-07 19:05:46 +02:00
Oleg Zhurakousky
ff859c5859 update maven wrapper 2020-04-07 18:45:35 +02:00
Gary Russell
445eabc59a Transactional Producer Doc Polishing
- explain txId requirements for producer-initiated transactions
2020-04-07 11:16:14 -04:00
Soby Chacko
cc5d1b1aa6 GH:851 Kafka Streams topology visualization
Add Boot actuator endpoints for topology visualization.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/851
2020-03-25 13:02:57 +01:00
Soby Chacko
db896532e6 Offset commit when DLQ is enabled and manual ack (#871)
* Offset commit when DLQ is enabled and manual ack

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870

When an error occurs, if the application uses manual acknowledgment
(i.e. autoCommitOffset is false) and DLQ is enabled, then after
publishing to DLQ, the offset is currently not committed.
Addressing this issue by manually committing after publishing to DLQ.

* Address PR review comments

* Addressing PR review comments - #2
2020-03-24 16:28:39 -04:00
Gary Russell
07a740a5b5 GH-861: Add transaction manager bean override
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/861

Add docs
2020-03-19 17:03:36 -04:00
Serhii Siryi
65f8cc5660 InteractiveQueryService API changes
* Add InteractiveQueryService.getAllHostsInfo to fetch metadata
  of all instances that handle a specific store.
* Test to verify this new API (in the existing InteractiveQueryService tests).
2020-03-05 13:24:32 -05:00
Soby Chacko
90a47a675f Update Kafka Streams docs
Fix missing property prefix in StreamListener based applicationId settings.
2020-03-05 11:40:10 -05:00
Soby Chacko
9d212024f8 Updates for 3.1.0
spring-cloud-build to 3.0.0 snapshot
spring kafka to 2.4.x snapshot
SIK 3.2.1

Remove a test that has behavior inconsistent with new changes in Spring Kafka 2.4,
where all error handlers have isAckAfterHandle() true by default. The test for
auto commit offset on error without dlq was expecting this acknowledgement not
to occur. If applications need to have the ack turned off on error, they should provide a
container customizer that sets the ack to false. Since this is not a binder
concern, we are removing the test testDefaultAutoCommitOnErrorWithoutDlq.

Cleaning up in Kafka Streams binder tests.
2020-03-04 18:37:08 -05:00
Gary Russell
d594bab4cf GH-853: Don't propagate out "internal" headers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/853
2020-02-27 13:37:58 -05:00
Oleg Zhurakousky
e46bd1f844 Update POMs for Ivyland 2020-02-14 11:28:28 +01:00
buildmaster
5a8aad502f Bumping versions to 3.0.3.BUILD-SNAPSHOT after release 2020-02-13 08:31:42 +00:00
buildmaster
b84a0fdfc6 Going back to snapshots 2020-02-13 08:31:41 +00:00
buildmaster
e3fcddeab6 Update SNAPSHOT to 3.0.2.RELEASE 2020-02-13 08:30:55 +00:00
Soby Chacko
1b6f9f5806 KafkaStreamsBinderMetrics throws ClassCastException
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/814

Fixing an erroneous cast.
2020-02-12 11:53:19 -05:00
Oleg Zhurakousky
f397f7c313 Merge pull request #845 from sobychacko/gh-844
Kafka streams concurrency with multiple bindings
2020-02-12 16:38:53 +01:00
Oleg Zhurakousky
7e0b923c05 Merge pull request #843 from sobychacko/gh-842
Kafka Streams binder health indicator issues
2020-02-12 16:37:11 +01:00
Soby Chacko
0dc5196cb2 Fix typo in a class name 2020-02-11 17:55:50 -05:00
Soby Chacko
1cc50c1a40 Kafka streams concurrency with multiple bindings
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844

Fixing an issue where the default concurrency settings are overridden when
there are multiple bindings present with non-default concurrency settings.
2020-02-11 13:21:58 -05:00
Adriano Scheffer
0b3d508fe2 Always call value serde configure method
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836

Remove redundant call to serde configure

Closes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836
2020-02-10 19:57:37 -05:00
Soby Chacko
2acce00c74 Kafka Streams binder health indicator issues
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/842

When a health indicator is run against a topic with multiple partitions,
the Kafka Streams binder overwrites the information. Addressing this issue.
2020-02-10 19:42:27 -05:00
Łukasz Kamiński
ce0376ad86 GH-830: Enable usage of authorizationExceptionRetryInterval
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/830

Enable binder configuration of authorizationExceptionRetryInterval through properties
2020-01-17 11:18:02 -05:00
Soby Chacko
25ac3b75e3 Remove log4j from Kafka Streams binder 2020-01-07 17:05:00 -05:00
Gary Russell
6091a1de51 Provisioner: Fix compiler warnings
- logger is static
- missing javadocs
2020-01-06 13:52:15 -05:00
Soby Chacko
5cfce42d2e Kafka Streams multi binder configuration
In the multi binder scenario, make KafkaBinderConfigurationProperties conditional
so that such a bean is only created in the corresponding multi binder context.
For normal cases, the KafkaBinderConfigurationProperties bean is created by the main context.
2019-12-23 18:59:16 -05:00
buildmaster
349fc85b4b Bumping versions to 3.0.2.BUILD-SNAPSHOT after release 2019-12-18 18:12:34 +00:00
buildmaster
a4762ad28b Going back to snapshots 2019-12-18 18:12:34 +00:00
buildmaster
bdd4b3e28b Update SNAPSHOT to 3.0.1.RELEASE 2019-12-18 18:11:50 +00:00
Soby Chacko
3ff99da014 Update parent s-c-build to 2.2.1.RELEASE 2019-12-18 12:07:39 -05:00
Soby Chacko
c7c05fe3c2 Docs for topic patterns in Kafka Streams binder 2019-12-17 18:43:45 -05:00
문찬용
4b50f7c2f5 Fix typo 2019-12-17 18:41:13 -05:00
stoetti
88c5aa0969 Supporting topic pattern in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/820

Fix checkstyle issues
2019-12-17 18:38:36 -05:00
Oleg Zhurakousky
d5a72a29d0 GH-815 polishing
Resolves #816
2019-12-17 10:55:47 +01:00
Soby Chacko
8c3cb8230b Addressing multi binder issues with Kafka Streams
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/815

There was an issue with Kafka Streams multi binders in which the properties were not scanned
properly. The last configuration always won, wiping out any previous environment properties.
Addressing this issue by properly keeping KafkaBinderConfigurationProperties per multi binder
environment and explicitly invoking Boot properties binding on them.

Adding test to verify.
2019-12-12 18:08:53 -05:00
Soby Chacko
70c1ce8970 Update image for spring.io kafka streams blog 2019-12-02 11:22:39 -05:00
Soby Chacko
1d8a4e67d2 Update image for spring.io kafka streams blog 2019-12-02 11:16:29 -05:00
buildmaster
bbfc776395 Bumping versions to 3.0.1.BUILD-SNAPSHOT after release 2019-11-22 14:51:01 +00:00
buildmaster
4e9ed30948 Going back to snapshots 2019-11-22 14:51:01 +00:00
buildmaster
34fd7a6a7a Update SNAPSHOT to 3.0.0.RELEASE 2019-11-22 14:49:08 +00:00
Oleg Zhurakousky
b23b42d874 Ignoring intermittently failing test 2019-11-22 15:36:23 +01:00
Soby Chacko
cf59cfcf12 Kafka Streams docs cleanup 2019-11-20 18:25:12 -05:00
Soby Chacko
02a4fcb144 AdminClient caching in KafkaStreams health check 2019-11-20 17:36:14 -05:00
Oleksii Mukas
6ac9c0ed23 Closing adminClient prematurely
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/803
2019-11-20 17:34:09 -05:00
Soby Chacko
78ff4f1a70 KafkaBindingProperties has no documentation (#802)
* KafkaBindingProperties has no documentation

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/739

No popup help in IDE due to missing javadocs in KafkaBindingProperties,
KafkaConsumerProperties and KafkaProducerProperties.

Some IDEs currently only show javadocs on getter methods from POJOs
used inside a map. Therefore, some javadocs are duplicated between
fields and getter methods.

* Addressing PR review comments

* Addressing PR review comments
2019-11-14 12:50:15 -05:00
Soby Chacko
637ec2e55d Kafka Streams binder docs improvements
Adding docs for production exception handlers.
Updating configuration options section.
2019-11-13 15:19:41 -05:00
Soby Chacko
6effd58406 Adding docs for global state stores 2019-11-13 11:19:36 -05:00
Soby Chacko
7b8f0dcab7 Kafka Streams - DLQ control per consumer binding (#801)
* Kafka Streams - DLQ control per consumer binding

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/800

* Fine-grained DLQ control and deserialization exception handlers per input binding

* Deprecate KafkaStreamsBinderConfigurationProperties.SerdeError in preference to the
  new enum `KafkaStreamsBinderConfigurationProperties.DeserializationExceptionHandler`
  based properties

* Add tests, modifying docs

* Addressing PR review comments
2019-11-13 09:41:02 -05:00
Gary Russell
88912b8d6b GH-715: Add retry, dlq for transactional binders
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/715

When using transactions, binder retry, dlq cannot be used because the retry
runs within the transaction, which is undesirable if there is another resource
involved; also publishing to the DLQ could be rolled back.

Use the retry properties to configure an `AfterRollbackProcessor` to perform the
retry and DLQ publishing after the transaction has rolled back.

Add docs; move the container customizer invocation to the end.
2019-11-12 14:53:12 -05:00
Soby Chacko
34b0945d43 Changing the order of calling customizer
In the Kafka Streams binder, the StreamsBuilderFactoryBean customizer was being called
prematurely, before the object was created. Fixing this issue.

Add a test to verify
2019-11-11 17:21:46 -05:00
Gary Russell
278ba795d0 Configure consumer customizer
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1841

* Add test
2019-11-11 11:34:41 -05:00
buildmaster
bf30ecdea1 Going back to snapshots 2019-11-08 16:29:53 +00:00
buildmaster
464ce685bb Update SNAPSHOT to 3.0.0.RC2 2019-11-08 16:29:19 +00:00
Oleg Zhurakousky
54fa9a638d Added flatten plug-in 2019-11-08 16:50:10 +01:00
Soby Chacko
fefd9a3bd6 Cleanup in Kafka Streams docs 2019-11-07 18:07:27 -05:00
Soby Chacko
8e26d5e170 Fix typos in the previous PR commit 2019-11-06 21:23:19 -05:00
Soby Chacko
ac6bdc976e Reintroduce BinderHeaderMapper (#797)
* Reintroduce BinderHeaderMapper

Provide a custom header mapper that is identical to the DefaultKafkaHeaderMapper in Spring Kafka.
This is to address some interoperability issues between Spring Cloud Stream 3.0.x and 2.x apps,
where mime types in the header are not deserialized properly. This custom BinderHeaderMapper
will eventually be deprecated and removed once the fixes are in Spring Kafka.

Resolves #796

* Addressing review

* polishing
2019-11-06 21:14:20 -05:00
Ramesh Sharma
65386f6967 Fixed log message to print as value vs key serde 2019-11-06 09:43:08 -05:00
Oleg Zhurakousky
81c453086a Merge pull request #794 from sobychacko/gh-752
Revise docs
2019-11-06 09:11:16 +01:00
Soby Chacko
0ddd9f8f64 Revise docs
Update kafka-clients version
Revise Kafka Streams docs

Resolves #752
2019-11-05 19:49:07 -05:00
Gary Russell
d0fe596a9e GH-790: Upgrade SIK to 3.2.1
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/790
2019-11-05 11:41:40 -05:00
Soby Chacko
062bbc1cc3 Add null check for KafkaStreamsBinderMetrics 2019-11-01 18:42:30 -04:00
Soby Chacko
bc1936eb28 KafkaStreams binder metrics - duplicate entries
When the same metric name is repeated, some registry implementations, such as the
Micrometer Prometheus registry, fail to register the duplicate entry. Fixing this issue by
preventing duplicate metric names from being registered.

Also, address an issue with multiple processors and metrics in the same application
by prepending the application ID of the Kafka Streams processor in the metric name itself.

Resolves #788
2019-11-01 15:55:30 -04:00
Soby Chacko
e2f1092173 Kafka Streams binder health indicator improvements
Caching AdminClient in Kafka Streams binder health indicator.

Resolves #791
2019-10-31 16:15:56 -04:00
Soby Chacko
06e5739fbd Addressing bugs reported by Sonarqube
Resolves #747
2019-10-25 16:42:02 -04:00
Soby Chacko
ad8e67fdc5 Ignore a test where there is a race condition 2019-10-24 12:00:22 -04:00
Soby Chacko
a3fb4cc3b3 Fix state store registration issue
Fixing the issue where the state store is registered for each input binding
when multiple input bindings are present.

Resolves #785
2019-10-24 11:33:31 -04:00
Soby Chacko
7f09baf72d Enable customization on StreamsBuilderFactoryBean
Spring Kafka provides a StreamsBuilderFactoryBeanCustomizer. Use this in the binder so that
applications can plug in such a bean to further customize the StreamsBuilderFactoryBean and KafkaStreams.

Resolves #784
2019-10-24 09:35:37 -04:00
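For illustration, a minimal sketch of such a customizer bean (the state listener shown is just an example of what an application might plug in; it assumes Spring Kafka's single-method `StreamsBuilderFactoryBeanCustomizer` contract):

[source, java]
----
import org.apache.kafka.streams.KafkaStreams;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;

@Configuration
public class StreamsCustomizerConfig {

    // Log KafkaStreams state transitions; the binder applies this customizer
    // to the StreamsBuilderFactoryBean it creates for the processor.
    @Bean
    public StreamsBuilderFactoryBeanCustomizer streamsCustomizer() {
        return factoryBean -> factoryBean.setStateListener(
                (KafkaStreams.State newState, KafkaStreams.State oldState) ->
                        System.out.println("State change: " + oldState + " -> " + newState));
    }
}
----
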
Soby Chacko
28a02cda4f Multiple functions and definition property
In order to make Kafka Streams binder based function apps more consistent
with the wider functional support in ScSt, it should require the property
spring.cloud.stream.function.definition to signal which functions to activate.

Resolves #783
2019-10-24 01:01:34 -04:00
Soby Chacko
f96a9f884c Custom partitioner for Kafka Streams producer
* Allow the ability to plug in custom StreamPartitioner on the Kafka Streams producer.
* Fix a bug where the overriding of native encoding/decoding settings by the binder was not
  working properly. This fix is done by providing a custom ConfigurationPropertiesBindHandlerAdvisor.
* Add test to verify

Resolves #782
2019-10-23 20:12:08 -04:00
Soby Chacko
a9020368e5 Remove the usage of BindingProvider 2019-10-23 16:32:07 -04:00
Gary Russell
ca9296dbd2 GH-628: Add dlqPartitions property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/628

Allow users to specify the number of partitions in the dead letter topic.

If the property is set to 1 and the `minPartitionCount` binder property is 1,
override the default behavior and always publish to partition 0.
2019-10-23 15:33:20 -04:00
Soby Chacko
e8d202404b Custom timestamp extractor per binding
Currently there is no way to pass a custom timestamp extractor
per consumer binding in the Kafka Streams binder. Adding this ability.

Resolves #640
2019-10-23 15:31:17 -04:00
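For illustration, a hypothetical extractor bean of the kind that can now be attached to a consumer binding (the binder property that references the bean is not shown here):

[source, java]
----
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TimestampExtractorConfig {

    // Use the record's own timestamp when present, otherwise fall back to the
    // previously observed timestamp passed in by Kafka Streams.
    @Bean
    public TimestampExtractor recordTimestampExtractor() {
        return (record, previousTimestamp) ->
                record.timestamp() >= 0 ? record.timestamp() : previousTimestamp;
    }
}
----
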
Artem Bilan
5794fb983c Add ProducerMessageHandlerCustomizer support
Related to https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/265
and https://github.com/spring-cloud/spring-cloud-stream/pull/1828
2019-10-23 14:30:20 -04:00
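A minimal sketch of such a customizer; the package locations and the `setSync` tweak are assumptions for illustration only:

[source, java]
----
import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;

@Configuration
public class HandlerCustomizerConfig {

    // Make sends synchronous for every Kafka producer binding; destinationName
    // could be inspected to customize only selected bindings.
    @Bean
    public ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> handlerCustomizer() {
        return (handler, destinationName) -> handler.setSync(true);
    }
}
----
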
Oleg Zhurakousky
1ce1d7918f Merge pull request #777 from sobychacko/gh-774
Null value in outbound KStream
2019-10-23 16:54:21 +02:00
Oleg Zhurakousky
1264434871 Merge pull request #773 from sobychacko/gh-552
Kafka/Streams binder health indicator improvements
2019-10-23 16:53:37 +02:00
Massimiliano Poggi
e4fed0a52d Added qualifiers to CompositeMessageConverterBean injections
When more than one CompositeMessageConverter bean was defined in the same ApplicationContext,
not having the qualifiers on the injection points was causing the application to fail during
instantiation due to bean conflicts being raised. The injection points for CompositeMessageConverter
have been marked with the appropriate qualifier to inject the Spring Cloud CompositeMessageConverter.

Resolves #775
2019-10-22 17:17:03 -04:00
Soby Chacko
05e2918bc0 Addressing PR review comments 2019-10-22 17:07:51 -04:00
Gary Russell
6dadf0c104 GH-763: Add DlqPartitionFunction
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/763

Allow users to select the DLQ partition based on

* consumer group
* consumer record (which includes the original topic/partition)
* exception

Kafka Streams DLQ docs changes
2019-10-22 16:10:29 -04:00
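For illustration, a hypothetical `DlqPartitionFunction` bean along the lines described above (the exact package and functional signature are assumed here):

[source, java]
----
import org.apache.kafka.common.errors.SerializationException;
import org.springframework.cloud.stream.binder.kafka.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqPartitionConfig {

    // Keep failed records on the same partition number as the original record,
    // except serialization failures, which all go to partition 0.
    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        return (group, record, exception) ->
                exception instanceof SerializationException ? 0 : record.partition();
    }
}
----
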
Soby Chacko
f4dcf5100c Null value in outbound KStream
When native encoding is disabled, the outbound conversion fails if the record
value is null. Handle this scenario more gracefully by allowing the record
to be sent downstream, skipping the conversion.

Resolves #774
2019-10-22 12:45:14 -04:00
Soby Chacko
b8eb41cb87 Kafka/Streams binder health indicator improvements
When both binders are present, there were ambiguities in the way the binders
were reporting health status. If one binder does not have any bindings, the
total health status was reported as down. Fixing these ambiguities as follows.

If both binders have bindings present and the Kafka broker is reachable, report
the status as UP and the associated details. If one of the binders does not
have bindings, but the Kafka broker can be reached, then that particular binder's
status will be marked as UNKNOWN and the overall status is reported as UP.
If the Kafka broker is down, then both binders are reported as DOWN and
the overall status is marked as DOWN.

Resolves #552
2019-10-21 21:59:11 -04:00
Soby Chacko
82cfd6d176 Making KafkaStreamsMetrics object Nullable 2019-10-18 21:41:16 -04:00
Tenzin Chemi
6866eef8b0 Update overview.adoc 2019-10-18 15:25:32 -04:00
Soby Chacko
b833a9f371 Fix Kafka Streams binder health indicator issues
When there are multiple Kafka Streams processors present, the health indicator
overwrites the previous processor's health info. Addressing this issue.

Resolves #771
2019-10-17 19:26:41 -04:00
Soby Chacko
0283359d4a Add a delay before the metrics assertion 2019-10-15 15:11:05 -04:00
Soby Chacko
cc2f8f6137 Assertion was not commented out in the previous commit 2019-10-10 10:38:46 -04:00
Soby Chacko
855334aaa3 Disable kafka streams metrics test temporarily 2019-10-10 01:32:51 -04:00
Soby Chacko
9d708f836a Kafka streams default single input/output binding
Address an issue in which the binder still defaults to the binding names "input" and "output"
in the case of a single function.
2019-10-09 23:58:14 -04:00
Soby Chacko
ecc8715b0c Kafka Streams binder metrics
Export Kafka Streams metrics available through KafkaStreams#metrics into a Micrometer MeterRegistry.
Add documentation for how to access metrics.
Modify test to verify metrics.

Resolves #543
2019-10-09 00:32:16 -04:00
Soby Chacko
98431ed8a0 Fix spurious warnings from InteractiveQueryService
Set applicationId properly in functions with multiple inputs
2019-10-09 00:29:03 -04:00
Oleg Zhurakousky
c7fa1ce275 Fix how stream function properties for bindings are used 2019-10-08 06:05:50 -05:00
Soby Chacko
cc8c645c5a Address checkstyle issues 2019-10-04 09:00:15 -04:00
Gary Russell
21fe9c75c5 Fix race in KafkaBinderTests
`testDlqWithNativeSerializationEnabledOnDlqProducer`
2019-10-03 10:35:05 -04:00
Soby Chacko
65dd706a6a Kafka Streams DLQ enhancements
Use DeadLetterPublishingRecoverer from Spring Kafka instead of custom DLQ components in the binder.
Remove code that is no longer needed for DLQ purposes.
In Kafka Streams, always set group to application id if the user doesn't set an explicit group.

Upgrade Spring Kafka to 2.3.0 and SIK to 3.2.0

Resolves #761
2019-10-02 22:46:51 -04:00
Soby Chacko
a02308a5a3 Allow binding names to be reused in Kafka Streams.
Allow the same binding names to be reused from multiple StreamListener methods in the Kafka Streams binder.

Resolves #760
2019-10-01 13:58:46 -04:00
Oleg Zhurakousky
ac75f8fecf Merge pull request #758 from sobychacko/gh-757
Function level binding properties
2019-09-30 12:42:45 -04:00
Soby Chacko
bf6a227f32 Function level binding properties
If there are multiple functions in a Kafka Streams application, and each needs
a separate set of configuration, then it should be possible to set that at the function
level, e.g. spring.cloud.stream.kafka.streams.binder.functions....

Resolves #757
2019-09-30 11:47:21 -04:00
Oleg Zhurakousky
01daa4c0dd Move function constants to core
Resolves #756
2019-09-30 11:33:18 -04:00
Soby Chacko
021943ec41 Using dot(.) character in function bindings.
In Kafka Streams functions, binding names need to use the dot character instead of
underscores as the delimiter.

Resolves #755
2019-09-30 11:02:50 -04:00
Phillip Verheyden
daf4b47d1c Update the docs with the correct default channel properties locations
Spring Cloud Stream Kafka uses `spring.cloud.stream.kafka.default` for setting default properties across all channels.

Resolves #748
2019-09-30 11:01:28 -04:00
Soby Chacko
2b1be3754d Remove deprecations
Remove deprecated fields, methods and classes in preparation for the 3.0 GA Release,
both in Kafka and Kafka Streams binders.

Resolves #746
2019-09-27 15:46:00 -04:00
Soby Chacko
db5a303431 Check HeaderMapper bean with a well known name (#753)
* Check HeaderMapper bean with a well known name

* If a custom bean name for header mapper is not provided through the binder property headerMapperBeanName,
  then look for a header mapper bean with the name kafkaHeaderMapper.
* Add tests to verify

Resolves #749

* Addressing PR review comments
2019-09-27 11:08:15 -04:00
Soby Chacko
e549090787 Restoring the avro tests for Kafka Streams 2019-09-23 17:12:31 -04:00
buildmaster
7ff64098a3 Going back to snapshots 2019-09-23 18:12:19 +00:00
buildmaster
cea40bc969 Update SNAPSHOT to 3.0.0.M4 2019-09-23 18:11:47 +00:00
buildmaster
717022e274 Going back to snapshots 2019-09-23 17:49:28 +00:00
buildmaster
d0f1c8d703 Update SNAPSHOT to 3.0.0.M4 2019-09-23 17:48:54 +00:00
Soby Chacko
7a532b2bbd Temporarily disable avro tests due to Schema Registry dependency issues.
Will address these post M4 release
2019-09-23 13:37:57 -04:00
Soby Chacko
3773fa2c05 Update SIK and SK
SIK -> 3.2.0.RC1
SK -> 2.3.0.RC1
2019-09-23 13:03:12 -04:00
buildmaster
5734ce689b Bumping versions 2019-09-18 15:35:31 +00:00
Olga Maciaszek-Sharma
608086ff4d Remove spring.provides. 2019-09-18 13:34:34 +02:00
Soby Chacko
16713b3b4f Rename Serde for message converters
Rename CompositeNonNativeSerde to MessageConverterDelegateSerde
Deprecate CompositeNonNativeSerde
2019-09-16 21:27:04 -04:00
Soby Chacko
2432a7309b Remove an unnecessary mapValue call
In the Kafka Streams binder, with native decoding being the default in 3.0 and going forward,
the topology depth of the Kafka Streams processors is much reduced compared to the topology
generated when using non-native decoding. There was still an extra unnecessary processor in the
topology even when the deserialization is done natively. Addressing that issue.

With this change, the topology generated is equivalent to the native Kafka Streams applications.

Resolves #682
2019-09-13 15:14:59 -04:00
Soby Chacko
6fdb0001d6 Update SIK to 3.2.0 snapshot
Resolves #744

* Address deprecations in the pollable consumer
* Introduce a property for the pollable timeout in KafkaConsumerProperties
* Fix tests
* Add docs
* Addressing PR review comments
2019-09-12 15:31:13 -04:00
Gary Russell
f2ab4a07c6 GH-70: Support Batch Listeners
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/70
2019-09-11 15:31:18 -04:00
Soby Chacko
584115580b Adding isPresent around optionals 2019-09-11 14:54:11 -04:00
Oleg Zhurakousky
752b509374 Minor cleanup and polishing
Resolves #740
2019-09-10 13:34:30 +02:00
Soby Chacko
96062b23f2 State store beans registered with StreamListener
When StreamListener based processors are used in Kafka Streams
applications, it is not possible to register state store beans
using StateStoreBuilder. This is allowed in the functional model.
This method of providing custom state stores is desirable as it
gives the user more flexibility in configuring Serdes and other
properties on the state store. Adding this feature to StreamListener
based Kafka Streams processors.

Adding test to verify.

Adding docs.

Resolves #676
2019-09-10 13:34:30 +02:00
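For illustration, a state store bean of the kind described above, expressed with the plain Kafka Streams store-builder API (store name and types are hypothetical):

[source, java]
----
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StateStoreConfig {

    // A persistent key/value store that the processor can later look up
    // by its name ("my-counts") once it has been registered with the topology.
    @Bean
    public StoreBuilder<KeyValueStore<String, Long>> myCountsStore() {
        return Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("my-counts"),
                Serdes.String(),
                Serdes.Long());
    }
}
----
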
Oleg Zhurakousky
da6145f382 Merge pull request #738 from sobychacko/gh-681
Multi binder issues for Kafka Streams table types
2019-09-10 12:45:04 +02:00
Soby Chacko
39a5b25b9c Topic properties not applied on kstream binding
On the KStream producer binding, topic properties are not applied
by the provisioner. This is because the extended properties object
that is passed to the producer binding is erroneously overwritten by a
default instance. Addressing this issue.

Resolves #684
2019-09-06 16:28:15 -04:00
Soby Chacko
4e250b34cf Multi binder issues for Kafka Streams table types
When the KTable or GlobalKTable binders are used in a multi binder
environment, they have difficulty finding certain beans/properties.
Addressing this issue.

Adding tests.

Resolves #681
2019-09-04 18:10:04 -04:00
Oleg Zhurakousky
10de84f4fe Simplify isSerdeFromStandardDefaults in KeyValueSerdeResolver
Resolves #737
2019-09-04 16:33:54 +02:00
Soby Chacko
309f588325 Addressing Kafka Streams multiple functions issues
Fixing an issue that causes a race condition when multiple functions
are present in a Kafka Streams application by isolating the responsible
proxy factory per function instead of sharing it.

When multiple Kafka Streams functions are present in an application,
it should be possible to set the application id per function.

When an application provides a bean of type Serde, then the binder
should try to introspect that bean to see if it can be matched for
any inbound or outbound serialization.

Adding tests to verify the changes.

Adding docs.

Resolves #734, #735, #736
2019-09-03 19:48:21 -04:00
Oleg Zhurakousky
ca8e086d4c Merge pull request #728 from sobychacko/gh-706
Introducing retry for finding state stores.
2019-08-28 13:37:06 +02:00
Oleg Zhurakousky
cce392aa94 Merge pull request #730 from sobychacko/gh-657
Additional docs for dlq producer properties
2019-08-28 13:36:30 +02:00
Soby Chacko
e53b0f0de9 Fixing Kafka streams binder health indicator tests
Resolves #731
2019-08-28 13:34:34 +02:00
Soby Chacko
413cc4d2b5 Addressing PR review 2019-08-27 17:37:33 -04:00
Soby Chacko
94f76b903f Additional docs for dlq producer properties
Resolves #657
2019-08-26 17:08:01 -04:00
Soby Chacko
8d5f794461 Remove the sleep added in the previous commit 2019-08-26 13:55:52 -04:00
Soby Chacko
00c13c0035 Fixing checkstyle issues 2019-08-26 12:20:39 -04:00
Soby Chacko
e1e46226d5 Fixing a test race condition
The test testDlqWithNativeSerializationEnabledOnDlqProducer fails on CI occasionally.
Adding a sleep of 1 second to give the retrying mechanism enough time to complete.
2019-08-26 12:17:11 -04:00
Walliee
46cbeb1804 Add hook to specify sendTimeoutExpression
resolves #724
2019-08-26 10:47:27 -04:00
Soby Chacko
3e4af104b8 Introducing retry for finding state stores.
During application startup, there are cases when state stores
might take a longer time to initialize. The call to find the
state store in the binder (InteractiveQueryService) will fail in
those situations. Providing a basic retry mechanism that
applications can opt in to.

Adding properties for retrying state store at the binder level.
Adding docs.
Polishing.

Resolves #706
2019-08-23 18:57:36 -04:00
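For context, a usage sketch of the lookup that this retry now guards; the store name is hypothetical and the `getQueryableStore` call is shown as commonly used with this binder:

[source, java]
----
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.stereotype.Component;

@Component
public class CountsQuery {

    private final InteractiveQueryService interactiveQueryService;

    public CountsQuery(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    // With the retry properties enabled, this lookup is retried while the
    // store is still initializing instead of failing immediately at startup.
    public Long countFor(String key) {
        ReadOnlyKeyValueStore<String, Long> store =
                interactiveQueryService.getQueryableStore("my-counts",
                        QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(key);
    }
}
----
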
Soby Chacko
16bb3e2f62 Testing topic provisioning properties for KStream
Resolves #684
2019-08-23 18:53:35 -04:00
Soby Chacko
8145ab19fb Fix checkstyle issues 2019-08-21 17:26:31 -04:00
Gary Russell
930d33aeba Fix ConsumerProducerTransactionTests with SK snap 2019-08-21 09:46:17 -04:00
Gary Russell
fe2a398b8b Fix mock producer.close() for latest SK snapshot 2019-08-21 09:38:25 -04:00
Soby Chacko
24e1fc9722 Kafka Streams functional support and autowiring
There are issues when a bean declared as a function in the Kafka Streams
application tries to autowire a bean through method parameter injection.
Addressing these concerns.

Resolves #726
2019-08-20 18:40:04 -04:00
Soby Chacko
245a43c1d8 Update Spring Kafka to 2.3.0 snapshot
Ignore two tests temporarily
Fixing Kafka Streams tests with co-partitioning issues
2019-08-20 17:40:07 -04:00
Soby Chacko
4d1fed63ee Fix checkstyle issues 2019-08-20 15:34:05 -04:00
Oleg Zhurakousky
786bd3ec62 Merge pull request #723 from sobychacko/kstream-fn-detector-changes
Function detector condition Kafka Streams
2019-08-16 18:44:39 +02:00
Soby Chacko
183f21c880 Function detector condition Kafka Streams
Use BeanFactoryUtils.beanNamesForTypeIncludingAncestors instead of
getBean from BeanFactory which forces the bean creation inside the
function detector condition. There was a race condition in which
applications were unable to autowire beans and use them in functions
while the detector condition was creating the beans. This change will
delay the creation of the function bean until it is needed.
2019-08-16 11:54:42 -04:00
Soby Chacko
18737b8fea Unignoring tests in Kafka Streams binder
Polishing tests
2019-08-14 12:57:18 -04:00
buildmaster
840745e593 Going back to snapshots 2019-08-13 07:58:51 +00:00
buildmaster
2df0377acb Update SNAPSHOT to 3.0.0.M3 2019-08-13 07:57:57 +00:00
Oleg Zhurakousky
164948ad33 Bumped spring-kafka and si-kafka snapshots to milestones 2019-08-13 09:43:09 +02:00
Oleg Zhurakousky
a472845cb4 Adjusted docs with recent s-c-build updates 2019-08-13 09:29:03 +02:00
Oleg Zhurakousky
12db5fc20e Ignored failing tests 2019-08-13 09:24:30 +02:00
Soby Chacko
46f1b41832 Interop between bootstrap server configuration
When using the Kafka Streams binder, if the application chooses to
provide bootstrap server configuration through the Kafka binder broker
property, then allow that. This way, either type of broker configuration
works in the Kafka Streams binder.

Resolves #401
Resolves #720
2019-08-13 08:45:05 +02:00
Soby Chacko
64ca773a4f Kafka Streams application id changes
Generate a random application id for the Kafka Streams binder if the user
doesn't set one for the application. This is useful for development
purposes, as it avoids the creation of an explicit application id.
For production workloads, it is highly recommended to explicitly provide
an application id.

The generated application id follows a pattern: the function bean
name, followed by a random UUID string, followed by the literal applicationId.
In the case of StreamListener, instead of the function bean name, it uses the containing class +
StreamListener method name.

If the binder generates the application id, that information will be logged on the console
at startup.

Resolves #718
Resolves #719
2019-08-13 08:43:07 +02:00
Soby Chacko
a92149f121 Adding BiConsumer support for Kafka Streams binder
Resolves #716
Resolves #717
2019-08-13 08:31:50 +02:00
Gary Russell
2721fe4d05 GH-713: Add test for e2e transactional binder
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/713

The problem was already fixed in SIK.

Resolves #714
2019-08-13 08:29:09 +02:00
Soby Chacko
8907693ffb Revamping Kafka Streams docs for functional style
Resolves #703
Resolves #721
Addressing PR review comments

Addressing PR review comments
2019-08-13 08:17:48 +02:00
Oleg Zhurakousky
ca0bfc028f minor cleanup 2019-08-08 20:59:05 +02:00
Soby Chacko
688f05cbc9 Joining two input KStreams
When joining two input KStreams, the binder throws an exception.
Fixing this issue by passing the wrapped target KStream from the
proxy object to the adapted StreamListener method.

Resolves #701
2019-08-08 14:19:21 -04:00
Oleg Zhurakousky
acb4180ea3 GH-1767-core changes related to the disconnection of @StreamMessageConverter 2019-08-06 19:14:49 +02:00
Soby Chacko
77e4087871 Handle Deserialization errors when there is a key (#711)
* Handle Deserialization errors when there is a key

Handle DLQ sending on deserialization errors when there is a key in the record.

Resolves #635

* Adding keySerde information in the functional bindings
2019-08-05 13:58:48 -04:00
Gary Russell
0d339e77b8 GH-683: Update docs 2019-08-02 09:30:36 -04:00
Gary Russell
799eb5be28 GH-683: MessageKey header from payload property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/683

If the `messageKeyExpression` references the payload or entire message, add an
interceptor to evaluate the expression before the payload is converted.

The interceptor is not needed when native encoding is in use because the payload
will be unchanged when it reaches the adapter.
2019-08-01 17:00:05 -04:00
Soby Chacko
2c7615db1f Temporarily ignore a test 2019-08-01 14:32:39 -04:00
Gary Russell
b2b1438ad7 Fix resetOffsets for manual partition assignment 2019-07-19 18:47:41 -04:00
Gary Russell
87399fe904 Updates for latest S-K snapshot
- change `TopicPartitionInitialOffset` to `TopicPartitionOffset`
- when resetting with manual assignment, set the SeekPosition earlier rather than
  relying on direct update of the `ContainerProperties` field
- set `missingTopicsFatal` to `false` in mock test to avoid log messages about missing bootstrap servers
2019-07-17 18:39:53 -04:00
Soby Chacko
f03e3e17c6 Provide a CollectionSerde implementation
Resolves #702
2019-07-11 17:08:25 -04:00
Gary Russell
48aae76ec9 GH-694: Add recordMetadataChannel (send confirm.)
Resolves #694

Also fix deprecation of `ChannelInterceptorAware`.
2019-07-10 10:48:08 -04:00
Soby Chacko
6976bcd3a8 Docs polishing 2019-07-09 15:59:23 -04:00
Gary Russell
912ab14309 GH-689: Support producer-only transactions
Resolves #689
2019-07-09 15:59:23 -04:00
Oleg Zhurakousky
b4fcb791c3 General polishing and clean up after review
Resolves #692
2019-07-09 20:46:01 +02:00
Soby Chacko
ca3f20ff26 Default multiple input/output binding names in the functional support will be separated using underscores (_),
e.g. process_in, process_in_0, process_out_0, etc.

Adding cleanup config to ktable integration tests.
2019-07-08 19:15:02 -04:00
Soby Chacko
97ecf48867 BiFunction support in Kafka Streams binder
* Introduce the ability to provide BiFunction beans as Kafka Streams processors.
* Bypass Spring Cloud Function catalogue for bean lookup by directly querying the bean factory.
* Add test to verify BiFunction behavior.

Resolves #690
2019-07-08 15:01:07 -04:00
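For illustration, a sketch of the kind of two-input processor this enables (topic contents and value types are hypothetical):

[source, java]
----
import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JoinProcessor {

    // Join a stream of order amounts against a table of customer names and
    // emit an enriched value; the binder binds the two inputs and one output.
    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, String>> enrich() {
        return (orders, customers) ->
                orders.join(customers, (amount, name) -> name + " spent " + amount);
    }
}
----
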
Soby Chacko
4d0b62f839 Functional enhancements in Kafka Streams binder
This PR introduces some fundamental changes in the way the functional model of Kafka Streams applications
is supported in the binder. For the most part, this PR is comprised of the following changes.

* Remove the usage of EnableBinding in Kafka Streams based Spring Cloud Stream applications where
  a bean of type java.util.function.Function or java.util.function.Consumer is provided (instead of
  a StreamListener). For StreamListener based Kafka Streams applications, EnableBinding is still
  necessary and required.

* Target types (KStream, KTable, GlobalKTable) will be inferred and bound by the binder through
  a new binder specific bindable proxy factory.

* By default, input bindings are named <functionName>-input for functions with a single input
  and <functionName>-input-0...<functionName>-input-(n-1) for functions with n inputs.

* By default, output bindings are named <functionName>-output for functions with a single output
  and <functionName>-output-0...<functionName>-output-(n-1) for functions with n outputs.

* If applications prefer custom input binding names, the defaults can be overridden through
  spring.cloud.stream.function.inputBindings.<functionName>. Similarly, if custom output binding names
  are needed, that can be done through spring.cloud.stream.function.outputBindings.<functionName>.

* Test changes

* Refactoring and polishing

Resolves #688
2019-07-08 15:01:07 -04:00
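For illustration, a minimal functional-style processor of the sort this change supports, with no EnableBinding; per the convention above, its bindings would default to process-input and process-output, overridable through the spring.cloud.stream.function.* properties:

[source, java]
----
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WordFilterProcessor {

    // The binder infers the KStream input/output types from the function
    // signature and creates the bindings; no @EnableBinding is required.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input.filter((key, value) -> value != null && !value.isEmpty());
    }
}
----
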
Oleg Zhurakousky
592b9a02d8 Binder changes related to GH-1729
addressed PR comment

Resolves #670
2019-07-08 17:32:52 +02:00
Soby Chacko
4ca58bea06 Polishing the previous upgrade commit
* Upgrade Kafka versions used in tests.
* EmbeddedKafkaRule changes in tests.
* KafkaTransactionTests changes (createProducer expectations in Mockito).
* Kafka Streams bootstrap servers now return an ArrayList instead of a String.
  Making the necessary changes in the binder to accommodate this.
* Kafka Streams state store tests - retention window changes.
2019-07-06 14:24:09 -04:00
Jeff Maxwell
108ccbc572 Update kafka.version, spring-kafka.version
Update kafka.version to 2.3.0 and spring-kafka.version to 2.3.0.BUILD-SNAPSHOT

Resolves #696
2019-07-06 14:24:09 -04:00
buildmaster
303679b9f9 Going back to snapshots 2019-07-03 14:56:50 +00:00
119 changed files with 8724 additions and 3473 deletions

1
.gitignore vendored

@@ -23,3 +23,4 @@ _site/
dump.rdb
.apt_generated
artifacts
.sts4-cache

51
.mvn/wrapper/MavenWrapperDownloader.java vendored Executable file → Normal file

@@ -1,22 +1,18 @@
/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/
* Copyright 2007-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.net.*;
import java.io.*;
import java.nio.channels.*;
@@ -24,11 +20,12 @@ import java.util.Properties;
public class MavenWrapperDownloader {
private static final String WRAPPER_VERSION = "0.5.6";
/**
* Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided.
*/
private static final String DEFAULT_DOWNLOAD_URL =
"https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar";
private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/"
+ WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar";
/**
* Path to the maven-wrapper.properties file, which might contain a downloadUrl property to
@@ -76,13 +73,13 @@ public class MavenWrapperDownloader {
}
}
}
System.out.println("- Downloading from: : " + url);
System.out.println("- Downloading from: " + url);
File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH);
if(!outputFile.getParentFile().exists()) {
if(!outputFile.getParentFile().mkdirs()) {
System.out.println(
"- ERROR creating output direcrory '" + outputFile.getParentFile().getAbsolutePath() + "'");
"- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'");
}
}
System.out.println("- Downloading to: " + outputFile.getAbsolutePath());
@@ -98,6 +95,16 @@ public class MavenWrapperDownloader {
}
private static void downloadFileFromURL(String urlString, File destination) throws Exception {
if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) {
String username = System.getenv("MVNW_USERNAME");
char[] password = System.getenv("MVNW_PASSWORD").toCharArray();
Authenticator.setDefault(new Authenticator() {
@Override
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication(username, password);
}
});
}
URL website = new URL(urlString);
ReadableByteChannel rbc;
rbc = Channels.newChannel(website.openStream());

BIN
.mvn/wrapper/maven-wrapper.jar vendored Executable file → Normal file

Binary file not shown.

3
.mvn/wrapper/maven-wrapper.properties vendored Executable file → Normal file

@@ -1 +1,2 @@
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.4/apache-maven-3.5.4-bin.zip
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip
wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar


@@ -39,7 +39,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
@@ -60,7 +60,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -155,14 +155,15 @@ Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
@@ -227,9 +228,20 @@ The DLQ topic name can be configurable by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to having Kafka consumer properties, other configuration properties can be passed here.
@@ -243,6 +255,8 @@ Default: null (If not specified, messages that result in errors are forwarded to
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (i.e., useNativeDecoding: true), the application must provide corresponding key/value serializers for the DLQ.
This must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
@@ -283,11 +297,32 @@ The replication factor to use when provisioning topics. Overrides the binder-wid
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.binding.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `min.fetch.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
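As an illustration only (the function name and payload type are hypothetical, and the sketch assumes the functional programming model), a batch listener simply declares a `List` payload once batch mode is enabled on its input binding:
[source, java]
----
import java.util.List;
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BatchConsumerConfig {

    // With batch mode enabled on this function's input binding, the binder
    // delivers everything returned by a single poll as one List.
    @Bean
    public Consumer<List<String>> batchIn() {
        return batch -> System.out.println("received a batch of " + batch.size() + " records");
    }
}
----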
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
@@ -310,6 +345,13 @@ sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
@@ -317,7 +359,8 @@ How long the producer waits to allow more messages to accumulate in the same bat
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
headerPatterns::
@@ -352,6 +395,16 @@ Set to `true` to override the default binding destination (topic name) with the
If the header is not present, the default binding destination is used.
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -365,6 +418,12 @@ Supported values are `none`, `gzip`, `snappy` and `lz4`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Usage examples
@@ -526,6 +585,60 @@ public class Application {
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions because the retries will run in the original transaction, which may be rolled back and any published records will be rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
@Value("${unique.tx.id.per.instance}") String txId) {
ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
MessageChannel.class)).getTransactionalProducerFactory();
KafkaTransactionManager tm = new KafkaTransactionManager<>(pf);
tm.setTransactionIdPrefix(txId);
return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then you would use normal Spring transaction support, e.g. `TransactionTemplate` or `@Transactional`, for example:
====
[source, java]
----
public static class Sender {
@Transactional
public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
}
}
----
====
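A programmatic variant of the same idea, sketched with a `TransactionTemplate` that is assumed to wrap the `KafkaTransactionManager` defined earlier:
====
[source, java]
----
import java.util.List;

import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.transaction.support.TransactionTemplate;

public class TemplateSender {

    // transactionTemplate is assumed to be built with the KafkaTransactionManager above
    public void doInTransaction(TransactionTemplate transactionTemplate,
            MessageChannel output, List<String> stuffToSend) {
        transactionTemplate.executeWithoutResult(status ->
                stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff))));
    }
}
----
====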
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels


@@ -7,7 +7,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M2</version>
<version>3.1.0.M1</version>
</parent>
<packaging>pom</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>
@@ -15,311 +15,50 @@
<properties>
<docs.main>spring-cloud-stream-binder-kafka</docs.main>
<main.basedir>${basedir}/..</main.basedir>
<spring-doc-resources.version>0.1.1.RELEASE</spring-doc-resources.version>
<spring-asciidoctor-extensions.version>0.1.0.RELEASE
</spring-asciidoctor-extensions.version>
<asciidoctorj-pdf.version>1.5.0-alpha.16</asciidoctorj-pdf.version>
<maven.plugin.plugin.version>3.4</maven.plugin.plugin.version>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.8.2</version>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>docs</id>
<build>
<plugins>
<plugin>
<groupId>pl.project13.maven</groupId>
<artifactId>git-commit-id-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>${maven-dependency-plugin.version}</version>
<inherited>false</inherited>
<executions>
<execution>
<id>unpack-docs</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>org.springframework.cloud
</groupId>
<artifactId>spring-cloud-build-docs
</artifactId>
<version>${spring-cloud-build.version}
</version>
<classifier>sources</classifier>
<type>jar</type>
<overWrite>false</overWrite>
<outputDirectory>${docs.resources.dir}
</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
<execution>
<id>unpack-docs-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>io.spring.docresources</groupId>
<artifactId>spring-doc-resources</artifactId>
<version>${spring-doc-resources.version}</version>
<type>zip</type>
<overWrite>true</overWrite>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<executions>
<execution>
<id>copy-asciidoc-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
<resources>
<resource>
<directory>src/main/asciidoc</directory>
<filtering>false</filtering>
<excludes>
<exclude>ghpages.sh</exclude>
</excludes>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>${asciidoctor-maven-plugin.version}</version>
<inherited>false</inherited>
<dependencies>
<dependency>
<groupId>io.spring.asciidoctor</groupId>
<artifactId>spring-asciidoctor-extensions</artifactId>
<version>${spring-asciidoctor-extensions.version}</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj-pdf</artifactId>
<version>${asciidoctorj-pdf.version}</version>
</dependency>
</dependencies>
<configuration>
<sourceDirectory>${project.build.directory}/refdocs/</sourceDirectory>
<sourceDocumentName>${docs.main}.adoc</sourceDocumentName>
<attributes>
<spring-cloud-stream-version>${project.version}</spring-cloud-stream-version>
<!-- <docs-url>https://cloud.spring.io/</docs-url> -->
<!-- <docs-version></docs-version> -->
<docs-version>${project.version}/</docs-version>
<docs-url>https://cloud.spring.io/spring-cloud-static/</docs-url>
</attributes>
<spring-cloud-stream-version>${project.version}</spring-cloud-stream-version>
</attributes>
</configuration>
<executions>
<execution>
<id>generate-html-documentation</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<backend>html5</backend>
<sourceHighlighter>highlight.js</sourceHighlighter>
<doctype>book</doctype>
<attributes>
// these attributes are required to use the doc resources
<docinfo>shared</docinfo>
<stylesdir>css/</stylesdir>
<stylesheet>spring.css</stylesheet>
<linkcss>true</linkcss>
<icons>font</icons>
<highlightjsdir>js/highlight</highlightjsdir>
<highlightjs-theme>atom-one-dark-reasonable</highlightjs-theme>
<allow-uri-read>true</allow-uri-read>
<nofooter />
<toc>left</toc>
<toc-levels>4</toc-levels>
<spring-cloud-version>${project.version}</spring-cloud-version>
<sectlinks>true</sectlinks>
</attributes>
<outputFile>${docs.main}.html</outputFile>
</configuration>
</execution>
<execution>
<id>generate-docbook</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
<execution>
<id>generate-index</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>${maven-antrun-plugin.version}</version>
<dependencies>
<dependency>
<groupId>ant-contrib</groupId>
<artifactId>ant-contrib</artifactId>
<version>1.0b3</version>
<exclusions>
<exclusion>
<groupId>ant</groupId>
<artifactId>ant</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.ant</groupId>
<artifactId>ant-nodeps</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>org.tigris.antelope</groupId>
<artifactId>antelopetasks</artifactId>
<version>3.2.10</version>
</dependency>
<dependency>
<groupId>org.jruby</groupId>
<artifactId>jruby-complete</artifactId>
<version>1.7.17</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj</artifactId>
<version>1.5.8</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>readme</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<java classname="org.jruby.Main" failonerror="yes">
<arg
value="${docs.resources.dir}/ruby/generate_readme.sh" />
<arg value="-o" />
<arg value="${main.basedir}/README.adoc" />
</java>
</target>
</configuration>
</execution>
<execution>
<id>assert-no-unresolved-links</id>
<phase>prepare-package</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<fileset id="unresolved.file"
dir="${basedir}/target/generated-docs/" includes="**/*.html">
<contains text="Unresolved" />
</fileset>
<fail message="[Unresolved] Found...failing">
<condition>
<resourcecount when="greater" count="0"
refid="unresolved.file" />
</condition>
</fail>
</target>
</configuration>
</execution>
<execution>
<id>setup-maven-properties</id>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<exportAntProperties>true</exportAntProperties>
<target>
<taskdef
resource="net/sf/antcontrib/antcontrib.properties" />
<taskdef name="stringutil"
classname="ise.antelope.tasks.StringUtilTask" />
<var name="version-type" value="${project.version}" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp=".*\.(.*)"
replace="\1" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(M)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(RC)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="BUILD-(.*)"
replace="SNAPSHOT" />
<stringutil string="${version-type}"
property="spring-cloud-repo">
<lowercase />
</stringutil>
<var name="github-tag" value="v${project.version}" />
<propertyregex property="github-tag"
override="true" input="${github-tag}" regexp=".*SNAPSHOT"
replace="master" />
</target>
</configuration>
</execution>
<execution>
<id>copy-css</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>generate-documentation-index</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>copy-generated-html</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<inherited>false</inherited>
</plugin>
</plugins>
</build>

View File

@@ -1,7 +1,34 @@
[[kafka-dlq-processing]]
=== Dead-Letter Topic Processing
Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
[[dlq-partition-selection]]
==== Dead-Letter Topic Partition Selection
By default, records are published to the Dead-Letter topic using the same partition as the original record.
This means the Dead-Letter topic must have at least as many partitions as the original topic.
To change this behavior, add a `DlqPartitionFunction` implementation as a `@Bean` to the application context.
Only one such bean can be present.
The function is provided with the consumer group, the failed `ConsumerRecord` and the exception.
For example, if you always want to route to partition 0, you might use:
====
[source, java]
----
@Bean
public DlqPartitionFunction partitionFunction() {
return (group, record, ex) -> 0;
}
----
====
NOTE: If you set a consumer binding's `dlqPartitions` property to 1 (and the binder's `minPartitionCount` is equal to `1`), there is no need to supply a `DlqPartitionFunction`; the framework will always use partition 0.
If you set a consumer binding's `dlqPartitions` property to a value greater than `1` (or the binder's `minPartitionCount` is greater than `1`), you **must** provide a `DlqPartitionFunction` bean, even if the partition count is the same as the original topic's.
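The exception is also available to the function, so routing can be based on the failure type. The following is a minimal sketch assuming, purely for illustration, that deserialization failures should go to partition 0 and all other failures to partition 1 (both partitions must exist on the DLQ topic; `DeserializationException` here is the Spring Kafka class from `org.springframework.kafka.support.serializer`):
====
[source, java]
----
@Bean
public DlqPartitionFunction partitionFunction() {
    // route by failure type; assumes the DLQ topic has at least two partitions
    return (group, record, ex) ->
            ex instanceof DeserializationException ? 0 : 1;
}
----
====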
[[dlq-handling]]
==== Handling Records in a Dead-Letter Topic
Because the framework cannot anticipate how users would want to dispose of dead-lettered messages, it does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a "`parking lot`" topic after three attempts.
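The following is only a rough sketch of that idea, not the sample application itself; it assumes a functional binding that consumes the DLQ topic, a hypothetical `x-retries` header used to count attempts, illustrative topic names, and that the output destination is resolved from the `spring.cloud.stream.sendto.destination` header:
====
[source, java]
----
@Bean
public Function<Message<byte[]>, Message<byte[]>> reRouter() {
    return failed -> {
        Integer retries = failed.getHeaders().get("x-retries", Integer.class); // hypothetical counter header
        int attempt = retries == null ? 1 : retries + 1;
        String target = attempt > 3 ? "parkingLot-topic" : "original-topic";   // illustrative topic names
        return MessageBuilder.fromMessage(failed)
                .setHeader("x-retries", attempt)
                .setHeader("spring.cloud.stream.sendto.destination", target)
                .build();
    };
}
----
====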

Binary image file added (233 KiB).

Large file diff not shown.

View File

@@ -19,7 +19,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
@@ -40,7 +40,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -135,14 +135,15 @@ Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
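As a rough sketch (not taken from the reference documentation), such a bean might look like the following; the bean name `kafkaBinderHeaderMapper` is the fallback name the binder looks for, and the trusted package is purely illustrative:
====
[source, java]
----
@Bean("kafkaBinderHeaderMapper")
public BinderHeaderMapper kafkaBinderHeaderMapper() {
    BinderHeaderMapper mapper = new BinderHeaderMapper();
    mapper.addTrustedPackages("com.example.events"); // hypothetical application package
    return mapper;
}
----
====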
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
@@ -207,9 +208,20 @@ The DLQ topic name can be configurable by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to having Kafka consumer properties, other configuration properties can be passed here.
@@ -223,6 +235,8 @@ Default: null (If not specified, messages that result in errors are forwarded to
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (that is, `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
These must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
@@ -263,11 +277,32 @@ The replication factor to use when provisioning topics. Overrides the binder-wid
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.bindings.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by the Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, and `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
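A minimal sketch of the customizer approach mentioned above might look like the following (the container generic types and the choice of error handler are assumptions of the example, not binder defaults):
====
[source, java]
----
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer() {
    // replace the default error handling with batch-level "seek to current" retries
    return (container, destinationName, group) ->
            container.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
}
----
====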
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
@@ -290,6 +325,13 @@ sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
@@ -297,7 +339,8 @@ How long the producer waits to allow more messages to accumulate in the same bat
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
headerPatterns::
@@ -332,6 +375,16 @@ Set to `true` to override the default binding destination (topic name) with the
If the header is not present, the default binding destination is used.
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);`
Failed sends go the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
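To illustrate the `recordMetadataChannel` described above, a consumer for that channel might look like the following minimal sketch (the channel name `metaChannel` is an assumption of the example):
====
[source, java]
----
@ServiceActivator(inputChannel = "metaChannel") // matches recordMetadataChannel: metaChannel
public void listen(Message<?> sent) {
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    System.out.println("Written to " + meta.topic() + "-" + meta.partition() + "@" + meta.offset());
}
----
====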
@@ -345,6 +398,12 @@ Supported values are `none`, `gzip`, `snappy` and `lz4`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Usage examples
@@ -506,6 +565,60 @@ public class Application {
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions, because the retries run in the original transaction, which may be rolled back; in that case, any published records are rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (for example, in a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then use normal Spring transaction support, such as `TransactionTemplate` or `@Transactional`, as the following example shows:
====
[source, java]
----
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }

}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
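For example, a minimal sketch that chains the Kafka transaction manager defined earlier with a JPA transaction manager might look like the following (both injected beans are assumptions of the example):
====
[source, java]
----
@Bean
public ChainedTransactionManager chainedTransactionManager(JpaTransactionManager jpaTm,
        KafkaTransactionManager<byte[], byte[]> kafkaTm) {
    // transactions are started in the declared order and completed in reverse order
    return new ChainedTransactionManager(jpaTm, kafkaTm);
}
----
====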
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels

View File

@@ -34,8 +34,6 @@ Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinat
*{spring-cloud-stream-version}*
[#index-link]
{docs-url}spring-cloud-stream/{docs-version}home.html
= Reference Guide
include::overview.adoc[]

36
mvnw vendored
View File

@@ -8,7 +8,7 @@
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
@@ -19,7 +19,7 @@
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# Maven2 Start Up Batch script
# Maven Start Up Batch script
#
# Required ENV vars:
# ------------------
@@ -114,7 +114,6 @@ if $mingw ; then
M2_HOME="`(cd "$M2_HOME"; pwd)`"
[ -n "$JAVA_HOME" ] &&
JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
# TODO classpath?
fi
if [ -z "$JAVA_HOME" ]; then
@@ -212,7 +211,11 @@ else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
fi
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
if [ -n "$MVNW_REPOURL" ]; then
jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
else
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
fi
while IFS="=" read key value; do
case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
esac
@@ -221,22 +224,38 @@ else
echo "Downloading from: $jarUrl"
fi
wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
if $cygwin; then
wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"`
fi
if command -v wget > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found wget ... using wget"
fi
wget "$jarUrl" -O "$wrapperJarPath"
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
wget "$jarUrl" -O "$wrapperJarPath"
else
wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath"
fi
elif command -v curl > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found curl ... using curl"
fi
curl -o "$wrapperJarPath" "$jarUrl"
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
curl -o "$wrapperJarPath" "$jarUrl" -f
else
curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Falling back to using Java to download"
fi
javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
# For Cygwin, switch paths to Windows format before running javac
if $cygwin; then
javaClass=`cygpath --path --windows "$javaClass"`
fi
if [ -e "$javaClass" ]; then
if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
@@ -277,6 +296,11 @@ if $cygwin; then
MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
fi
# Provide a "standardized" way to retrieve the CLI args that will
# work with both Windows and non-Windows executions.
MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
export MAVEN_CMD_LINE_ARGS
WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
exec "$JAVACMD" \

343
mvnw.cmd vendored Executable file → Normal file
View File

@@ -1,161 +1,182 @@
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM https://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven2 Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a key stroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM set title of command window
title %0
@REM enable echoing my setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
@setlocal
set ERROR_CODE=0
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
@REM ==== END VALIDATION ====
:init
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
:endDetectBaseDir
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
FOR /F "tokens=1,2 delims==" %%A IN (%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties) DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
echo Found %WRAPPER_JAR%
) else (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
powershell -Command "(New-Object Net.WebClient).DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"
echo Finished downloading %WRAPPER_JAR%
)
@REM End of extension
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM http://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM set title of command window
title %0
@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
@setlocal
set ERROR_CODE=0
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
@REM ==== END VALIDATION ====
:init
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
:endDetectBaseDir
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
FOR /F "tokens=1,2 delims==" %%A IN ("%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties") DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
if "%MVNW_VERBOSE%" == "true" (
echo Found %WRAPPER_JAR%
)
) else (
if not "%MVNW_REPOURL%" == "" (
SET DOWNLOAD_URL="%MVNW_REPOURL%/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
)
if "%MVNW_VERBOSE%" == "true" (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
)
powershell -Command "&{"^
"$webclient = new-object System.Net.WebClient;"^
"if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {"^
"$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');"^
"}"^
"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"^
"}"
if "%MVNW_VERBOSE%" == "true" (
echo Finished downloading %WRAPPER_JAR%
)
)
@REM End of extension
@REM Provide a "standardized" way to retrieve the CLI args that will
@REM work with both Windows and non-Windows executions.
set MAVEN_CMD_LINE_ARGS=%*
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%

21
pom.xml
View File

@@ -2,20 +2,21 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M2</version>
<version>3.1.0.M1</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>2.2.0.M3</version>
<version>3.0.0.M1</version>
<relativePath />
</parent>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.2.5.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.2.0.M3</spring-integration-kafka.version>
<kafka.version>2.0.0</kafka.version>
<spring-cloud-stream.version>3.0.0.M2</spring-cloud-stream.version>
<spring-kafka.version>2.4.4.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.2.1.RELEASE</spring-integration-kafka.version>
<kafka.version>2.4.0</kafka.version>
<spring-cloud-schema-registry.version>1.1.0.M1</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.1.0.M1</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
@@ -113,8 +114,8 @@
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<version>${spring-cloud-stream.version}</version>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<version>${spring-cloud-schema-registry.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
@@ -123,6 +124,10 @@
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>

View File

@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M2</version>
<version>3.1.0.M1</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>

View File

@@ -1 +0,0 @@
provides: spring-cloud-starter-stream-kafka

View File

@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M2</version>
<version>3.1.0.M1</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>

View File

@@ -1,39 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Map;
/**
* Properties for configuring topics.
*
* @author Gary Russell
* @since 2.0
* @deprecated in favor of {@link KafkaTopicProperties}
*/
@Deprecated
public class KafkaAdminProperties extends KafkaTopicProperties {
public Map<String, String> getConfiguration() {
return getProperties();
}
public void setConfiguration(Map<String, String> configuration) {
setProperties(configuration);
}
}

View File

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
@@ -32,7 +33,6 @@ import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;
@@ -52,6 +52,7 @@ import org.springframework.util.StringUtils;
* @author Gary Russell
* @author Rafal Zukowski
* @author Aldo Sinanaj
* @author Lukasz Kaminski
*/
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.binder")
public class KafkaBinderConfigurationProperties {
@@ -64,8 +65,6 @@ public class KafkaBinderConfigurationProperties {
private final KafkaProperties kafkaProperties;
private String[] zkNodes = new String[] { "localhost" };
/**
* Arbitrary kafka properties that apply to both producers and consumers.
*/
@@ -81,48 +80,22 @@ public class KafkaBinderConfigurationProperties {
*/
private Map<String, String> producerProperties = new HashMap<>();
private String defaultZkPort = "2181";
private String[] brokers = new String[] { "localhost" };
private String defaultBrokerPort = "9092";
private String[] headers = new String[] {};
private int offsetUpdateTimeWindow = 10000;
private int offsetUpdateCount;
private int offsetUpdateShutdownTimeout = 2000;
private int maxWait = 100;
private boolean autoCreateTopics = true;
private boolean autoAddPartitions;
private int socketBufferSize = 2097152;
/**
* ZK session timeout in milliseconds.
*/
private int zkSessionTimeout = 10000;
/**
* ZK Connection timeout in milliseconds.
*/
private int zkConnectionTimeout = 10000;
private String requiredAcks = "1";
private short replicationFactor = 1;
private int fetchSize = 1024 * 1024;
private int minPartitionCount = 1;
private int queueSize = 8192;
/**
* Time to wait to get partition information in seconds; default 60.
*/
@@ -136,6 +109,13 @@ public class KafkaBinderConfigurationProperties {
*/
private String headerMapperBeanName;
/**
* Time between retries after AuthorizationException is caught in
* the ListenerContainer; default is null, which disables retries.
* For more info see: {@link org.springframework.kafka.listener.ConsumerProperties#setAuthorizationExceptionRetryInterval(java.time.Duration)}
*/
private Duration authorizationExceptionRetryInterval;
public KafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
Assert.notNull(kafkaProperties, "'kafkaProperties' cannot be null");
this.kafkaProperties = kafkaProperties;
@@ -149,17 +129,6 @@ public class KafkaBinderConfigurationProperties {
return this.transaction;
}
/**
* No longer used.
* @return the connection String
* @deprecated connection to zookeeper is no longer necessary
*/
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@Deprecated
public String getZkConnectionString() {
return toConnectionString(this.zkNodes, this.defaultZkPort);
}
public String getKafkaConnectionString() {
return toConnectionString(this.brokers, this.defaultBrokerPort);
}
@@ -172,72 +141,6 @@ public class KafkaBinderConfigurationProperties {
return this.headers;
}
/**
* No longer used.
* @return the window.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateTimeWindow() {
return this.offsetUpdateTimeWindow;
}
/**
* No longer used.
* @return the count.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateCount() {
return this.offsetUpdateCount;
}
/**
* No longer used.
* @return the timeout.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateShutdownTimeout() {
return this.offsetUpdateShutdownTimeout;
}
/**
* Zookeeper nodes.
* @return the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public String[] getZkNodes() {
return this.zkNodes;
}
/**
* Zookeeper nodes.
* @param zkNodes the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkNodes(String... zkNodes) {
this.zkNodes = zkNodes;
}
/**
* Zookeeper port.
* @param defaultZkPort the port.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setDefaultZkPort(String defaultZkPort) {
this.defaultZkPort = defaultZkPort;
}
public String[] getBrokers() {
return this.brokers;
}
@@ -254,83 +157,6 @@ public class KafkaBinderConfigurationProperties {
this.headers = headers;
}
/**
* No longer used.
* @param offsetUpdateTimeWindow the window.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateTimeWindow(int offsetUpdateTimeWindow) {
this.offsetUpdateTimeWindow = offsetUpdateTimeWindow;
}
/**
* No longer used.
* @param offsetUpdateCount the count.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateCount(int offsetUpdateCount) {
this.offsetUpdateCount = offsetUpdateCount;
}
/**
* No longer used.
* @param offsetUpdateShutdownTimeout the timeout.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateShutdownTimeout(int offsetUpdateShutdownTimeout) {
this.offsetUpdateShutdownTimeout = offsetUpdateShutdownTimeout;
}
/**
* Zookeeper session timeout.
* @return the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkSessionTimeout() {
return this.zkSessionTimeout;
}
/**
* Zookeeper session timeout.
* @param zkSessionTimeout the timout
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkSessionTimeout(int zkSessionTimeout) {
this.zkSessionTimeout = zkSessionTimeout;
}
/**
* Zookeeper connection timeout.
* @return the timout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkConnectionTimeout() {
return this.zkConnectionTimeout;
}
/**
* Zookeeper connection timeout.
* @param zkConnectionTimeout the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkConnectionTimeout(int zkConnectionTimeout) {
this.zkConnectionTimeout = zkConnectionTimeout;
}
/**
* Converts an array of host values to a comma-separated String. It will append the
* default port value, if not already specified.
@@ -351,28 +177,6 @@ public class KafkaBinderConfigurationProperties {
return StringUtils.arrayToCommaDelimitedString(fullyFormattedHosts);
}
/**
* No longer used.
* @return the wait.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getMaxWait() {
return this.maxWait;
}
/**
* No longer user.
* @param maxWait the wait.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setMaxWait(int maxWait) {
this.maxWait = maxWait;
}
public String getRequiredAcks() {
return this.requiredAcks;
}
@@ -389,28 +193,6 @@ public class KafkaBinderConfigurationProperties {
this.replicationFactor = replicationFactor;
}
/**
* No longer used.
* @return the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getFetchSize() {
return this.fetchSize;
}
/**
* No longer used.
* @param fetchSize the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setFetchSize(int fetchSize) {
this.fetchSize = fetchSize;
}
public int getMinPartitionCount() {
return this.minPartitionCount;
}
@@ -427,28 +209,6 @@ public class KafkaBinderConfigurationProperties {
this.healthTimeout = healthTimeout;
}
/**
* No longer used.
* @return the queue size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getQueueSize() {
return this.queueSize;
}
/**
* No longer used.
* @param queueSize the queue size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setQueueSize(int queueSize) {
this.queueSize = queueSize;
}
public boolean isAutoCreateTopics() {
return this.autoCreateTopics;
}
@@ -465,30 +225,6 @@ public class KafkaBinderConfigurationProperties {
this.autoAddPartitions = autoAddPartitions;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @return the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public int getSocketBufferSize() {
return this.socketBufferSize;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @param socketBufferSize the size.
* @deprecated No longer used by the binder
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public void setSocketBufferSize(int socketBufferSize) {
this.socketBufferSize = socketBufferSize;
}
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -619,6 +355,14 @@ public class KafkaBinderConfigurationProperties {
this.headerMapperBeanName = headerMapperBeanName;
}
public Duration getAuthorizationExceptionRetryInterval() {
return authorizationExceptionRetryInterval;
}
public void setAuthorizationExceptionRetryInterval(Duration authorizationExceptionRetryInterval) {
this.authorizationExceptionRetryInterval = authorizationExceptionRetryInterval;
}
/**
* Domain class that models transaction capabilities in Kafka.
*/
@@ -800,16 +544,6 @@ public class KafkaBinderConfigurationProperties {
this.kafkaProducerProperties.setConfiguration(configuration);
}
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
return this.kafkaProducerProperties.getAdmin();
}
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.kafkaProducerProperties.setAdmin(admin);
}
public KafkaTopicProperties getTopic() {
return this.kafkaProducerProperties.getTopic();
}

View File

@@ -26,10 +26,20 @@ import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
*/
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider {
/**
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
private KafkaConsumerProperties consumer = new KafkaConsumerProperties();
/**
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
private KafkaProducerProperties producer = new KafkaProducerProperties();
/**
* @return {@link KafkaConsumerProperties}
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
public KafkaConsumerProperties getConsumer() {
return this.consumer;
}
@@ -38,6 +48,10 @@ public class KafkaBindingProperties implements BinderSpecificPropertiesProvider
this.consumer = consumer;
}
/**
* @return {@link KafkaProducerProperties}
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
public KafkaProducerProperties getProducer() {
return this.producer;
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2016-2018 the original author or authors.
* Copyright 2016-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -19,8 +19,6 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
/**
* Extended consumer properties for Kafka binder.
*
@@ -86,40 +84,128 @@ public class KafkaConsumerProperties {
}
/**
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*/
private boolean ackEachRecord;
/**
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
private boolean autoRebalanceEnabled = true;
/**
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment of the type org.springframework.kafka.support.Acknowledgment header
* is present in the inbound message. Applications may use this header for acknowledging messages.
*/
private boolean autoCommitOffset = true;
/**
* Effective only if autoCommitOffset is set to true.
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
* If set to true, it always auto-commits (if auto-commit is enabled).
* If not set (the default), it effectively has the same value as enableDlq,
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
*/
private Boolean autoCommitOnError;
/**
* The starting offset for new groups. Allowed values: earliest and latest.
*/
private StartOffset startOffset;
/**
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
private boolean resetOffsets;
/**
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configurable by setting the dlqName property.
*/
private boolean enableDlq;
/**
* The name of the DLQ topic to receive the error messages.
*/
private String dlqName;
/**
* Number of partitions to use on the DLQ.
*/
private Integer dlqPartitions;
/**
* Using this, DLQ-specific producer properties can be set.
* All the properties available through kafka producer properties can be set through this property.
*/
private KafkaProducerProperties dlqProducerProperties = new KafkaProducerProperties();
/**
* @deprecated No longer used by the binder.
*/
@Deprecated
private int recoveryInterval = 5000;
/**
* List of trusted packages to provide to the header mapper.
*/
private String[] trustedPackages;
/**
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
private StandardHeaders standardHeaders = StandardHeaders.none;
/**
* The name of a bean that implements RecordMessageConverter.
*/
private String converterBeanName;
/**
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
private long idleEventInterval = 30_000;
/**
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
private boolean destinationIsPattern;
/**
* Map of key/value pairs containing generic Kafka consumer properties.
* In addition to Kafka consumer properties, other configuration properties can be passed here.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Various topic level properties. See {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Timeout used for polling in pollable consumers.
*/
private long pollTimeout = org.springframework.kafka.listener.ConsumerProperties.DEFAULT_POLL_TIMEOUT;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/**
* @return whether each record needs to be acknowledged.
*
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*/
public boolean isAckEachRecord() {
return this.ackEachRecord;
}
@@ -128,6 +214,13 @@ public class KafkaConsumerProperties {
this.ackEachRecord = ackEachRecord;
}
/**
* @return whether autocommit of offsets is enabled
*
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment of type org.springframework.kafka.support.Acknowledgment
* is present in the inbound message. Applications may use this header for acknowledging messages.
*/
public boolean isAutoCommitOffset() {
return this.autoCommitOffset;
}
@@ -136,6 +229,11 @@ public class KafkaConsumerProperties {
this.autoCommitOffset = autoCommitOffset;
}
/**
* @return start offset
*
* The starting offset for new groups. Allowed values: earliest and latest.
*/
public StartOffset getStartOffset() {
return this.startOffset;
}
@@ -144,6 +242,12 @@ public class KafkaConsumerProperties {
this.startOffset = startOffset;
}
/**
* @return whether resetting offsets is enabled
*
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
public boolean isResetOffsets() {
return this.resetOffsets;
}
@@ -152,6 +256,13 @@ public class KafkaConsumerProperties {
this.resetOffsets = resetOffsets;
}
/**
* @return whether DLQ is enabled.
*
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configured by setting the dlqName property.
*/
public boolean isEnableDlq() {
return this.enableDlq;
}
@@ -160,6 +271,16 @@ public class KafkaConsumerProperties {
this.enableDlq = enableDlq;
}
/**
* @return whether to autocommit on error
*
* Effective only if autoCommitOffset is set to true.
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
* If set to true, it always auto-commits (if auto-commit is enabled).
* If not set (the default), it effectively has the same value as enableDlq,
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
*/
public Boolean getAutoCommitOnError() {
return this.autoCommitOnError;
}
@@ -188,6 +309,12 @@ public class KafkaConsumerProperties {
this.recoveryInterval = recoveryInterval;
}
/**
* @return whether auto rebalance is enabled
*
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
public boolean isAutoRebalanceEnabled() {
return this.autoRebalanceEnabled;
}
@@ -196,6 +323,12 @@ public class KafkaConsumerProperties {
this.autoRebalanceEnabled = autoRebalanceEnabled;
}
/**
* @return the configuration map
*
* Map of key/value pairs containing generic Kafka consumer properties.
* In addition to Kafka consumer properties, other configuration properties can be passed here.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -204,6 +337,11 @@ public class KafkaConsumerProperties {
this.configuration = configuration;
}
/**
* @return dlq name
*
* The name of the DLQ topic to receive the error messages.
*/
public String getDlqName() {
return this.dlqName;
}
@@ -212,6 +350,24 @@ public class KafkaConsumerProperties {
this.dlqName = dlqName;
}
/**
* @return number of partitions on the DLQ topic
*
* Number of partitions to use on the DLQ.
*/
public Integer getDlqPartitions() {
return this.dlqPartitions;
}
public void setDlqPartitions(Integer dlqPartitions) {
this.dlqPartitions = dlqPartitions;
}
/**
* @return trusted packages
*
* List of trusted packages to provide to the header mapper.
*/
public String[] getTrustedPackages() {
return this.trustedPackages;
}
@@ -220,6 +376,12 @@ public class KafkaConsumerProperties {
this.trustedPackages = trustedPackages;
}
/**
* @return dlq producer properties
*
* Using this, DLQ-specific producer properties can be set.
* All the properties available through kafka producer properties can be set through this property.
*/
public KafkaProducerProperties getDlqProducerProperties() {
return this.dlqProducerProperties;
}
@@ -228,6 +390,12 @@ public class KafkaConsumerProperties {
this.dlqProducerProperties = dlqProducerProperties;
}
/**
* @return standard headers
*
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
public StandardHeaders getStandardHeaders() {
return this.standardHeaders;
}
@@ -236,6 +404,11 @@ public class KafkaConsumerProperties {
this.standardHeaders = standardHeaders;
}
/**
* @return converter bean name
*
* The name of a bean that implements RecordMessageConverter.
*/
public String getConverterBeanName() {
return this.converterBeanName;
}
@@ -244,6 +417,11 @@ public class KafkaConsumerProperties {
this.converterBeanName = converterBeanName;
}
/**
* @return idle event interval
*
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
public long getIdleEventInterval() {
return this.idleEventInterval;
}
@@ -252,6 +430,11 @@ public class KafkaConsumerProperties {
this.idleEventInterval = idleEventInterval;
}
/**
* @return whether the destination is given as a pattern
*
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
public boolean isDestinationIsPattern() {
return this.destinationIsPattern;
}
@@ -261,28 +444,10 @@ public class KafkaConsumerProperties {
}
/**
* No longer used; get properties such as this via {@link #getTopic()}.
* @return Kafka admin properties
* @deprecated No longer used
* @return topic properties
*
* Various topic level properties. See {@link KafkaTopicProperties} for more details.
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
// Temporary workaround to copy the topic properties to the admin one.
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
return kafkaAdminProperties;
}
@Deprecated
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.topic = admin;
}
public KafkaTopicProperties getTopic() {
return this.topic;
}
@@ -291,4 +456,30 @@ public class KafkaConsumerProperties {
this.topic = topic;
}
/**
* @return timeout in pollable consumers
*
* Timeout used for polling in pollable consumers.
*/
public long getPollTimeout() {
return this.pollTimeout;
}
public void setPollTimeout(long pollTimeout) {
this.pollTimeout = pollTimeout;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
}
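Taken together, the Javadoc above documents the consumer-side DLQ and commit knobs. The following is a minimal sketch, not part of this change set, that populates a few of these properties programmatically using only setters shown in this diff; the DLQ topic name is a hypothetical example and such values are normally supplied through binding configuration rather than in code.
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
// Hypothetical sketch: these values are normally set via binding configuration,
// not constructed by hand; "my-dlq-topic" is an example name.
public class ConsumerPropertiesSketch {
    public static KafkaConsumerProperties dlqEnabledConsumer() {
        KafkaConsumerProperties consumer = new KafkaConsumerProperties();
        consumer.setAutoCommitOffset(true);   // commit offsets automatically after processing
        consumer.setAckEachRecord(true);      // commit per record instead of per poll batch
        consumer.setEnableDlq(true);          // forward failed records to a DLQ
        consumer.setDlqName("my-dlq-topic");  // otherwise defaults to error.<destination>.<group>
        consumer.setDlqPartitions(1);         // provision a single-partition DLQ
        return consumer;
    }
}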

View File

@@ -21,7 +21,6 @@ import java.util.Map;
import javax.validation.constraints.NotNull;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.expression.Expression;
/**
@@ -34,24 +33,77 @@ import org.springframework.expression.Expression;
*/
public class KafkaProducerProperties {
/**
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
private int bufferSize = 16384;
/**
* Set the compression.type producer property. Supported values are none, gzip, snappy and lz4.
* See {@link CompressionType} for more details.
*/
private CompressionType compressionType = CompressionType.none;
/**
* Whether the producer is synchronous.
*/
private boolean sync;
/**
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
private Expression sendTimeoutExpression;
/**
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
private int batchTimeout;
/**
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
private Expression messageKeyExpression;
/**
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
private String[] headerPatterns;
/**
* Map of key/value pairs containing generic Kafka producer properties.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Various topic level properties. See {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
private boolean useTopicHeader;
/**
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
private String recordMetadataChannel;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/**
* @return buffer size
*
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
public int getBufferSize() {
return this.bufferSize;
}
@@ -60,6 +112,12 @@ public class KafkaProducerProperties {
this.bufferSize = bufferSize;
}
/**
* @return compression type {@link CompressionType}
*
* Set the compression.type producer property. Supported values are none, gzip, snappy and lz4.
* See {@link CompressionType} for more details.
*/
@NotNull
public CompressionType getCompressionType() {
return this.compressionType;
@@ -69,6 +127,11 @@ public class KafkaProducerProperties {
this.compressionType = compressionType;
}
/**
* @return whether synchronous sending is enabled
*
* Whether the producer is synchronous.
*/
public boolean isSync() {
return this.sync;
}
@@ -77,6 +140,25 @@ public class KafkaProducerProperties {
this.sync = sync;
}
/**
* @return timeout expression for send
*
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
public Expression getSendTimeoutExpression() {
return this.sendTimeoutExpression;
}
public void setSendTimeoutExpression(Expression sendTimeoutExpression) {
this.sendTimeoutExpression = sendTimeoutExpression;
}
/**
* @return batch timeout
*
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
public int getBatchTimeout() {
return this.batchTimeout;
}
@@ -85,6 +167,11 @@ public class KafkaProducerProperties {
this.batchTimeout = batchTimeout;
}
/**
* @return message key expression
*
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
public Expression getMessageKeyExpression() {
return this.messageKeyExpression;
}
@@ -93,6 +180,12 @@ public class KafkaProducerProperties {
this.messageKeyExpression = messageKeyExpression;
}
/**
* @return header patterns
*
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
public String[] getHeaderPatterns() {
return this.headerPatterns;
}
@@ -101,6 +194,11 @@ public class KafkaProducerProperties {
this.headerPatterns = headerPatterns;
}
/**
* @return the configuration map
*
* Map of key/value pairs containing generic Kafka producer properties.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
@@ -110,28 +208,10 @@ public class KafkaProducerProperties {
}
/**
* No longer used; get properties such as this via {@link #getTopic()}.
* @return Kafka admin properties
* @deprecated No longer used
* @return topic properties
*
* Various topic level properties. See {@link KafkaTopicProperties} for more details.
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
@SuppressWarnings("deprecation")
public KafkaAdminProperties getAdmin() {
// Temporary workaround to copy the topic properties to the admin one.
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
return kafkaAdminProperties;
}
@Deprecated
@SuppressWarnings("deprecation")
public void setAdmin(KafkaAdminProperties admin) {
this.topic = admin;
}
public KafkaTopicProperties getTopic() {
return this.topic;
}
@@ -140,6 +220,13 @@ public class KafkaProducerProperties {
this.topic = topic;
}
/**
* @return whether the topic header is used
*
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
public boolean isUseTopicHeader() {
return this.useTopicHeader;
}
@@ -148,6 +235,33 @@ public class KafkaProducerProperties {
this.useTopicHeader = useTopicHeader;
}
/**
* @return record metadata channel
*
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
public String getRecordMetadataChannel() {
return this.recordMetadataChannel;
}
public void setRecordMetadataChannel(String recordMetadataChannel) {
this.recordMetadataChannel = recordMetadataChannel;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
/**
* Enumeration for compression types.
*/
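For the producer side, a similar minimal sketch, not part of this change set, using only setters shown in this diff; "sendResults" and "customKafkaTm" are hypothetical bean names assumed to exist in the application context.
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.expression.spel.standard.SpelExpressionParser;
// Hypothetical sketch: "sendResults" and "customKafkaTm" are assumed bean names,
// not names defined anywhere in this change set.
public class ProducerPropertiesSketch {
    public static KafkaProducerProperties synchronousProducer() {
        KafkaProducerProperties producer = new KafkaProducerProperties();
        producer.setSync(true);                                       // block until the send is acknowledged
        producer.setSendTimeoutExpression(
                new SpelExpressionParser().parseExpression("10000")); // wait up to 10 seconds for the ack
        producer.setUseTopicHeader(true);                             // honor KafkaHeaders.TOPIC when present
        producer.setRecordMetadataChannel("sendResults");             // MessageChannel bean for send results
        producer.setTransactionManager("customKafkaTm");              // KafkaAwareTransactionManager bean name
        return producer;
    }
}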

View File

@@ -81,9 +81,9 @@ public class KafkaTopicProvisioner implements
// @checkstyle:on
InitializingBean {
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
private static final Log logger = LogFactory.getLog(KafkaTopicProvisioner.class);
private final Log logger = LogFactory.getLog(getClass());
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
private final KafkaBinderConfigurationProperties configurationProperties;
@@ -93,6 +93,12 @@ public class KafkaTopicProvisioner implements
private RetryOperations metadataRetryOperations;
/**
* Create an instance.
* @param kafkaBinderConfigurationProperties the binder configuration properties.
* @param kafkaProperties the boot Kafka properties used to build the
* {@link AdminClient}.
*/
public KafkaTopicProvisioner(
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
@@ -112,7 +118,7 @@ public class KafkaTopicProvisioner implements
}
@Override
public void afterPropertiesSet() throws Exception {
public void afterPropertiesSet() {
if (this.metadataRetryOperations == null) {
RetryTemplate retryTemplate = new RetryTemplate();
@@ -133,8 +139,8 @@ public class KafkaTopicProvisioner implements
public ProducerDestination provisionProducerDestination(final String name,
ExtendedProducerProperties<KafkaProducerProperties> properties) {
if (this.logger.isInfoEnabled()) {
this.logger.info("Using kafka topic for outbound: " + name);
if (logger.isInfoEnabled()) {
logger.info("Using kafka topic for outbound: " + name);
}
KafkaTopicUtils.validateTopicName(name);
try (AdminClient adminClient = AdminClient.create(this.adminClientProperties)) {
@@ -185,8 +191,8 @@ public class KafkaTopicProvisioner implements
if (properties.getExtension().isDestinationIsPattern()) {
Assert.isTrue(!properties.getExtension().isEnableDlq(),
"enableDLQ is not allowed when listening to topic patterns");
if (this.logger.isDebugEnabled()) {
this.logger.debug("Listening to a topic pattern - " + name
if (logger.isDebugEnabled()) {
logger.debug("Listening to a topic pattern - " + name
+ " - no provisioning performed");
}
return new KafkaConsumerDestination(name);
@@ -242,7 +248,7 @@ public class KafkaTopicProvisioner implements
* @param bootProps the boot kafka properties.
* @param binderProps the binder kafka properties.
*/
private void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
public static void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
KafkaProperties bootProps, KafkaBinderConfigurationProperties binderProps) {
// First deal with the outlier
String kafkaConnectionString = binderProps.getKafkaConnectionString();
@@ -263,8 +269,8 @@ public class KafkaTopicProvisioner implements
}
if (adminConfigNames.contains(key)) {
Object replaced = adminProps.put(key, value);
if (replaced != null && this.logger.isDebugEnabled()) {
this.logger.debug("Overrode boot property: [" + key + "], from: ["
if (replaced != null && KafkaTopicProvisioner.logger.isDebugEnabled()) {
KafkaTopicProvisioner.logger.debug("Overrode boot property: [" + key + "], from: ["
+ replaced + "] to: [" + value + "]");
}
}
@@ -274,12 +280,16 @@ public class KafkaTopicProvisioner implements
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name,
String group, ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
if (properties.getExtension().isEnableDlq() && !anonymous) {
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName())
? properties.getExtension().getDlqName()
: "error." + name + "." + group;
int dlqPartitions = properties.getExtension().getDlqPartitions() == null
? partitions
: properties.getExtension().getDlqPartitions();
try {
createTopicAndPartitions(adminClient, dlqTopic, partitions,
createTopicAndPartitions(adminClient, dlqTopic, dlqPartitions,
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
}
@@ -326,7 +336,7 @@ public class KafkaTopicProvisioner implements
tolerateLowerPartitionsOnBroker, properties);
}
else {
this.logger.info("Auto creation of topics is disabled.");
logger.info("Auto creation of topics is disabled.");
}
}
@@ -373,7 +383,7 @@ public class KafkaTopicProvisioner implements
partitions.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
else if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: "
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead." + "There will be "
@@ -422,18 +432,18 @@ public class KafkaTopicProvisioner implements
catch (Exception ex) {
if (ex instanceof ExecutionException) {
if (ex.getCause() instanceof TopicExistsException) {
if (this.logger.isWarnEnabled()) {
this.logger.warn("Attempt to create topic: " + topicName
if (logger.isWarnEnabled()) {
logger.warn("Attempt to create topic: " + topicName
+ ". Topic already exists.");
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
else {
this.logger.error("Failed to create topics", ex.getCause());
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
@@ -442,6 +452,14 @@ public class KafkaTopicProvisioner implements
}
}
/**
* Check that the topic has the expected number of partitions and return the partition information.
* @param partitionCount the expected count.
* @param tolerateLowerPartitionsOnBroker if false, throw an exception if there are not enough partitions.
* @param callable a Callable that will provide the partition information.
* @param topicName the topic.
* @return the partition information.
*/
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable, final String topicName) {
@@ -461,7 +479,7 @@ public class KafkaTopicProvisioner implements
if (ex instanceof UnknownTopicOrPartitionException) {
throw ex;
}
this.logger.error("Failed to obtain partition information", ex);
logger.error("Failed to obtain partition information", ex);
}
// In some cases, the above partition query may not throw an UnknownTopic..Exception for various reasons.
// For that, we are forcing another query to ensure that the topic is present on the server.
@@ -488,7 +506,7 @@ public class KafkaTopicProvisioner implements
int partitionSize = CollectionUtils.isEmpty(partitions) ? 0 : partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
this.logger.warn("The number of expected partitions was: "
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead." + "There will be "
@@ -506,7 +524,7 @@ public class KafkaTopicProvisioner implements
});
}
catch (Exception ex) {
this.logger.error("Cannot initialize Binder", ex);
logger.error("Cannot initialize Binder", ex);
throw new BinderException("Cannot initialize binder:", ex);
}
}
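The newly documented getPartitionsForTopic helper takes a Callable that supplies the partition metadata. The following is a hedged usage sketch, assuming a provisioner instance, a Kafka Consumer and a topic name are provided by the surrounding application; none of these are defined in this change set.
import java.util.Collection;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
// Hypothetical caller: the provisioner, consumer and topic are assumed to be
// supplied by the surrounding application.
final class PartitionLookupSketch {
    private PartitionLookupSketch() {
    }
    static Collection<PartitionInfo> lookup(KafkaTopicProvisioner provisioner,
            Consumer<?, ?> consumer, String topic) {
        return provisioner.getPartitionsForTopic(
                3,                                   // expected partition count
                true,                                // tolerate fewer partitions on the broker
                () -> consumer.partitionsFor(topic), // Callable supplying the partition metadata
                topic);
    }
}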

View File

@@ -0,0 +1,76 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.utils;
import org.apache.commons.logging.Log;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.lang.Nullable;
/**
* A TriFunction that takes a consumer group, consumer record, and throwable, and returns
* the partition of the dead letter topic to publish to. Returning {@code null} means Kafka
* will choose the partition.
*
* @author Gary Russell
* @since 3.0
*
*/
@FunctionalInterface
public interface DlqPartitionFunction {
/**
* Returns the same partition as the original record.
*/
DlqPartitionFunction ORIGINAL_PARTITION = (group, rec, ex) -> rec.partition();
/**
* Returns 0.
*/
DlqPartitionFunction PARTITION_ZERO = (group, rec, ex) -> 0;
/**
* Apply the function.
* @param group the consumer group.
* @param record the consumer record.
* @param throwable the exception.
* @return the DLQ partition, or null.
*/
@Nullable
Integer apply(String group, ConsumerRecord<?, ?> record, Throwable throwable);
/**
* Determine the fallback function to use based on the dlq partition count if no
* {@link DlqPartitionFunction} bean is provided.
* @param dlqPartitions the partition count.
* @param logger the logger.
* @return the fallback.
*/
static DlqPartitionFunction determineFallbackFunction(@Nullable Integer dlqPartitions, Log logger) {
if (dlqPartitions == null) {
return ORIGINAL_PARTITION;
}
else if (dlqPartitions > 1) {
logger.error("'dlqPartitions' is > 1 but a custom DlqPartitionFunction bean is not provided");
return ORIGINAL_PARTITION;
}
else {
return PARTITION_ZERO;
}
}
}
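Since the fallback above applies only when no DlqPartitionFunction bean is present, an application can take over the routing decision by declaring its own bean. The following is a minimal sketch; "critical-group" is a hypothetical group name used only to illustrate the decision.
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
// Hypothetical application configuration; "critical-group" is an example group name.
@Configuration
public class DlqPartitionConfig {
    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        // Keep the original partition for the critical group, send everything else to partition 0.
        // Returning null instead would let Kafka choose the partition.
        return (group, record, throwable) ->
                "critical-group".equals(group) ? record.partition() : 0;
    }
}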

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M2</version>
<version>3.1.0.M1</version>
</parent>
<properties>
@@ -54,13 +54,6 @@
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
</dependency>
<!-- Added back since Kafka still depends on it, but it has been removed by Boot due to EOL -->
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
@@ -74,13 +67,13 @@
<!-- Following dependencies are needed to support Kafka 1.1.0 client-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
@@ -88,7 +81,7 @@
<!-- Following dependencies are only provided for testing and won't be packaged with the binder apps-->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<scope>test</scope>
</dependency>
<dependency>
@@ -121,4 +114,4 @@
</plugin>
</plugins>
</build>
</project>
</project>

View File

@@ -16,8 +16,9 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Arrays;
import java.util.Map;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
@@ -27,12 +28,16 @@ import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
@@ -40,6 +45,7 @@ import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.config.BindingProperties;
@@ -48,11 +54,17 @@ import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
@@ -64,17 +76,21 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
private static final Log LOG = LogFactory.getLog(AbstractKafkaStreamsBinderProcessor.class);
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final CleanupConfig cleanupConfig;
private final KeyValueSerdeResolver keyValueSerdeResolver;
protected ConfigurableApplicationContext applicationContext;
public AbstractKafkaStreamsBinderProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver, CleanupConfig cleanupConfig) {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver, CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
@@ -82,50 +98,10 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
this.cleanupConfig = cleanupConfig;
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder,
String destination, String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
protected <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
protected GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
protected KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
protected Topology.AutoOffsetReset getAutoOffsetReset(String inboundName, KafkaStreamsConsumerProperties extendedConsumerProperties) {
@@ -154,13 +130,13 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
@SuppressWarnings("unchecked")
protected void handleKTableGlobalKTableInputs(Object[] arguments, int index, String input, Class<?> parameterType, Object targetBean,
StreamsBuilderFactoryBean streamsBuilderFactoryBean, StreamsBuilder streamsBuilder,
KafkaStreamsConsumerProperties extendedConsumerProperties,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean, StreamsBuilder streamsBuilder,
KafkaStreamsConsumerProperties extendedConsumerProperties,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
KTable<?, ?> table = getKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper =
(KTableBoundElementFactory.KTableWrapper) targetBean;
@@ -172,7 +148,7 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
GlobalKTable<?, ?> table = getGlobalKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
@@ -183,86 +159,171 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
}
}
@SuppressWarnings({"unchecked"})
@SuppressWarnings({ "unchecked" })
protected StreamsBuilderFactoryBean buildStreamsBuilderAndRetrieveConfig(String beanNamePostPrefix,
ApplicationContext applicationContext, String inboundName) {
ApplicationContext applicationContext, String inboundName,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanCustomizer customizer,
ConfigurableEnvironment environment, BindingProperties bindingProperties) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
if (kafkaStreamsBinderConfigurationProperties != null) {
final Map<String, KafkaStreamsBinderConfigurationProperties.Functions> functionConfigMap = kafkaStreamsBinderConfigurationProperties.getFunctions();
if (!CollectionUtils.isEmpty(functionConfigMap)) {
final KafkaStreamsBinderConfigurationProperties.Functions functionConfig = functionConfigMap.get(beanNamePostPrefix);
final Map<String, String> functionSpecificConfig = functionConfig.getConfiguration();
if (!CollectionUtils.isEmpty(functionSpecificConfig)) {
streamConfigGlobalProperties.putAll(functionSpecificConfig);
}
String applicationId = functionConfig.getApplicationId();
if (!StringUtils.isEmpty(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
}
}
final MutablePropertySources propertySources = environment.getPropertySources();
if (!StringUtils.isEmpty(bindingProperties.getBinder())) {
final KafkaStreamsBinderConfigurationProperties multiBinderKafkaStreamsBinderConfigurationProperties =
applicationContext.getBean(bindingProperties.getBinder() + "-KafkaStreamsBinderConfigurationProperties", KafkaStreamsBinderConfigurationProperties.class);
String connectionString = multiBinderKafkaStreamsBinderConfigurationProperties.getKafkaConnectionString();
if (StringUtils.isEmpty(connectionString)) {
connectionString = (String) propertySources.get(bindingProperties.getBinder() + "-kafkaStreamsBinderEnv").getProperty("spring.cloud.stream.kafka.binder.brokers");
}
else {
streamConfigGlobalProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, connectionString);
}
String binderProvidedApplicationId = multiBinderKafkaStreamsBinderConfigurationProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
binderProvidedApplicationId);
}
if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndContinue) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndFail) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
SendToDlqAndContinue sendToDlqAndContinue = applicationContext.getBean(SendToDlqAndContinue.class);
streamConfigGlobalProperties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
}
if (!ObjectUtils.isEmpty(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration())) {
streamConfigGlobalProperties.putAll(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration());
}
}
// This is primarily used for StreamListener based processors. Although functions can use it in theory,
// it is preferable for functions to use the approach taken in the if statement above, i.e. a property like
// spring.cloud.stream.kafka.streams.binder.functions.process.configuration.num.threads (assuming that process is the function name).
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties
.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
// We provide this for backward compatibility with StreamListener based processors.
// For function based processors see the approach used above
// (i.e. use a property like spring.cloud.stream.kafka.streams.binder.functions.process.applicationId).
if (StringUtils.hasText(bindingLevelApplicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
applicationId);
bindingLevelApplicationId);
}
//If the application id is not set by any mechanism, then generate it.
streamConfigGlobalProperties.computeIfAbsent(StreamsConfig.APPLICATION_ID_CONFIG,
k -> {
String generatedApplicationID = beanNamePostPrefix + "-applicationId";
LOG.info("Binder Generated Kafka Streams Application ID: " + generatedApplicationID);
LOG.info("Use the binder generated application ID only for development and testing. ");
LOG.info("For production deployments, please consider explicitly setting an application ID using a configuration property.");
LOG.info("The generated applicationID is static and will be preserved over application restarts.");
return generatedApplicationID;
});
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
// Concurrency will be mapped to num.stream.threads. Since this is going into a global config,
// we are explicitly assigning concurrency left at default of 1 to num.stream.threads. Otherwise,
// a potential previous value might still be used in the case of multiple processors or a processor
// with multiple input bindings with various concurrency values.
// See this GH issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844
if (concurrency >= 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext
.getBean("kafkaStreamsDlqDispatchers", Map.class);
// Override deserialization exception handlers per binding
final DeserializationExceptionHandler deserializationExceptionHandler =
extendedConsumerProperties.getDeserializationExceptionHandler();
if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndFail) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndContinue) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.sendToDlq) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
streamConfigGlobalProperties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER,
applicationContext.getBean(SendToDlqAndContinue.class));
}
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(
streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS,
kafkaStreamsDlqDispatchers);
return properties;
}
};
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfigGlobalProperties);
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
streamsBuilderFactoryBean.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(),
() -> streamsBuilder)
(Class<StreamsBuilderFactoryBean>) streamsBuilderFactoryBean.getClass(),
() -> streamsBuilderFactoryBean)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + beanNamePostPrefix, streamsBuilderBeanDefinition);
return applicationContext.getBean(
extendedConsumerProperties.setApplicationId((String) streamConfigGlobalProperties.get(StreamsConfig.APPLICATION_ID_CONFIG));
//Removing the application ID from global properties so that the next function won't re-use it and cause race conditions.
streamConfigGlobalProperties.remove(StreamsConfig.APPLICATION_ID_CONFIG);
final StreamsBuilderFactoryBean streamsBuilderFactoryBeanFromContext = applicationContext.getBean(
"&stream-builder-" + beanNamePostPrefix, StreamsBuilderFactoryBean.class);
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
protected KStream<?, ?> getkStream(BindingProperties bindingProperties, KStream<?, ?> stream, boolean nativeDecoding) {
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
// At this point, the StreamsBuilderFactoryBean has been created. If users call getObject()
// in the customizer, that grants access to the StreamsBuilder.
if (customizer != null) {
customizer.configure(streamsBuilderFactoryBean);
}
return streamsBuilderFactoryBeanFromContext;
}
protected Serde<?> getValueSerde(String inboundName, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, ResolvableType resolvableType) {
@@ -276,4 +337,136 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
return Serdes.ByteArray();
}
}
protected KStream<?, ?> getKStream(String inboundName, BindingProperties bindingProperties, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (firstBuild) {
addStateStoreBeans(streamsBuilder);
}
KStream<?, ?> stream;
if (this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName).isDestinationIsPattern()) {
final Pattern pattern = Pattern.compile(this.bindingServiceProperties.getBindingDestination(inboundName));
stream = streamsBuilder.stream(pattern);
}
else {
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
this.bindingServiceProperties.getBindingDestination(inboundName));
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
consumed);
}
final boolean nativeDecoding = this.bindingServiceProperties
.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName
+ ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
return getkStream(bindingProperties, stream, nativeDecoding);
}
private KStream<?, ?> getkStream(BindingProperties bindingProperties, KStream<?, ?> stream, boolean nativeDecoding) {
if (!nativeDecoding) {
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType)) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
}
return stream;
}
@SuppressWarnings("rawtypes")
private void addStateStoreBeans(StreamsBuilder streamsBuilder) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Pass through.
}
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
consumed, getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
consumed,
getMaterialized(storeName, k, v));
}
private GlobalKTable<?, ?> getGlobalKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.globalTable(bindingDestination,
consumed);
}
private KTable<?, ?> getKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.table(bindingDestination,
consumed);
}
private <K, V> Consumed<K, V> getConsumed(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
Serde<K> keySerde, Serde<V> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
TimestampExtractor timestampExtractor = null;
if (!StringUtils.isEmpty(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName())) {
timestampExtractor = applicationContext.getBean(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName(),
TimestampExtractor.class);
}
final Consumed<K, V> consumed = Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset);
if (timestampExtractor != null) {
consumed.withTimestampExtractor(timestampExtractor);
}
return consumed;
}
}
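The getConsumed method above resolves a TimestampExtractor bean by the name configured in the binding's timestampExtractorBeanName consumer property. The following is a hedged sketch of such a bean; the bean name "orderTimestampExtractor" and its fallback logic are assumptions for illustration only.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
// Hypothetical sketch: the bean name "orderTimestampExtractor" would be referenced
// from the binding's timestampExtractorBeanName consumer property.
@Configuration
public class TimestampExtractorConfig {
    @Bean
    public TimestampExtractor orderTimestampExtractor() {
        return (ConsumerRecord<Object, Object> record, long partitionTime) -> {
            // Fall back to the partition time when the record carries no usable timestamp.
            long timestamp = record.timestamp();
            return timestamp >= 0 ? timestamp : partitionTime;
        };
    }
}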

View File

@@ -0,0 +1,43 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
/**
* Enumeration for various {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} types.
*
* @author Soby Chacko
* @since 3.0.0
*/
public enum DeserializationExceptionHandler {
/**
* Deserialization error handler with log and continue.
* See {@link org.apache.kafka.streams.errors.LogAndContinueExceptionHandler}
*/
logAndContinue,
/**
* Deserialization error handler with log and fail.
* See {@link org.apache.kafka.streams.errors.LogAndFailExceptionHandler}
*/
logAndFail,
/**
* Deserialization error handler with DLQ send.
* See {@link org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler}
*/
sendToDlq
}
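For reference, a small sketch that mirrors the handler selection added in AbstractKafkaStreamsBinderProcessor above, mapping each enum value to the corresponding Kafka Streams configuration entry. The helper class itself is hypothetical and not part of this change set.
import java.util.Map;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
// Hypothetical helper that mirrors the handler selection performed by the binder.
final class DeserializationHandlerMappingSketch {
    private DeserializationHandlerMappingSketch() {
    }
    static void apply(DeserializationExceptionHandler handler, Map<String, Object> streamsProperties) {
        switch (handler) {
            case logAndContinue:
                streamsProperties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                        LogAndContinueExceptionHandler.class);
                break;
            case logAndFail:
                streamsProperties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                        LogAndFailExceptionHandler.class);
                break;
            case sendToDlq:
                // The binder additionally registers a SendToDlqAndContinue recoverer; omitted here.
                streamsProperties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                        RecoveringDeserializationExceptionHandler.class);
                break;
        }
    }
}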

View File

@@ -0,0 +1,72 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.context.properties.ConfigurationPropertiesBindHandlerAdvisor;
import org.springframework.boot.context.properties.bind.AbstractBindHandler;
import org.springframework.boot.context.properties.bind.BindContext;
import org.springframework.boot.context.properties.bind.BindHandler;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
/**
* {@link ConfigurationPropertiesBindHandlerAdvisor} to detect nativeEncoding/Decoding settings
* provided by the application explicitly.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class EncodingDecodingBindAdviceHandler implements ConfigurationPropertiesBindHandlerAdvisor {
private boolean encodingSettingProvided;
private boolean decodingSettingProvided;
public boolean isDecodingSettingProvided() {
return decodingSettingProvided;
}
public boolean isEncodingSettingProvided() {
return this.encodingSettingProvided;
}
@Override
public BindHandler apply(BindHandler bindHandler) {
BindHandler handler = new AbstractBindHandler(bindHandler) {
@Override
public <T> Bindable<T> onStart(ConfigurationPropertyName name,
Bindable<T> target, BindContext context) {
final String configName = name.toString();
if (configName.contains("use") && configName.contains("native") &&
(configName.contains("encoding") || configName.contains("decoding"))) {
BindResult<T> result = context.getBinder().bind(name, target);
if (result.isBound()) {
if (configName.contains("encoding")) {
EncodingDecodingBindAdviceHandler.this.encodingSettingProvided = true;
}
else {
EncodingDecodingBindAdviceHandler.this.decodingSettingProvided = true;
}
return target.withExistingValue(result.get());
}
}
return bindHandler.onStart(name, target, context);
}
};
return handler;
}
}
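The advice handler only takes effect if it participates in configuration property binding. A hedged registration sketch follows, assuming Spring Boot applies ConfigurationPropertiesBindHandlerAdvisor beans from the application context during binding; the configuration class name is hypothetical.
import org.springframework.cloud.stream.binder.kafka.streams.EncodingDecodingBindAdviceHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
// Hypothetical registration; assumes Boot applies ConfigurationPropertiesBindHandlerAdvisor
// beans from the context when binding configuration properties.
@Configuration
public class EncodingDecodingAdvisorConfig {
    @Bean
    public EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler() {
        return new EncodingDecodingBindAdviceHandler();
    }
}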

View File

@@ -16,8 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -54,8 +52,6 @@ public class GlobalKTableBinder extends
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
@@ -63,11 +59,9 @@ public class GlobalKTableBinder extends
public GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaTopicProvisioner kafkaTopicProvisioner) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
}
@Override
@@ -76,12 +70,11 @@ public class GlobalKTableBinder extends
String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
this.binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}

View File

@@ -21,13 +21,15 @@ import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for GlobalKTable binder.
@@ -36,17 +38,14 @@ import org.springframework.context.annotation.Configuration;
* @since 2.1.0
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
MultiBinderPropertiesConfiguration.class})
public class GlobalKTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return KafkaStreamsBinderUtils.outerContextBeanFactoryPostProcessor();
}
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@@ -56,12 +55,28 @@ public class GlobalKTableBinderConfiguration {
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
GlobalKTableBinder globalKTableBinder = new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
kafkaTopicProvisioner);
globalKTableBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return globalKTableBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
};
}
}

View File

@@ -40,10 +40,13 @@ public class GlobalKTableBoundElementFactory
extends AbstractBindingTargetFactory<GlobalKTable> {
private final BindingServiceProperties bindingServiceProperties;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
super(GlobalKTable.class);
this.bindingServiceProperties = bindingServiceProperties;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
}
@Override
@@ -54,6 +57,11 @@ public class GlobalKTableBoundElementFactory
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
@@ -90,8 +98,9 @@ public class GlobalKTableBoundElementFactory
public void wrap(GlobalKTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override

View File

@@ -16,9 +16,16 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
@@ -27,6 +34,10 @@ import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.StreamsMetadata;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
@@ -37,10 +48,13 @@ import org.springframework.util.StringUtils;
*
* @author Soby Chacko
* @author Renwei Han
* @author Serhii Siryi
* @since 2.1.0
*/
public class InteractiveQueryService {
private static final Log LOG = LogFactory.getLog(InteractiveQueryService.class);
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
@@ -64,18 +78,37 @@ public class InteractiveQueryService {
* @return queryable store.
*/
public <T> T getQueryableStore(String storeName, QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
RetryTemplate retryTemplate = new RetryTemplate();
KafkaStreamsBinderConfigurationProperties.StateStoreRetry stateStoreRetry = this.binderConfigurationProperties.getStateStoreRetry();
RetryPolicy retryPolicy = new SimpleRetryPolicy(stateStoreRetry.getMaxAttempts());
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(stateStoreRetry.getBackoffPeriod());
retryTemplate.setBackOffPolicy(backOffPolicy);
retryTemplate.setRetryPolicy(retryPolicy);
return retryTemplate.execute(context -> {
T store = null;
final Set<KafkaStreams> kafkaStreams = InteractiveQueryService.this.kafkaStreamsRegistry.getKafkaStreams();
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
Throwable throwable = null;
while (iterator.hasNext()) {
try {
store = iterator.next().store(storeName, storeType);
}
catch (InvalidStateStoreException e) {
// pass through..
throwable = e;
}
}
catch (InvalidStateStoreException ignored) {
// pass through
if (store != null) {
return store;
}
}
return null;
throw new IllegalStateException("Error when retrieving state store: " + storeName, throwable);
});
}
/**
@@ -124,4 +157,25 @@ public class InteractiveQueryService {
return streamsMetadata != null ? streamsMetadata.hostInfo() : null;
}
/**
* Gets the list of {@link HostInfo} on which the provided store is hosted.
* The list can also include the current host.
* Kafka Streams looks through all the consumer instances under the same application id
* and retrieves the host info for each of them.
*
* Note that end-user applications must provide `application.server` as a configuration property
* on every application instance for this method to work. If it is not available, an empty list is returned.
*
* @param store store name
* @return the list of {@link HostInfo} on which the provided store is hosted
*/
public List<HostInfo> getAllHostsInfo(String store) {
return kafkaStreamsRegistry.getKafkaStreams()
.stream()
.flatMap(k -> k.allMetadataForStore(store).stream())
.filter(Objects::nonNull)
.map(StreamsMetadata::hostInfo)
.collect(Collectors.toList());
}
}
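
A minimal usage sketch for the two query paths above. The store name "word-counts" and the key/value types are hypothetical, an InteractiveQueryService is assumed to be injected, and getAllHostsInfo additionally assumes every instance sets application.server, as the Javadoc requires.

import java.util.List;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

class InteractiveQuerySketch {

	private final InteractiveQueryService interactiveQueryService;

	InteractiveQuerySketch(InteractiveQueryService interactiveQueryService) {
		this.interactiveQueryService = interactiveQueryService;
	}

	Long localCount(String word) {
		// Retried with the binder's configured state-store retry/backoff before giving up.
		ReadOnlyKeyValueStore<String, Long> store = interactiveQueryService
				.getQueryableStore("word-counts", QueryableStoreTypes.<String, Long>keyValueStore());
		return store.get(word);
	}

	List<HostInfo> hostsForStore() {
		// Empty unless every instance configures the application.server property.
		return interactiveQueryService.getAllHostsInfo("word-counts");
	}
}

The retry template around getQueryableStore should help cover the window in which a store is not yet queryable, for example while the application is still starting or rebalancing.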

View File

@@ -16,15 +16,15 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.springframework.aop.framework.Advised;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
@@ -74,37 +74,34 @@ class KStreamBinder extends
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KeyValueSerdeResolver keyValueSerdeResolver) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
}
@Override
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget,
// @checkstyle:off
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
this.kafkaStreamsBindingInformationCatalogue
.registerConsumerProperties(inputTarget, properties.getExtension());
KStream<Object, Object> delegate = ((KStreamBoundElementFactory.KStreamWrapperHandler)
((Advised) inputTarget).getAdvisors()[0].getAdvice()).getDelegate();
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(delegate, properties.getExtension());
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
this.binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@@ -113,15 +110,16 @@ class KStreamBinder extends
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindProducer(String name,
KStream<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
this.kafkaTopicProvisioner.provisionProducerDestination(name,
extendedProducerProperties);
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties =
(ExtendedProducerProperties) properties;
this.kafkaTopicProvisioner.provisionProducerDestination(name, extendedProducerProperties);
Serde<?> keySerde = this.keyValueSerdeResolver
.getOuboundKeySerde(properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable());
LOG.info("Key Serde used for (outbound) " + name + ": " + keySerde.getClass().getName());
Serde<?> valueSerde;
if (properties.isUseNativeEncoding()) {
valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties,
@@ -130,26 +128,39 @@ class KStreamBinder extends
else {
valueSerde = Serdes.ByteArray();
}
LOG.info("Value Serde used for (outbound) " + name + ": " + valueSerde.getClass().getName());
to(properties.isUseNativeEncoding(), name, outboundBindTarget,
(Serde<Object>) keySerde, (Serde<Object>) valueSerde);
(Serde<Object>) keySerde, (Serde<Object>) valueSerde, properties.getExtension());
return new DefaultBinding<>(name, null, outboundBindTarget, null);
}
@SuppressWarnings("unchecked")
private void to(boolean isNativeEncoding, String name,
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
Serde<Object> valueSerde) {
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
Serde<Object> valueSerde, KafkaStreamsProducerProperties properties) {
final Produced<Object, Object> produced = Produced.with(keySerde, valueSerde);
StreamPartitioner streamPartitioner = null;
if (!StringUtils.isEmpty(properties.getStreamPartitionerBeanName())) {
streamPartitioner = getApplicationContext().getBean(properties.getStreamPartitionerBeanName(),
StreamPartitioner.class);
}
if (streamPartitioner != null) {
produced.withStreamPartitioner(streamPartitioner);
}
if (!isNativeEncoding) {
LOG.info("Native encoding is disabled for " + name
+ ". Outbound message conversion done by Spring Cloud Stream.");
outboundBindTarget.filter((k, v) -> v == null)
.to(name, produced);
this.kafkaStreamsMessageConversionDelegate
.serializeOnOutbound(outboundBindTarget)
.to(name, Produced.with(keySerde, valueSerde));
.to(name, produced);
}
else {
LOG.info("Native encoding is enabled for " + name
+ ". Outbound serialization done at the broker.");
outboundBindTarget.to(name, Produced.with(keySerde, valueSerde));
outboundBindTarget.to(name, produced);
}
}
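
The new streamPartitionerBeanName producer property above is resolved to a StreamPartitioner bean and applied to the outbound Produced. A minimal sketch of such a bean, with an assumed bean name, key/value types, and property form (none of which are taken from this changeset):

import org.apache.kafka.streams.processor.StreamPartitioner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class StreamPartitionerSketch {

	// Referenced from configuration, e.g. (assumed property form):
	// spring.cloud.stream.kafka.streams.bindings.<binding>.producer.streamPartitionerBeanName=keyHashPartitioner
	@Bean
	StreamPartitioner<String, byte[]> keyHashPartitioner() {
		return (topic, key, value, numPartitions) ->
				key == null ? 0 : Math.floorMod(key.hashCode(), numPartitions);
	}
}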

View File

@@ -16,9 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
@@ -40,7 +37,8 @@ import org.springframework.context.annotation.Import;
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class })
KafkaStreamsBinderHealthIndicatorConfiguration.class,
MultiBinderPropertiesConfiguration.class})
public class KStreamBinderConfiguration {
@Bean
@@ -58,12 +56,10 @@ public class KStreamBinderConfiguration {
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties,
kafkaTopicProvisioner, KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver,
kafkaStreamsDlqDispatchers);
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver);
kStreamBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return kStreamBinder;
@@ -79,10 +75,6 @@ public class KStreamBinderConfiguration {
// and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsMessageConversionDelegate.class.getSimpleName(),
outerContext.getBean(KafkaStreamsMessageConversionDelegate.class));

View File

@@ -43,12 +43,15 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
KStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
super(KStream.class);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
}
@Override
@@ -59,6 +62,11 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
return createProxyForKStream(name);
@@ -74,6 +82,11 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
producerProperties = this.bindingServiceProperties.getProducerProperties(name);
producerProperties.setUseNativeEncoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isEncodingSettingProvided()) {
producerProperties.setUseNativeEncoding(true);
}
}
return createProxyForKStream(name);
}
@@ -102,15 +115,16 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
private static class KStreamWrapperHandler
static class KStreamWrapperHandler
implements KStreamWrapper, MethodInterceptor {
private KStream<Object, Object> delegate;
public void wrap(KStream<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override
@@ -133,6 +147,9 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
}
KStream<Object, Object> getDelegate() {
return delegate;
}
}
}

View File

@@ -16,8 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -55,19 +53,15 @@ class KTableBinder extends
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
// @checkstyle:on
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KafkaTopicProvisioner kafkaTopicProvisioner) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
}
@Override
@@ -78,12 +72,11 @@ class KTableBinder extends
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
group = properties.getExtension().getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties,
this.kafkaStreamsDlqDispatchers);
this.binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}

View File

@@ -21,13 +21,15 @@ import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for KTable binder.
@@ -36,17 +38,14 @@ import org.springframework.context.annotation.Configuration;
*/
@SuppressWarnings("ALL")
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
MultiBinderPropertiesConfiguration.class})
public class KTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return KafkaStreamsBinderUtils.outerContextBeanFactoryPostProcessor();
}
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@@ -56,11 +55,28 @@ public class KTableBinderConfiguration {
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
KTableBinder kTableBinder = new KTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
kafkaTopicProvisioner);
kTableBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
return kTableBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
};
}
}

View File

@@ -38,10 +38,13 @@ import org.springframework.util.Assert;
class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
private final BindingServiceProperties bindingServiceProperties;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
super(KTable.class);
this.bindingServiceProperties = bindingServiceProperties;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
}
@Override
@@ -52,6 +55,11 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
@@ -86,8 +94,9 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
public void wrap(KTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override

View File

@@ -1,48 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Application support configuration for Kafka Streams binder.
*
* @deprecated Features provided by this class can be configured directly in the application itself using the Kafka Streams API.
* @author Soby Chacko
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
@Deprecated
public class KafkaStreamsApplicationSupportAutoConfiguration {
@Bean
@ConditionalOnProperty("spring.cloud.stream.kafka.streams.timeWindow.length")
public TimeWindows configuredTimeWindow(
KafkaStreamsApplicationSupportProperties processorProperties) {
return processorProperties.getTimeWindow().getAdvanceBy() > 0
? TimeWindows.of(processorProperties.getTimeWindow().getLength())
.advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
: TimeWindows.of(processorProperties.getTimeWindow().getLength());
}
}
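
With the deprecated auto-configuration above removed, an application that relied on the timeWindow.length/advanceBy properties can declare the window itself. A minimal sketch using the Kafka Streams API directly, with illustrative window sizes:

import java.time.Duration;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class TimeWindowSketch {

	@Bean
	TimeWindows configuredTimeWindow() {
		// Roughly equivalent to timeWindow.length=30000 with timeWindow.advanceBy=5000 (illustrative values).
		return TimeWindows.of(Duration.ofSeconds(30)).advanceBy(Duration.ofSeconds(5));
	}
}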

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,52 +16,140 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Health indicator for Kafka Streams.
*
* @author Arnaud Jardiné
* @author Soby Chacko
*/
class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator {
public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator implements DisposableBean {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties configurationProperties;
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry) {
private final Map<String, Object> adminClientProperties;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private static final ThreadLocal<Status> healthStatusThreadLocal = new ThreadLocal<>();
private AdminClient adminClient;
private final Lock lock = new ReentrantLock();
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
super("Kafka-streams health check failed");
kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaStreamsBinderConfigurationProperties;
this.adminClientProperties = kafkaProperties.buildAdminProperties();
KafkaTopicProvisioner.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
kafkaStreamsBinderConfigurationProperties);
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
protected void doHealthCheck(Health.Builder builder) throws Exception {
boolean up = true;
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
up &= kStream.state().isRunning();
builder.withDetails(buildDetails(kStream));
try {
this.lock.lock();
if (this.adminClient == null) {
this.adminClient = AdminClient.create(this.adminClientProperties);
}
final Status status = healthStatusThreadLocal.get();
//If one of the kafka streams binders (kstream, ktable, globalktable) was already found to be down on the same request,
//retrieve that status from the thread local storage where it was saved earlier. This is done in order to keep the
//total health check short, since each of the Kafka Streams binders performs its own health check; if we already
//know that the status is DOWN, simply pass that information along.
if (status == Status.DOWN) {
builder.withDetail("No topic information available", "Kafka broker is not reachable");
builder.status(Status.DOWN);
}
else {
final ListTopicsResult listTopicsResult = this.adminClient.listTopics();
listTopicsResult.listings().get(this.configurationProperties.getHealthTimeout(), TimeUnit.SECONDS);
if (this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans().isEmpty()) {
builder.withDetail("No Kafka Streams bindings have been established", "Kafka Streams binder did not detect any processors");
builder.status(Status.UNKNOWN);
}
else {
boolean up = true;
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
up &= kStream.state().isRunning();
builder.withDetails(buildDetails(kStream));
}
builder.status(up ? Status.UP : Status.DOWN);
}
}
}
catch (Exception e) {
builder.withDetail("No topic information available", "Kafka broker is not reachable");
builder.status(Status.DOWN);
builder.withException(e);
//Store binder down status into a thread local storage.
healthStatusThreadLocal.set(Status.DOWN);
}
finally {
this.lock.unlock();
}
builder.status(up ? Status.UP : Status.DOWN);
}
private static Map<String, Object> buildDetails(KafkaStreams kStreams) {
private Map<String, Object> buildDetails(KafkaStreams kafkaStreams) {
final Map<String, Object> details = new HashMap<>();
if (kStreams.state().isRunning()) {
for (ThreadMetadata metadata : kStreams.localThreadsMetadata()) {
details.put("threadName", metadata.threadName());
details.put("threadState", metadata.threadState());
details.put("activeTasks", taskDetails(metadata.activeTasks()));
details.put("standbyTasks", taskDetails(metadata.standbyTasks()));
final Map<String, Object> perAppdIdDetails = new HashMap<>();
if (kafkaStreams.state().isRunning()) {
for (ThreadMetadata metadata : kafkaStreams.localThreadsMetadata()) {
perAppdIdDetails.put("threadName", metadata.threadName());
perAppdIdDetails.put("threadState", metadata.threadState());
perAppdIdDetails.put("adminClientId", metadata.adminClientId());
perAppdIdDetails.put("consumerClientId", metadata.consumerClientId());
perAppdIdDetails.put("restoreConsumerClientId", metadata.restoreConsumerClientId());
perAppdIdDetails.put("producerClientIds", metadata.producerClientIds());
perAppdIdDetails.put("activeTasks", taskDetails(metadata.activeTasks()));
perAppdIdDetails.put("standbyTasks", taskDetails(metadata.standbyTasks()));
}
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
details.put(applicationId, perAppdIdDetails);
}
else {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
details.put(applicationId, String.format("The processor with application.id %s is down", applicationId));
}
return details;
}
@@ -70,12 +158,29 @@ class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator {
final Map<String, Object> details = new HashMap<>();
for (TaskMetadata metadata : taskMetadata) {
details.put("taskId", metadata.taskId());
details.put("partitions",
metadata.topicPartitions().stream().map(
p -> "partition=" + p.partition() + ", topic=" + p.topic())
.collect(Collectors.toList()));
if (details.containsKey("partitions")) {
@SuppressWarnings("unchecked")
List<String> partitionsInfo = (List<String>) details.get("partitions");
partitionsInfo.addAll(addPartitionsInfo(metadata));
}
else {
details.put("partitions",
addPartitionsInfo(metadata));
}
}
return details;
}
private static List<String> addPartitionsInfo(TaskMetadata metadata) {
return metadata.topicPartitions().stream().map(
p -> "partition=" + p.partition() + ", topic=" + p.topic())
.collect(Collectors.toList());
}
@Override
public void destroy() throws Exception {
if (adminClient != null) {
adminClient.close(Duration.ofSeconds(0));
}
}
}
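
A condensed sketch of the broker-connectivity check the indicator now performs before inspecting the KafkaStreams instances. The adminProps map and the timeout value are placeholders; in the indicator they come from the Boot admin properties normalized against the binder configuration and from the binder's healthTimeout property.

import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;

class BrokerReachabilitySketch {

	boolean brokersReachable(Map<String, Object> adminProps, int healthTimeoutSeconds) {
		try (AdminClient adminClient = AdminClient.create(adminProps)) {
			// Listing topics within the timeout is used as the reachability probe.
			adminClient.listTopics().listings().get(healthTimeoutSeconds, TimeUnit.SECONDS);
			return true; // broker answered; per-KafkaStreams state then decides UP vs DOWN
		}
		catch (Exception e) {
			return false; // reported as DOWN with "Kafka broker is not reachable"
		}
	}
}

Unlike this sketch, the indicator keeps a single AdminClient for the lifetime of the bean and closes it in destroy().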

View File

@@ -16,9 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.actuate.autoconfigure.health.ConditionalOnEnabledHealthIndicator;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -35,8 +38,11 @@ class KafkaStreamsBinderHealthIndicatorConfiguration {
@Bean
@ConditionalOnBean(KafkaStreamsRegistry.class)
KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry);
KafkaStreamsRegistry kafkaStreamsRegistry, @Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
KafkaProperties kafkaProperties, KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry, kafkaStreamsBinderConfigurationProperties,
kafkaProperties, kafkaStreamsBindingInformationCatalogue);
}
}

View File

@@ -0,0 +1,113 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.ToDoubleFunction;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Kafka Streams binder metrics implementation that exports the metrics available
* through {@link KafkaStreams#metrics()} into a Micrometer {@link io.micrometer.core.instrument.MeterRegistry}.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class KafkaStreamsBinderMetrics {
private final MeterRegistry meterRegistry;
private MeterBinder meterBinder;
public KafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
}
public void bindTo(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans, MeterRegistry meterRegistry) {
if (this.meterBinder == null) {
this.meterBinder = new MeterBinder() {
@Override
@SuppressWarnings("unchecked")
public void bindTo(MeterRegistry registry) {
if (streamsBuilderFactoryBeans != null) {
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
final Map<MetricName, ? extends Metric> metrics = kafkaStreams.metrics();
Set<String> meterNames = new HashSet<>();
for (Map.Entry<MetricName, ? extends Metric> metric : metrics.entrySet()) {
final String sanitized = sanitize(metric.getKey().group() + "." + metric.getKey().name());
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
final String name = streamsBuilderFactoryBeans.size() > 1 ? applicationId + "." + sanitized : sanitized;
final Gauge.Builder<KafkaStreamsBinderMetrics> builder =
Gauge.builder(name, this,
toDoubleFunction(metric.getValue()));
final Map<String, String> tags = metric.getKey().tags();
for (Map.Entry<String, String> tag : tags.entrySet()) {
builder.tag(tag.getKey(), tag.getValue());
}
if (!meterNames.contains(name)) {
builder.description(metric.getKey().description())
.register(meterRegistry);
meterNames.add(name);
}
}
}
}
}
ToDoubleFunction toDoubleFunction(Metric metric) {
return (o) -> {
if (metric.metricValue() instanceof Number) {
return ((Number) metric.metricValue()).doubleValue();
}
else {
return 0.0;
}
};
}
};
}
this.meterBinder.bindTo(this.meterRegistry);
}
private static String sanitize(String value) {
return value.replaceAll("-", ".");
}
public void addMetrics(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans) {
synchronized (KafkaStreamsBinderMetrics.this) {
this.bindTo(streamsBuilderFactoryBeans, this.meterRegistry);
}
}
}
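
For reference, the gauge names registered above are the sanitized Kafka metric group plus metric name, prefixed with the application.id when more than one StreamsBuilderFactoryBean is present. A small sketch of that derivation with hypothetical metric and application names:

public class MetricNameSketch {

	public static void main(String[] args) {
		String group = "stream-metrics";      // hypothetical Kafka metric group
		String name = "commit-latency-avg";   // hypothetical Kafka metric name
		String sanitized = (group + "." + name).replaceAll("-", ".");
		// -> stream.metrics.commit.latency.avg
		boolean multipleProcessors = true;    // more than one StreamsBuilderFactoryBean registered
		String applicationId = "word-count-app";
		String meterName = multipleProcessors ? applicationId + "." + sanitized : sanitized;
		System.out.println(meterName);        // word-count-app.stream.metrics.commit.latency.avg
	}
}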

View File

@@ -16,46 +16,65 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Constructor;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.boot.context.properties.bind.PropertySourcesPlaceholdersResolver;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.cloud.stream.binder.BinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.function.FunctionDetectorCondition;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.serde.CompositeNonNativeSerde;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binder.kafka.streams.serde.MessageConverterDelegateSerde;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BinderProperties;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.integration.context.IntegrationContextUtils;
import org.springframework.integration.support.utils.IntegrationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.lang.Nullable;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.util.ObjectUtils;
import org.springframework.util.ReflectionUtils;
import org.springframework.util.StringUtils;
/**
@@ -65,6 +84,7 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
* @author Gary Russell
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsExtendedBindingProperties.class)
@ConditionalOnBean(BindingService.class)
@AutoConfigureAfter(BindingServiceConfiguration.class)
@@ -80,7 +100,7 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(
KafkaProperties kafkaProperties, ConfigurableEnvironment environment,
BindingServiceProperties properties) {
BindingServiceProperties properties, ConfigurableApplicationContext context) throws Exception {
final Map<String, BinderConfiguration> binderConfigurations = getBinderConfigurations(
properties);
for (Map.Entry<String, BinderConfiguration> entry : binderConfigurations
@@ -93,7 +113,19 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
Map<String, Object> binderProperties = new HashMap<>();
this.flatten(null, binderConfiguration.getProperties(), binderProperties);
environment.getPropertySources().addFirst(
new MapPropertySource("kafkaStreamsBinderEnv", binderProperties));
new MapPropertySource(entry.getKey() + "-kafkaStreamsBinderEnv", binderProperties));
Binder binder = new Binder(ConfigurationPropertySources.get(environment),
new PropertySourcesPlaceholdersResolver(environment),
IntegrationUtils.getConversionService(context.getBeanFactory()), null);
final Constructor<KafkaStreamsBinderConfigurationProperties> kafkaStreamsBinderConfigurationPropertiesConstructor =
ReflectionUtils.accessibleConstructor(KafkaStreamsBinderConfigurationProperties.class, KafkaProperties.class);
final KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties =
BeanUtils.instantiateClass(kafkaStreamsBinderConfigurationPropertiesConstructor, kafkaProperties);
final BindResult<KafkaStreamsBinderConfigurationProperties> bind = binder.bind("spring.cloud.stream.kafka.streams.binder", Bindable.ofInstance(kafkaStreamsBinderConfigurationProperties));
context.getBeanFactory().registerSingleton(
entry.getKey() + "-KafkaStreamsBinderConfigurationProperties",
bind.get());
}
}
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
@@ -134,7 +166,7 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
public KafkaStreamsConfiguration kafkaStreamsConfiguration(
KafkaStreamsBinderConfigurationProperties properties,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties properties,
Environment environment) {
KafkaProperties kafkaProperties = properties.getKafkaProperties();
Map<String, Object> streamsProperties = kafkaProperties.buildStreamsProperties();
@@ -150,15 +182,32 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean("streamConfigGlobalProperties")
public Map<String, Object> streamConfigGlobalProperties(
KafkaStreamsBinderConfigurationProperties configProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties configProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration, ConfigurableEnvironment environment,
SendToDlqAndContinue sendToDlqAndContinue) {
Properties properties = kafkaStreamsConfiguration.asProperties();
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
String kafkaConnectionString = configProperties.getKafkaConnectionString();
if (kafkaConnectionString != null && kafkaConnectionString.equals("localhost:9092")) {
//Making sure that the application indeed set a property.
String kafkaStreamsBinderBroker = environment.getProperty("spring.cloud.stream.kafka.streams.binder.brokers");
if (StringUtils.isEmpty(kafkaStreamsBinderBroker)) {
//Kafka Streams binder specific property for brokers is not set by the application.
//See if there is one configured at the kafka binder level.
String kafkaBinderBroker = environment.getProperty("spring.cloud.stream.kafka.binder.brokers");
if (!StringUtils.isEmpty(kafkaBinderBroker)) {
kafkaConnectionString = kafkaBinderBroker;
configProperties.setBrokers(kafkaConnectionString);
}
}
}
if (ObjectUtils.isEmpty(properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG))) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
kafkaConnectionString);
}
else {
Object bootstrapServerConfig = properties
@@ -169,7 +218,14 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
kafkaConnectionString);
}
}
else if (bootstrapServerConfig instanceof List) {
List bootStrapCollection = (List) bootstrapServerConfig;
if (bootStrapCollection.size() == 1 && bootStrapCollection.get(0).equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
}
}
@@ -186,22 +242,23 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
Serdes.ByteArraySerde.class.getName());
if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndContinue) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class.getName());
LogAndContinueExceptionHandler.class);
}
else if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndFail) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class.getName());
LogAndFailExceptionHandler.class);
}
else if (configProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
properties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
SendToDlqAndContinue.class.getName());
RecoveringDeserializationExceptionHandler.class);
properties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
}
if (!ObjectUtils.isEmpty(configProperties.getConfiguration())) {
@@ -233,62 +290,59 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
ObjectProvider<CleanupConfig> cleanupConfig) {
ObjectProvider<CleanupConfig> cleanupConfig,
ObjectProvider<StreamsBuilderFactoryBeanCustomizer> customizerProvider, ConfigurableEnvironment environment) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(
bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters,
cleanupConfig.getIfUnique());
}
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
ObjectProvider<CleanupConfig> cleanupConfig,
FunctionCatalog functionCatalog, BindableProxyFactory bindableProxyFactory) {
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
cleanupConfig.getIfUnique(), functionCatalog, bindableProxyFactory);
cleanupConfig.getIfUnique(), customizerProvider.getIfUnique(), environment);
}
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(
CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverterFactory, sendToDlqAndContinue,
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverter, sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue, binderConfigurationProperties);
}
@Bean
public MessageConverterDelegateSerde messageConverterDelegateSerde(
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverterFactory) {
return new MessageConverterDelegateSerde(compositeMessageConverterFactory);
}
@Bean
public CompositeNonNativeSerde compositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
CompositeMessageConverter compositeMessageConverterFactory) {
return new CompositeNonNativeSerde(compositeMessageConverterFactory);
}
@Bean
public KStreamBoundElementFactory kStreamBoundElementFactory(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
return new KStreamBoundElementFactory(bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue);
KafkaStreamsBindingInformationCatalogue, encodingDecodingBindAdviceHandler);
}
@Bean
public KTableBoundElementFactory kTableBoundElementFactory(
BindingServiceProperties bindingServiceProperties) {
return new KTableBoundElementFactory(bindingServiceProperties);
BindingServiceProperties bindingServiceProperties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
return new KTableBoundElementFactory(bindingServiceProperties, encodingDecodingBindAdviceHandler);
}
@Bean
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(
BindingServiceProperties properties) {
return new GlobalKTableBoundElementFactory(properties);
BindingServiceProperties properties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
return new GlobalKTableBoundElementFactory(properties, encodingDecodingBindAdviceHandler);
}
@Bean
@@ -306,7 +360,7 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@ConditionalOnMissingBean
public KeyValueSerdeResolver keyValueSerdeResolver(
@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties properties) {
@Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties properties) {
return new KeyValueSerdeResolver(
(Map<String, Object>) streamConfigGlobalProperties, properties);
}
@@ -314,7 +368,7 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
public InteractiveQueryService interactiveQueryServices(
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties properties) {
@Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties properties) {
return new InteractiveQueryService(kafkaStreamsRegistry, properties);
}
@@ -326,13 +380,59 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue catalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry);
KafkaStreamsRegistry kafkaStreamsRegistry,
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics);
}
@Bean("kafkaStreamsDlqDispatchers")
public Map<String, KafkaStreamsDlqDispatch> dlqDispatchers() {
return new HashMap<>();
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
ObjectProvider<CleanupConfig> cleanupConfig,
StreamFunctionProperties streamFunctionProperties,
@Qualifier("binderConfigurationProperties") KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
ObjectProvider<StreamsBuilderFactoryBeanCustomizer> customizerProvider, ConfigurableEnvironment environment) {
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
cleanupConfig.getIfUnique(), streamFunctionProperties, kafkaStreamsBinderConfigurationProperties,
customizerProvider.getIfUnique(), environment);
}
@Bean
public EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler() {
return new EncodingDecodingBindAdviceHandler();
}
@Configuration
@ConditionalOnMissingBean(value = KafkaStreamsBinderMetrics.class, name = "outerContext")
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
protected class KafkaStreamsBinderMetricsConfiguration {
@Bean
@ConditionalOnBean(MeterRegistry.class)
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
return new KafkaStreamsBinderMetrics(meterRegistry);
}
}
@Configuration
@ConditionalOnBean(name = "outerContext")
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
protected class KafkaStreamsBinderMetricsConfigurationWithMultiBinder {
@Bean
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(ConfigurableApplicationContext context) {
MeterRegistry meterRegistry = context.getBean("outerContext", ApplicationContext.class)
.getBean(MeterRegistry.class);
return new KafkaStreamsBinderMetrics(meterRegistry);
}
}
}
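
For reference, the new ObjectProvider<StreamsBuilderFactoryBeanCustomizer> parameter wired into the function processor above means an application can contribute its own customizer bean. A minimal sketch, assuming a logging state listener (the class name and listener body are illustrative, not part of this change set):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;

@Configuration
public class StreamsCustomizerConfiguration {

	@Bean
	public StreamsBuilderFactoryBeanCustomizer streamsBuilderFactoryBeanCustomizer() {
		// Invoked for each StreamsBuilderFactoryBean the binder creates, before the
		// topology is started; here we simply log Kafka Streams state transitions.
		return factoryBean -> factoryBean.setStateListener(
				(newState, oldState) -> System.out.println("Kafka Streams state: " + oldState + " -> " + newState));
	}
}

The binder resolves this through customizerProvider.getIfUnique(), so only a single customizer bean should be declared.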


@@ -16,28 +16,46 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Common methods used by various Kafka Streams types across the binders.
*
* @author Soby Chacko
* @author Gary Russell
*/
final class KafkaStreamsBinderUtils {
private static final Log LOGGER = LogFactory.getLog(KafkaStreamsBinderUtils.class);
private KafkaStreamsBinderUtils() {
}
@@ -45,12 +63,19 @@ final class KafkaStreamsBinderUtils {
static void prepareConsumerBinding(String name, String group,
ApplicationContext context, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties =
(ExtendedConsumerProperties) properties;
if (binderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
// check for deserialization handler at the consumer binding, as that takes precedence.
final DeserializationExceptionHandler deserializationExceptionHandler =
properties.getExtension().getDeserializationExceptionHandler();
if (deserializationExceptionHandler == DeserializationExceptionHandler.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
@@ -61,51 +86,88 @@ final class KafkaStreamsBinderUtils {
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils
Map<String, DlqPartitionFunction> partitionFunctions =
context.getBeansOfType(DlqPartitionFunction.class, false, false);
boolean oneFunctionPresent = partitionFunctions.size() == 1;
Integer dlqPartitions = extendedConsumerProperties.getExtension().getDlqPartitions();
DlqPartitionFunction partitionFunction = oneFunctionPresent
? partitionFunctions.values().iterator().next()
: DlqPartitionFunction.determineFallbackFunction(dlqPartitions, LOGGER);
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
extendedConsumerProperties.getExtension().getDlqProducerProperties()),
binderConfigurationProperties);
KafkaTemplate<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFactory);
BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver =
(cr, e) -> new TopicPartition(extendedConsumerProperties.getExtension().getDlqName(),
partitionFunction.apply(group, cr, e));
DeadLetterPublishingRecoverer kafkaStreamsBinderDlqRecoverer = !StringUtils
.isEmpty(extendedConsumerProperties.getExtension().getDlqName())
? new KafkaStreamsDlqDispatch(
extendedConsumerProperties.getExtension()
.getDlqName(),
binderConfigurationProperties,
extendedConsumerProperties.getExtension())
: null;
? new DeadLetterPublishingRecoverer(kafkaTemplate, destinationResolver)
: null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(
extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName,
binderConfigurationProperties,
extendedConsumerProperties.getExtension());
destinationResolver = (cr, e) -> new TopicPartition("error." + inputTopic + "." + group,
partitionFunction.apply(group, cr, e));
kafkaStreamsBinderDlqRecoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
destinationResolver);
}
SendToDlqAndContinue sendToDlqAndContinue = context
.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic,
kafkaStreamsDlqDispatch);
kafkaStreamsDlqDispatchers.put(inputTopic, kafkaStreamsDlqDispatch);
kafkaStreamsBinderDlqRecoverer);
}
}
}
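
The recoverer above resolves the DLQ partition through a DlqPartitionFunction looked up from the context; it is used only when exactly one such bean is present, otherwise determineFallbackFunction() supplies the default. A hedged sketch of an overriding bean, with the lambda shaped after the apply(group, record, exception) call sites above (class and bean names are illustrative):

import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqPartitionConfiguration {

	@Bean
	public DlqPartitionFunction dlqPartitionFunction() {
		// Route every dead-lettered record to partition 0 of the DLQ topic,
		// regardless of consumer group, record or exception.
		return (group, record, exception) -> 0;
	}
}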
private static DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
static boolean supportsKStream(MethodParameter methodParameter, Class<?> targetBeanClass) {
return KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
}
static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and is independent from the parent context.
GenericApplicationContext outerContext = (GenericApplicationContext) beanFactory
.getBean("outerContext");
outerContext.registerBean(KafkaStreamsBinderConfigurationProperties.class,
() -> outerContext.getBean(KafkaStreamsBinderConfigurationProperties.class));
outerContext.registerBean(KafkaStreamsBindingInformationCatalogue.class,
() -> outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
}


@@ -16,11 +16,13 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
@@ -49,6 +51,8 @@ class KafkaStreamsBindingInformationCatalogue {
private ResolvableType outboundKStreamResolvable;
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
/**
* For a given bound {@link KStream}, retrieve its corresponding destination on the
* broker.
@@ -132,4 +136,29 @@ class KafkaStreamsBindingInformationCatalogue {
ResolvableType getOutboundKStreamResolvable() {
return outboundKStreamResolvable;
}
/**
* Add a mapping from a KStream target to its corresponding key Serde.
* The mapping is used when sending records to the DLQ after a deserialization failure.
* See {@link KafkaStreamsMessageConversionDelegate} for details.
*
* @param kStreamTarget target KStream
* @param keySerde Serde used for the key
*/
void addKeySerde(KStream<?, ?> kStreamTarget, Serde<?> keySerde) {
this.keySerdeInfo.put(kStreamTarget, keySerde);
}
Serde<?> getKeySerde(KStream<?, ?> kStreamTarget) {
return this.keySerdeInfo.get(kStreamTarget);
}
Map<KStream<?, ?>, BindingProperties> getBindingProperties() {
return bindingProperties;
}
Map<KStream<?, ?>, KafkaStreamsConsumerProperties> getConsumerProperties() {
return consumerProperties;
}
}
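
To make the new key-Serde bookkeeping concrete, a small same-package sketch (builder, topic and Serde choice are assumptions made for illustration): the binder processors record the Serde at binding time, and the conversion delegate later retrieves it to turn the key back into byte[] for the DLQ record.

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

class KeySerdeBookkeepingSketch {

	byte[] keyFor(KafkaStreamsBindingInformationCatalogue catalogue) {
		StreamsBuilder builder = new StreamsBuilder();
		KStream<String, String> input = builder.stream("words");
		catalogue.addKeySerde(input, Serdes.String());      // recorded at binding time
		Serde<?> keySerde = catalogue.getKeySerde(input);   // retrieved when a record is dead-lettered
		// the conversion delegate uses the Serde's serializer to turn the key back into byte[]
		return keySerde.serializer().serialize("words", null);
	}
}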


@@ -1,152 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.ObjectUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
/**
* Send records in error to a DLQ.
*
* @author Soby Chacko
* @author Rafal Zukowski
* @author Gary Russell
*/
class KafkaStreamsDlqDispatch {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaTemplate<byte[], byte[]> kafkaTemplate;
private final String dlqName;
KafkaStreamsDlqDispatch(String dlqName,
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaConsumerProperties kafkaConsumerProperties) {
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
kafkaConsumerProperties.getDlqProducerProperties()),
kafkaBinderConfigurationProperties);
this.kafkaTemplate = new KafkaTemplate<>(producerFactory);
this.dlqName = dlqName;
}
@SuppressWarnings("unchecked")
public void sendToDlq(byte[] key, byte[] value, int partittion) {
ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(this.dlqName,
partittion, key, value, null);
StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key))).append("'")
.append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(value))).append("'")
.append(" received from ").append(partittion);
ListenableFuture<SendResult<byte[], byte[]>> sentDlq = null;
try {
sentDlq = this.kafkaTemplate.send(producerRecord);
sentDlq.addCallback(
new ListenableFutureCallback<SendResult<byte[], byte[]>>() {
@Override
public void onFailure(Throwable ex) {
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}
@Override
public void onSuccess(SendResult<byte[], byte[]> result) {
if (KafkaStreamsDlqDispatch.this.logger.isDebugEnabled()) {
KafkaStreamsDlqDispatch.this.logger
.debug("Sent to DLQ " + sb.toString());
}
}
});
}
catch (Exception ex) {
if (sentDlq == null) {
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}
}
}
private DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
private String toDisplayString(String original) {
if (original.length() <= 50) {
return original;
}
return original.substring(0, 50) + "...";
}
}


@@ -16,13 +16,17 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
@@ -31,35 +35,41 @@ import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.cloud.function.core.FluxedConsumer;
import org.springframework.cloud.function.core.FluxedFunction;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderProcessor {
public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderProcessor implements BeanFactoryAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionProcessor.class);
private static final String OUTBOUND = "outbound";
private final BindingServiceProperties bindingServiceProperties;
private final Map<String, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
@@ -67,12 +77,12 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final FunctionCatalog functionCatalog;
private Set<String> origInputs = new TreeSet<>();
private Set<String> origOutputs = new TreeSet<>();
private ResolvableType outboundResolvableType;
private BeanFactory beanFactory;
private StreamFunctionProperties streamFunctionProperties;
private KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties;
StreamsBuilderFactoryBeanCustomizer customizer;
ConfigurableEnvironment environment;
public KafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@@ -80,8 +90,9 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
CleanupConfig cleanupConfig,
FunctionCatalog functionCatalog,
BindableProxyFactory bindableProxyFactory) {
StreamFunctionProperties streamFunctionProperties,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanCustomizer customizer, ConfigurableEnvironment environment) {
super(bindingServiceProperties, kafkaStreamsBindingInformationCatalogue, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
@@ -89,37 +100,57 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.functionCatalog = functionCatalog;
this.origInputs.addAll(bindableProxyFactory.getInputs());
this.origOutputs.addAll(bindableProxyFactory.getOutputs());
this.streamFunctionProperties = streamFunctionProperties;
this.kafkaStreamsBinderConfigurationProperties = kafkaStreamsBinderConfigurationProperties;
this.customizer = customizer;
this.environment = environment;
}
private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType) {
int inputCount = 1;
ResolvableType resolvableTypeGeneric = resolvableType.getGeneric(1);
while (resolvableTypeGeneric != null && resolvableTypeGeneric.getRawClass() != null && (functionOrConsumerFound(resolvableTypeGeneric))) {
inputCount++;
resolvableTypeGeneric = resolvableTypeGeneric.getGeneric(1);
}
final Set<String> inputs = new TreeSet<>(origInputs);
private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType,
KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory) {
Map<String, ResolvableType> resolvableTypeMap = new LinkedHashMap<>();
final Iterator<String> iterator = inputs.iterator();
if (resolvableType != null && resolvableType.getRawClass() != null) {
int inputCount = 1;
popuateResolvableTypeMap(resolvableType, resolvableTypeMap, iterator);
ResolvableType iterableResType = resolvableType;
for (int i = 1; i < inputCount; i++) {
if (iterator.hasNext()) {
iterableResType = iterableResType.getGeneric(1);
if (iterableResType.getRawClass() != null &&
functionOrConsumerFound(iterableResType)) {
popuateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
}
ResolvableType currentOutputGeneric;
if (resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class)) {
inputCount = 2;
currentOutputGeneric = resolvableType.getGeneric(2);
}
else {
currentOutputGeneric = resolvableType.getGeneric(1);
}
while (currentOutputGeneric.getRawClass() != null && functionOrConsumerFound(currentOutputGeneric)) {
inputCount++;
currentOutputGeneric = currentOutputGeneric.getGeneric(1);
}
final Set<String> inputs = new LinkedHashSet<>(kafkaStreamsBindableProxyFactory.getInputs());
final Iterator<String> iterator = inputs.iterator();
popuateResolvableTypeMap(resolvableType, resolvableTypeMap, iterator);
ResolvableType iterableResType = resolvableType;
int i = resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class) ? 2 : 1;
ResolvableType outboundResolvableType;
if (i == inputCount) {
outboundResolvableType = iterableResType.getGeneric(i);
}
else {
while (i < inputCount && iterator.hasNext()) {
iterableResType = iterableResType.getGeneric(1);
if (iterableResType.getRawClass() != null &&
functionOrConsumerFound(iterableResType)) {
popuateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
}
i++;
}
outboundResolvableType = iterableResType.getGeneric(1);
}
resolvableTypeMap.put(OUTBOUND, outboundResolvableType);
}
outboundResolvableType = iterableResType.getGeneric(1);
return resolvableTypeMap;
}
@@ -128,38 +159,53 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
iterableResType.getRawClass().equals(Consumer.class);
}
private void popuateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap, Iterator<String> iterator) {
private void popuateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap,
Iterator<String> iterator) {
final String next = iterator.next();
resolvableTypeMap.put(next, resolvableType.getGeneric(0));
origInputs.remove(next);
if (resolvableType.getRawClass() != null &&
(resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class))
&& iterator.hasNext()) {
resolvableTypeMap.put(iterator.next(), resolvableType.getGeneric(1));
}
}
/**
* This method must be kept stateless. When an application defines multiple function beans, an isolated
* {@link KafkaStreamsBindableProxyFactory} instance is passed in for each of them. Sharing state across
* invocations would create potential race conditions, so an invocation of this method must not depend on
* state modified by a previous invocation.
*
* @param resolvableType type of the binding
* @param functionName bean name of the function
* @param kafkaStreamsBindableProxyFactory bindable proxy factory for the Kafka Streams type
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
public void setupFunctionInvokerForKafkaStreams(ResolvableType resolvableType, String functionName) {
final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType);
public void setupFunctionInvokerForKafkaStreams(ResolvableType resolvableType, String functionName,
KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory) {
final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType, kafkaStreamsBindableProxyFactory);
ResolvableType outboundResolvableType = stringResolvableTypeMap.remove(OUTBOUND);
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(stringResolvableTypeMap, functionName);
try {
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(Consumer.class)) {
FluxedConsumer fluxedConsumer = functionCatalog.lookup(FluxedConsumer.class, functionName);
Assert.isTrue(fluxedConsumer != null,
"No corresponding consumer beans found in the catalog");
Object target = fluxedConsumer.getTarget();
Consumer<Object> consumer = Consumer.class.isAssignableFrom(target.getClass()) ? (Consumer) target : null;
if (consumer != null) {
consumer.accept(adaptedInboundArguments[0]);
}
Consumer<Object> consumer = (Consumer) this.beanFactory.getBean(functionName);
consumer.accept(adaptedInboundArguments[0]);
}
else if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiConsumer.class)) {
BiConsumer<Object, Object> biConsumer = (BiConsumer) this.beanFactory.getBean(functionName);
biConsumer.accept(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else {
Function<Object, Object> function = functionCatalog.lookup(Function.class, functionName);
Object target = null;
if (function instanceof FluxedFunction) {
target = ((FluxedFunction) function).getTarget();
Object result;
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiFunction.class)) {
BiFunction<Object, Object, Object> biFunction = (BiFunction) beanFactory.getBean(functionName);
result = biFunction.apply(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else {
Function<Object, Object> function = (Function) beanFactory.getBean(functionName);
result = function.apply(adaptedInboundArguments[0]);
}
function = (Function) target;
Assert.isTrue(function != null, "Function bean cannot be null");
Object result = function.apply(adaptedInboundArguments[0]);
int i = 1;
while (result instanceof Function || result instanceof Consumer) {
if (result instanceof Function) {
@@ -174,36 +220,40 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
if (result != null) {
kafkaStreamsBindingInformationCatalogue.setOutboundKStreamResolvable(
outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));
final Set<String> outputs = new TreeSet<>(origOutputs);
final Set<String> outputs = new TreeSet<>(kafkaStreamsBindableProxyFactory.getOutputs());
final Iterator<String> outboundDefinitionIterator = outputs.iterator();
if (result.getClass().isArray()) {
// Bind the output targets now; the output bindings were deferred in the KafkaStreamsBindableProxyFactory
// because the size of the returned array was not known then. At this point in the execution,
// we know the exact number of outbound components (the array length), so do the binding.
final int length = ((Object[]) result).length;
String[] methodAnnotatedOutboundNames = new String[length];
for (int j = 0; j < length; j++) {
if (outboundDefinitionIterator.hasNext()) {
final String next = outboundDefinitionIterator.next();
methodAnnotatedOutboundNames[j] = next;
this.origOutputs.remove(next);
}
}
List<String> outputBindings = getOutputBindings(functionName, length);
Iterator<String> iterator = outputBindings.iterator();
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
Object[] outboundKStreams = (Object[]) result;
int k = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[k++]);
for (int ij = 0; ij < length; ij++) {
String next = iterator.next();
kafkaStreamsBindableProxyFactory.addOutputBinding(next, KStream.class);
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> kafkaStreamsBindableProxyFactory.getOutputHolders().get(next).getBoundTarget());
registry.registerBeanDefinition(next, rootBeanDefinition1);
Object targetBean = this.applicationContext.getBean(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) outboundKStream);
boundElement.wrap((KStream) outboundKStreams[ij]);
}
}
else {
if (outboundDefinitionIterator.hasNext()) {
final String next = outboundDefinitionIterator.next();
Object targetBean = this.applicationContext.getBean(next);
this.origOutputs.remove(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) result);
@@ -217,6 +267,22 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
}
}
private List<String> getOutputBindings(String functionName, int outputs) {
List<String> outputBindings = this.streamFunctionProperties.getOutputBindings(functionName);
List<String> outputBindingNames = new ArrayList<>();
if (!CollectionUtils.isEmpty(outputBindings)) {
outputBindingNames.addAll(outputBindings);
return outputBindingNames;
}
else {
for (int i = 0; i < outputs; i++) {
outputBindingNames.add(String.format("%s-%s-%d", functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX, i));
}
}
return outputBindingNames;
}
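
For context, the deferred array binding above is what a branching function triggers. When no explicit outputBindings are configured for the function (the CollectionUtils.isEmpty check), getOutputBindings() falls back to names of the form functionName-<suffix>-<index>, e.g. something like process-out-0 and process-out-1 assuming the conventional value of FunctionConstants.DEFAULT_OUTPUT_SUFFIX. A hedged sketch of such a function bean (configuration class, bean name and predicates are illustrative):

import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BranchingFunctionConfiguration {

	@Bean
	public Function<KStream<Object, String>, KStream<Object, String>[]> process() {
		// branch() returns a KStream[]; its length is what the isArray() branch in
		// setupFunctionInvokerForKafkaStreams uses to register the output bindings.
		return input -> input.branch(
				(key, value) -> value.startsWith("a"),
				(key, value) -> true);
	}
}

Explicitly configured output binding names, when present, take precedence over these generated defaults, as the method above shows.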
@SuppressWarnings({"unchecked"})
private Object[] adaptAndRetrieveInboundArguments(Map<String, ResolvableType> stringResolvableTypeMap,
String functionName) {
@@ -231,30 +297,36 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(functionName)) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig("stream-builder-" + functionName, applicationContext, input);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext,
input, kafkaStreamsBinderConfigurationProperties, customizer, this.environment, bindingProperties);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean =
this.methodStreamsBuilderFactoryBeanMap.get(functionName);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties =
this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(input);
extendedConsumerProperties.setApplicationId(applicationId);
//get state store spec
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties, stringResolvableTypeMap.get(input));
LOG.info("Key Serde used for " + input + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(input).isUseNativeDecoding() ?
getValueSerde(input, extendedConsumerProperties, stringResolvableTypeMap.get(input)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + input + ": " + valueSerde.getClass().getName());
final Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(input, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(input, bindingProperties,
streamsBuilder, keySerde, valueSerde, autoOffsetReset);
KStream<?, ?> stream = getKStream(input, bindingProperties, extendedConsumerProperties,
streamsBuilder, keySerde, valueSerde, autoOffsetReset, i == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper =
(KStreamBoundElementFactory.KStreamWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
@@ -262,7 +334,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
(stringResolvableTypeMap.get(input).getGeneric(1).getRawClass() != null)
? (stringResolvableTypeMap.get(input).getGeneric(1).getRawClass()) : Object.class;
if (this.kafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(
(KStream) kStreamWrapper)) {
(KStream<?, ?>) kStreamWrapper)) {
arguments[i] = stream;
}
else {
@@ -293,44 +365,8 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
return arguments;
}
private KStream<?, ?> getkStream(String inboundName,
BindingProperties bindingProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Pass through.
}
String[] bindingTargets = StringUtils
.commaDelimitedListToStringArray(this.bindingServiceProperties.getBindingDestination(inboundName));
KStream<?, ?> stream =
streamsBuilder.stream(Arrays.asList(bindingTargets),
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
final boolean nativeDecoding = this.bindingServiceProperties.getConsumerProperties(inboundName)
.isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName + ". " +
"Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName + ". " +
"Inbound message conversion done by Spring Cloud Stream.");
}
return getkStream(bindingProperties, stream, nativeDecoding);
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
}
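
To ground the new BiFunction/BiConsumer branches above, a minimal sketch of the kind of two-input bean they enable (application class, topics, value types and join logic are illustrative assumptions): the first generic parameter is bound as the first input, the second as the second input, and the last as the output resolved by buildTypeMap().

import java.util.function.BiFunction;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class StreamTableJoinApplication {

	public static void main(String[] args) {
		SpringApplication.run(StreamTableJoinApplication.class, args);
	}

	@Bean
	public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> process() {
		// two inputs (buildTypeMap sets inputCount = 2 for BiFunction), one output
		return (stream, table) -> stream.join(table, (streamValue, tableValue) -> streamValue);
	}
}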


@@ -22,18 +22,21 @@ import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
@@ -55,7 +58,7 @@ public class KafkaStreamsMessageConversionDelegate {
private static final ThreadLocal<KeyValue<Object, Object>> keyValueThreadLocal = new ThreadLocal<>();
private final CompositeMessageConverterFactory compositeMessageConverterFactory;
private final CompositeMessageConverter compositeMessageConverter;
private final SendToDlqAndContinue sendToDlqAndContinue;
@@ -63,12 +66,14 @@ public class KafkaStreamsMessageConversionDelegate {
private final KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties;
Exception[] failedWithDeserException = new Exception[1];
KafkaStreamsMessageConversionDelegate(
CompositeMessageConverterFactory compositeMessageConverterFactory,
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
this.compositeMessageConverterFactory = compositeMessageConverterFactory;
this.compositeMessageConverter = compositeMessageConverter;
this.sendToDlqAndContinue = sendToDlqAndContinue;
this.kstreamBindingInformationCatalogue = kstreamBindingInformationCatalogue;
this.kstreamBinderConfigurationProperties = kstreamBinderConfigurationProperties;
@@ -83,22 +88,23 @@ public class KafkaStreamsMessageConversionDelegate {
public KStream serializeOnOutbound(KStream<?, ?> outboundBindTarget) {
String contentType = this.kstreamBindingInformationCatalogue
.getContentType(outboundBindTarget);
MessageConverter messageConverter = this.compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});
final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget
.filter((k, v) -> v != null)
.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});
kStreamWithEnrichedHeaders.process(() -> new Processor() {
@@ -146,8 +152,7 @@ public class KafkaStreamsMessageConversionDelegate {
@SuppressWarnings({ "unchecked", "rawtypes" })
public KStream deserializeOnInbound(Class<?> valueClass,
KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = this.compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
resolvePerRecordContentType(bindingTarget, perRecordContentTypeHolder);
@@ -200,6 +205,7 @@ public class KafkaStreamsMessageConversionDelegate {
"Deserialization has failed. This will be skipped from further processing.",
e);
// pass through
failedWithDeserException[0] = e;
}
return isValidRecord;
},
@@ -207,7 +213,7 @@ public class KafkaStreamsMessageConversionDelegate {
// in the first filter above.
(k, v) -> true);
// process errors from the second filter in the branch above.
processErrorFromDeserialization(bindingTarget, branch[1]);
processErrorFromDeserialization(bindingTarget, branch[1], failedWithDeserException);
// first branch above is the branch where the messages are converted, let it go
// through further processing.
@@ -264,7 +270,7 @@ public class KafkaStreamsMessageConversionDelegate {
@SuppressWarnings({ "unchecked", "rawtypes" })
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget,
KStream<?, ?> branch) {
KStream<?, ?> branch, Exception[] exception) {
branch.process(() -> new Processor() {
ProcessorContext context;
@@ -279,18 +285,25 @@ public class KafkaStreamsMessageConversionDelegate {
if (o2 != null) {
if (KafkaStreamsMessageConversionDelegate.this.kstreamBindingInformationCatalogue
.isDlqEnabled(bindingTarget)) {
String destination = this.context.topic();
if (o2 instanceof Message) {
Message message = (Message) o2;
// We need to convert the key to a byte[] before sending to DLQ.
Serde keySerde = kstreamBindingInformationCatalogue.getKeySerde(bindingTarget);
Serializer keySerializer = keySerde.serializer();
byte[] keyBytes = keySerializer.serialize(null, o);
ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
keyBytes, message.getPayload());
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(destination, (byte[]) o,
(byte[]) message.getPayload(),
this.context.partition());
.sendToDlq(consumerRecord, exception[0]);
}
else {
ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
o, o2);
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(destination, (byte[]) o, (byte[]) o2,
this.context.partition());
.sendToDlq(consumerRecord, exception[0]);
}
}
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties


@@ -16,10 +16,18 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* An internal registry for holding {@link KafkaStreams} objects maintained through
@@ -27,7 +35,9 @@ import org.apache.kafka.streams.KafkaStreams;
*
* @author Soby Chacko
*/
class KafkaStreamsRegistry {
public class KafkaStreamsRegistry {
private Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new HashMap<>();
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();
@@ -37,10 +47,35 @@ class KafkaStreamsRegistry {
/**
* Register the {@link KafkaStreams} object created in the application.
* @param kafkaStreams {@link KafkaStreams} object created in the application
* @param streamsBuilderFactoryBean {@link StreamsBuilderFactoryBean}
*/
void registerKafkaStreams(KafkaStreams kafkaStreams) {
void registerKafkaStreams(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
this.kafkaStreams.add(kafkaStreams);
this.streamsBuilderFactoryBeanMap.put(kafkaStreams, streamsBuilderFactoryBean);
}
/**
* Retrieve the {@link StreamsBuilderFactoryBean} that created the given {@link KafkaStreams} object.
* @param kafkaStreams {@link KafkaStreams} object
* @return Corresponding {@link StreamsBuilderFactoryBean}.
*/
StreamsBuilderFactoryBean streamBuilderFactoryBean(KafkaStreams kafkaStreams) {
return this.streamsBuilderFactoryBeanMap.get(kafkaStreams);
}
public StreamsBuilderFactoryBean streamsBuilderFactoryBean(String applicationId) {
final Optional<StreamsBuilderFactoryBean> first = this.streamsBuilderFactoryBeanMap.values()
.stream()
.filter(streamsBuilderFactoryBean -> streamsBuilderFactoryBean
.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG)
.equals(applicationId))
.findFirst();
return first.orElse(null);
}
public List<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans() {
return new ArrayList<>(this.streamsBuilderFactoryBeanMap.values());
}
}
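
Since the registry and streamsBuilderFactoryBean(String) are now public, other components (for instance the new topology and metrics support) can resolve a running KafkaStreams instance by application id. A hedged sketch of such a lookup (component name, injection style and null handling are illustrative):

import org.apache.kafka.streams.KafkaStreams;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.stereotype.Component;

@Component
public class StreamsStateInspector {

	private final KafkaStreamsRegistry kafkaStreamsRegistry;

	public StreamsStateInspector(KafkaStreamsRegistry kafkaStreamsRegistry) {
		this.kafkaStreamsRegistry = kafkaStreamsRegistry;
	}

	public KafkaStreams.State stateFor(String applicationId) {
		// null when no factory bean carries the given application id
		StreamsBuilderFactoryBean factoryBean = kafkaStreamsRegistry.streamsBuilderFactoryBean(applicationId);
		return factoryBean == null ? null : factoryBean.getKafkaStreams().state();
	}
}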


@@ -18,7 +18,6 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
@@ -31,7 +30,6 @@ import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
@@ -55,7 +53,9 @@ import org.springframework.context.ApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.util.Assert;
@@ -100,6 +100,10 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
StreamsBuilderFactoryBeanCustomizer customizer;
private final ConfigurableEnvironment environment;
KafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties extendedBindingProperties,
@@ -107,7 +111,9 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
KafkaStreamsBindingInformationCatalogue bindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> listenerResultAdapters,
CleanupConfig cleanupConfig) {
CleanupConfig cleanupConfig,
StreamsBuilderFactoryBeanCustomizer customizer,
ConfigurableEnvironment environment) {
super(bindingServiceProperties, bindingInformationCatalogue, extendedBindingProperties, keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = extendedBindingProperties;
@@ -115,6 +121,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
this.kafkaStreamsBindingInformationCatalogue = bindingInformationCatalogue;
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = listenerResultAdapters;
this.customizer = customizer;
this.environment = environment;
}
@Override
@@ -246,40 +254,49 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(method.getDeclaringClass().getSimpleName() + "-" + method.getName(),
applicationContext,
inboundName);
inboundName, null, customizer, this.environment, bindingProperties);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap
.get(method);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
extendedConsumerProperties.setApplicationId(applicationId);
// get state store spec
KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);
Serde<?> keySerde = this.keyValueSerdeResolver
.getInboundKeySerde(extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter));
LOG.info("Key Serde used for " + targetReferenceValue + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding() ?
getValueSerde(inboundName, extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + targetReferenceValue + ": " + valueSerde.getClass().getName());
Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(inboundName, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(inboundName, spec,
bindingProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset);
bindingProperties, extendedConsumerProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset, parameterIndex == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde(stream, keySerde);
BindingProperties bindingProperties1 = this.kafkaStreamsBindingInformationCatalogue.getBindingProperties().get(kStreamWrapper);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {
arguments[parameterIndex] = streamListenerParameterAdapter
.adapt(kStreamWrapper, methodParameter);
.adapt(stream, methodParameter);
break;
}
}
@@ -354,9 +371,10 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
private KStream<?, ?> getkStream(String inboundName,
KafkaStreamsStateStoreProperties storeSpec,
BindingProperties bindingProperties, StreamsBuilder streamsBuilder,
BindingProperties bindingProperties,
KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde,
Topology.AutoOffsetReset autoOffsetReset) {
Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (storeSpec != null) {
StoreBuilder storeBuilder = buildStateStore(storeSpec);
streamsBuilder.addStateStore(storeBuilder);
@@ -364,24 +382,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
}
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
this.bindingServiceProperties.getBindingDestination(inboundName));
KStream<?, ?> stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
final boolean nativeDecoding = this.bindingServiceProperties
.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName
+ ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
return getkStream(bindingProperties, stream, nativeDecoding);
return getKStream(inboundName, bindingProperties, kafkaStreamsConsumerProperties, streamsBuilder,
keySerde, valueSerde, autoOffsetReset, firstBuild);
}
private void validateStreamListenerMethod(StreamListener streamListener,

View File

@@ -16,8 +16,14 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Utils;
@@ -25,13 +31,19 @@ import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.util.ClassUtils;
import org.springframework.util.StringUtils;
/**
@@ -58,12 +70,16 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
* @author Lei Chen
*/
public class KeyValueSerdeResolver {
public class KeyValueSerdeResolver implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(KeyValueSerdeResolver.class);
private final Map<String, Object> streamConfigGlobalProperties;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private ConfigurableApplicationContext context;
KeyValueSerdeResolver(Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.streamConfigGlobalProperties = streamConfigGlobalProperties;
@@ -109,7 +125,6 @@ public class KeyValueSerdeResolver {
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -130,7 +145,6 @@ public class KeyValueSerdeResolver {
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -170,7 +184,6 @@ public class KeyValueSerdeResolver {
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -189,7 +202,6 @@ public class KeyValueSerdeResolver {
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
@@ -248,10 +260,11 @@ public class KeyValueSerdeResolver {
if (resolvableType != null &&
(isResolvalbeKafkaStreamsType(resolvableType) || isResolvableKStreamArrayType(resolvableType))) {
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(0) : resolvableType.getGeneric(0);
keySerde = getSerde(generic);
Serde<?> fallbackSerde = getFallbackSerde("default.key.serde");
keySerde = getSerde(generic, fallbackSerde);
}
if (keySerde == null) {
keySerde = getFallbackSerde("default.key.serde");
keySerde = Serdes.ByteArray();
}
}
keySerde.configure(this.streamConfigGlobalProperties, true);
@@ -272,40 +285,100 @@ public class KeyValueSerdeResolver {
GlobalKTable.class.isAssignableFrom(resolvableType.getRawClass()));
}
private Serde<?> getSerde(ResolvableType generic) {
private Serde<?> getSerde(ResolvableType generic, Serde<?> fallbackSerde) {
Serde<?> serde = null;
if (generic.getRawClass() != null) {
if (Integer.class.isAssignableFrom(generic.getRawClass())) {
Map<String, Serde> beansOfType = context.getBeansOfType(Serde.class);
Serde<?>[] serdeBeans = new Serde<?>[1];
final Class<?> genericRawClazz = generic.getRawClass();
beansOfType.forEach((k, v) -> {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(k))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method[] methods = classObj.getMethods();
Optional<Method> serdeBeanMethod = Arrays.stream(methods).filter(m -> m.getName().equals(k)).findFirst();
if (serdeBeanMethod.isPresent()) {
Method method = serdeBeanMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
ResolvableType serdeBeanGeneric = resolvableType.getGeneric(0);
Class<?> serdeGenericRawClazz = serdeBeanGeneric.getRawClass();
if (serdeGenericRawClazz != null && genericRawClazz != null) {
if (serdeGenericRawClazz.isAssignableFrom(genericRawClazz)) {
serdeBeans[0] = v;
}
}
}
}
catch (Exception e) {
// Pass through...
}
});
if (serdeBeans[0] != null) {
return serdeBeans[0];
}
if (genericRawClazz != null) {
if (Integer.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Integer();
}
else if (Long.class.isAssignableFrom(generic.getRawClass())) {
else if (Long.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Long();
}
else if (Short.class.isAssignableFrom(generic.getRawClass())) {
else if (Short.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Short();
}
else if (Double.class.isAssignableFrom(generic.getRawClass())) {
else if (Double.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Double();
}
else if (Float.class.isAssignableFrom(generic.getRawClass())) {
else if (Float.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.Float();
}
else if (byte[].class.isAssignableFrom(generic.getRawClass())) {
else if (byte[].class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.ByteArray();
}
else if (String.class.isAssignableFrom(generic.getRawClass())) {
else if (String.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.String();
}
else if (UUID.class.isAssignableFrom(genericRawClazz)) {
serde = Serdes.UUID();
}
else if (!isSerdeFromStandardDefaults(fallbackSerde)) {
//User purposely set a default serde that is not one of the above
serde = fallbackSerde;
}
else {
// If the type is Object, then skip assigning the JsonSerde and let the fallback mechanism take precedence.
if (!generic.getRawClass().isAssignableFrom((Object.class))) {
serde = new JsonSerde(generic.getRawClass());
if (!genericRawClazz.isAssignableFrom((Object.class))) {
serde = new JsonSerde(genericRawClazz);
}
}
}
return serde;
}
private boolean isSerdeFromStandardDefaults(Serde<?> serde) {
if (serde != null) {
if (Number.class.isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.ByteArray().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.String().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
else if (Serdes.UUID().getClass().isAssignableFrom(serde.getClass())) {
return true;
}
}
return false;
}
private Serde<?> getValueSerde(String valueSerdeString)
throws ClassNotFoundException {
@@ -316,6 +389,7 @@ public class KeyValueSerdeResolver {
else {
valueSerde = getFallbackSerde("default.value.serde");
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
return valueSerde;
}
@@ -339,16 +413,21 @@ public class KeyValueSerdeResolver {
if (resolvableType != null && ((isResolvalbeKafkaStreamsType(resolvableType)) ||
(isResolvableKStreamArrayType(resolvableType)))) {
Serde<?> fallbackSerde = getFallbackSerde("default.value.serde");
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(1) : resolvableType.getGeneric(1);
valueSerde = getSerde(generic);
valueSerde = getSerde(generic, fallbackSerde);
}
if (valueSerde == null) {
valueSerde = getFallbackSerde("default.value.serde");
valueSerde = Serdes.ByteArray();
}
}
valueSerde.configure(streamConfigGlobalProperties, false);
return valueSerde;
}
@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
context = (ConfigurableApplicationContext) applicationContext;
}
}
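The reworked getSerde lookup above first scans the application context for Serde beans and matches them against the target generic type before falling back to the built-in primitive Serdes, the configured default serde, or JsonSerde. The following is a minimal sketch of a custom Serde bean that such a lookup could pick up; MyEvent is a hypothetical domain type used purely for illustration.

import org.apache.kafka.common.serialization.Serde;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.serializer.JsonSerde;

@Configuration
public class MyEventSerdeConfiguration {

    // Because the bean method's return type is Serde<MyEvent>, a resolver that
    // matches Serde beans by their generic type can select this bean for any
    // binding whose key/value type is MyEvent (or a subtype of it).
    @Bean
    public Serde<MyEvent> myEventSerde() {
        return new JsonSerde<>(MyEvent.class);
    }
}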

View File

@@ -0,0 +1,40 @@
/*
* Copyright 2019-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 3.0.2
*/
@Configuration
public class MultiBinderPropertiesConfiguration {
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
@ConditionalOnBean(name = "outerContext")
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
}

View File

@@ -16,109 +16,48 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.internals.ProcessorContextImpl;
import org.apache.kafka.streams.processor.internals.StreamTask;
import org.springframework.util.ReflectionUtils;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
/**
* Custom implementation for {@link DeserializationExceptionHandler} that sends the
* records in error to a DLQ topic, then continue stream processing on new records.
* Custom implementation for {@link ConsumerRecordRecoverer} that keeps a collection of
* recoverer objects per input topic. These topics may belong to individual input bindings or
* be multiplexed topics within a single binding.
*
* @author Soby Chacko
* @since 2.0.0
*/
public class SendToDlqAndContinue implements DeserializationExceptionHandler {
/**
* Key used for DLQ dispatchers.
*/
public static final String KAFKA_STREAMS_DLQ_DISPATCHERS = "spring.cloud.stream.kafka.streams.dlq.dispatchers";
public class SendToDlqAndContinue implements ConsumerRecordRecoverer {
/**
* DLQ dispatcher per topic in the application context. The key here is not the actual
* DLQ topic but the incoming topic that caused the error.
*/
private Map<String, KafkaStreamsDlqDispatch> dlqDispatchers = new HashMap<>();
private Map<String, DeadLetterPublishingRecoverer> dlqDispatchers = new HashMap<>();
/**
* For a given topic, send the key/value record to DLQ topic.
* @param topic incoming topic that caused the error
* @param key to send
* @param value to send
* @param partition for the topic where this record should be sent
*
* @param consumerRecord consumer record
* @param exception exception
*/
public void sendToDlq(String topic, byte[] key, byte[] value, int partition) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers.get(topic);
kafkaStreamsDlqDispatch.sendToDlq(key, value, partition);
}
@Override
@SuppressWarnings("unchecked")
public DeserializationHandlerResponse handle(ProcessorContext context,
ConsumerRecord<byte[], byte[]> record, Exception exception) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers
.get(record.topic());
kafkaStreamsDlqDispatch.sendToDlq(record.key(), record.value(),
record.partition());
context.commit();
// The following conditional block should be reconsidered when we have a solution
// for this SO problem:
// https://stackoverflow.com/questions/48470899/kafka-streams-deserialization-handler
// Currently it seems that when a deserialization error happens, no commit occurs,
// so the following code uses reflection to get access to the underlying
// KafkaConsumer.
// It works with Kafka 1.0.0, but there is no guarantee it will work in future
// versions of Kafka, since we access private fields by name using reflection;
// it is a temporary fix.
if (context instanceof ProcessorContextImpl) {
ProcessorContextImpl processorContextImpl = (ProcessorContextImpl) context;
Field task = ReflectionUtils.findField(ProcessorContextImpl.class, "task");
ReflectionUtils.makeAccessible(task);
Object taskField = ReflectionUtils.getField(task, processorContextImpl);
if (taskField.getClass().isAssignableFrom(StreamTask.class)) {
StreamTask streamTask = (StreamTask) taskField;
Field consumer = ReflectionUtils.findField(StreamTask.class, "consumer");
ReflectionUtils.makeAccessible(consumer);
Object kafkaConsumerField = ReflectionUtils.getField(consumer,
streamTask);
if (kafkaConsumerField.getClass().isAssignableFrom(KafkaConsumer.class)) {
KafkaConsumer kafkaConsumer = (KafkaConsumer) kafkaConsumerField;
final Map<TopicPartition, OffsetAndMetadata> consumedOffsetsAndMetadata = new HashMap<>();
TopicPartition tp = new TopicPartition(record.topic(),
record.partition());
OffsetAndMetadata oam = new OffsetAndMetadata(record.offset() + 1);
consumedOffsetsAndMetadata.put(tp, oam);
kafkaConsumer.commitSync(consumedOffsetsAndMetadata);
}
}
}
return DeserializationHandlerResponse.CONTINUE;
}
@Override
@SuppressWarnings("unchecked")
public void configure(Map<String, ?> configs) {
this.dlqDispatchers = (Map<String, KafkaStreamsDlqDispatch>) configs
.get(KAFKA_STREAMS_DLQ_DISPATCHERS);
public void sendToDlq(ConsumerRecord<?, ?> consumerRecord, Exception exception) {
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch = this.dlqDispatchers.get(consumerRecord.topic());
kafkaStreamsDlqDispatch.accept(consumerRecord, exception);
}
void addKStreamDlqDispatch(String topic,
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch) {
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch) {
this.dlqDispatchers.put(topic, kafkaStreamsDlqDispatch);
}
@Override
public void accept(ConsumerRecord<?, ?> consumerRecord, Exception e) {
this.dlqDispatchers.get(consumerRecord.topic()).accept(consumerRecord, e);
}
}
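With this change, SendToDlqAndContinue becomes a thin ConsumerRecordRecoverer that delegates to the DeadLetterPublishingRecoverer registered for the record's incoming topic. Below is a minimal sketch of how application code might call it, assuming the binder still exposes SendToDlqAndContinue as a bean that user error handling can invoke; the class and method names around it are illustrative only.

import org.apache.kafka.clients.consumer.ConsumerRecord;

import org.springframework.cloud.stream.binder.kafka.streams.SendToDlqAndContinue;

public class DlqUsageSketch {

    private final SendToDlqAndContinue sendToDlqAndContinue;

    public DlqUsageSketch(SendToDlqAndContinue sendToDlqAndContinue) {
        this.sendToDlqAndContinue = sendToDlqAndContinue;
    }

    // Route a failed record to the DLQ configured for its incoming topic and
    // continue with the next record. The dispatcher map is keyed by the
    // original topic, so the same call works for multiplexed topics as well.
    public void recover(ConsumerRecord<?, ?> failedRecord, Exception cause) {
        this.sendToDlqAndContinue.sendToDlq(failedRecord, cause);
    }
}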

View File

@@ -40,14 +40,15 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics;
private volatile boolean running;
StreamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry, KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
}
@Override
@@ -71,8 +72,10 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
.getStreamsBuilderFactoryBeans();
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(
streamsBuilderFactoryBean.getKafkaStreams());
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
}
if (this.kafkaStreamsBinderMetrics != null) {
this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
}
this.running = true;
}

View File

@@ -0,0 +1,71 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.endpoint;
import java.util.List;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.endpoint.annotation.Selector;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.util.StringUtils;
/**
* Actuator endpoint for topology description.
*
* @author Soby Chacko
* @since 3.0.4
*/
@Endpoint(id = "topology")
public class TopologyEndpoint {
/**
* Topology not found message.
*/
public static final String NO_TOPOLOGY_FOUND_MSG = "No topology found for the given application ID";
private final KafkaStreamsRegistry kafkaStreamsRegistry;
public TopologyEndpoint(KafkaStreamsRegistry kafkaStreamsRegistry) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@ReadOperation
public String topology() {
final List<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsRegistry.streamsBuilderFactoryBeans();
final StringBuilder topologyDescription = new StringBuilder();
streamsBuilderFactoryBeans.stream()
.forEach(streamsBuilderFactoryBean ->
topologyDescription.append(streamsBuilderFactoryBean.getTopology().describe().toString()));
return topologyDescription.toString();
}
@ReadOperation
public String topology(@Selector String applicationId) {
if (!StringUtils.isEmpty(applicationId)) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamsBuilderFactoryBean(applicationId);
if (streamsBuilderFactoryBean != null) {
return streamsBuilderFactoryBean.getTopology().describe().toString();
}
else {
return NO_TOPOLOGY_FOUND_MSG;
}
}
return NO_TOPOLOGY_FOUND_MSG;
}
}
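Since the endpoint id is "topology" and the per-application read operation takes a @Selector argument, Spring Boot's conventional web mapping would expose it under the actuator base path with the application id as a trailing path segment. A minimal client-side sketch follows, assuming the endpoint is exposed over the web (for example via management.endpoints.web.exposure.include) and the application runs on localhost:8080; "my-app-id" is a placeholder.

import org.springframework.web.client.RestTemplate;

public class TopologyEndpointClientSketch {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();

        // Describes the topology of every StreamsBuilderFactoryBean in the registry.
        String allTopologies = rest.getForObject(
                "http://localhost:8080/actuator/topology", String.class);

        // Describes only the topology for the given application id.
        String singleTopology = rest.getForObject(
                "http://localhost:8080/actuator/topology/{applicationId}",
                String.class, "my-app-id");

        System.out.println(allTopologies);
        System.out.println(singleTopology);
    }
}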

View File

@@ -0,0 +1,43 @@
/*
* Copyright 2020-2020 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.endpoint;
import org.springframework.boot.actuate.autoconfigure.endpoint.EndpointAutoConfiguration;
import org.springframework.boot.actuate.autoconfigure.endpoint.condition.ConditionalOnAvailableEndpoint;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 3.0.4
*/
@Configuration
@ConditionalOnClass(name = {
"org.springframework.boot.actuate.endpoint.annotation.Endpoint" })
@AutoConfigureAfter({EndpointAutoConfiguration.class, KafkaStreamsBinderSupportAutoConfiguration.class})
public class TopologyEndpointAutoConfiguration {
@Bean
@ConditionalOnAvailableEndpoint
public TopologyEndpoint topologyEndpoint(KafkaStreamsRegistry kafkaStreamsRegistry) {
return new TopologyEndpoint(kafkaStreamsRegistry);
}
}

View File

@@ -17,15 +17,22 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.factory.BeanFactoryUtils;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
@@ -33,55 +40,72 @@ import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.util.ClassUtils;
import org.springframework.util.CollectionUtils;
/**
* Custom {@link org.springframework.context.annotation.Condition} that detects the presence
* of java.util.function.Function|BiFunction|Consumer|BiConsumer beans. Used for Kafka Streams function support.
*
* @author Soby Chakco
* @author Soby Chacko
* @since 2.2.0
*/
public class FunctionDetectorCondition extends SpringBootCondition {
private static final Log LOG = LogFactory.getLog(FunctionDetectorCondition.class);
@SuppressWarnings({ "unchecked", "rawtypes" })
@Override
public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
if (context != null && context.getBeanFactory() != null) {
Map functionTypes = context.getBeanFactory().getBeansOfType(Function.class);
functionTypes.putAll(context.getBeanFactory().getBeansOfType(Consumer.class));
final Map<String, Object> kstreamFunctions = pruneFunctionBeansForKafkaStreams(functionTypes, context);
if (!kstreamFunctions.isEmpty()) {
return ConditionOutcome.match("Matched. Function/Consumer beans found");
String[] functionTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), Function.class, true, false);
String[] consumerTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), Consumer.class, true, false);
String[] biFunctionTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), BiFunction.class, true, false);
String[] biConsumerTypes = BeanFactoryUtils.beanNamesForTypeIncludingAncestors(context.getBeanFactory(), BiConsumer.class, true, false);
List<String> functionComponents = new ArrayList<>();
functionComponents.addAll(Arrays.asList(functionTypes));
functionComponents.addAll(Arrays.asList(consumerTypes));
functionComponents.addAll(Arrays.asList(biFunctionTypes));
functionComponents.addAll(Arrays.asList(biConsumerTypes));
List<String> kafkaStreamsFunctions = pruneFunctionBeansForKafkaStreams(functionComponents, context);
if (!CollectionUtils.isEmpty(kafkaStreamsFunctions)) {
return ConditionOutcome.match("Matched. Function/BiFunction/Consumer beans found");
}
else {
return ConditionOutcome.noMatch("No match. No Function/Consumer beans found");
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
}
return ConditionOutcome.noMatch("No match. No Function/Consumer beans found");
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
private static <T> Map<String, T> pruneFunctionBeansForKafkaStreams(Map<String, T> originalFunctionBeans,
private static List<String> pruneFunctionBeansForKafkaStreams(List<String> strings,
ConditionContext context) {
final Map<String, T> prunedMap = new HashMap<>();
final List<String> prunedList = new ArrayList<>();
for (String key : originalFunctionBeans.keySet()) {
for (String key : strings) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedMap.put(key, originalFunctionBeans.get(key));
Method[] methods = classObj.getMethods();
Optional<Method> kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
if (kafkaStreamMethod.isPresent()) {
Method method = kafkaStreamMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedList.add(key);
}
}
}
catch (NoSuchMethodException e) {
//ignore
catch (Exception e) {
LOG.error("Function not found: " + key, e);
}
}
return prunedMap;
return prunedList;
}
}
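The condition now also considers BiFunction and BiConsumer beans, and it only matches beans whose first generic parameter resolves to KStream, KTable or GlobalKTable. A minimal sketch of a function bean the condition would detect is shown below; the bean name "process" and the String types are illustrative.

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UppercaseProcessorSketch {

    // The first generic parameter of the bean method's return type is KStream,
    // so the detector condition treats this as a Kafka Streams function bean.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input.mapValues(value -> value.toUpperCase());
    }
}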

View File

@@ -0,0 +1,236 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binding.AbstractBindableProxyFactory;
import org.springframework.cloud.stream.binding.BoundTargetHolder;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
/**
* Kafka Streams specific target bindings proxy factory. See {@link AbstractBindableProxyFactory} for more details.
*
* Targets bound by this factory:
*
* {@link KStream}
* {@link KTable}
* {@link GlobalKTable}
*
* This class looks at the Function bean's return signature as a {@link ResolvableType} and introspects the individual types,
* binding them along the way.
*
* All types on the {@link ResolvableType} are bound except for KStream[] array types on the outbound, whose binding is
* deferred to a later stage. The reason is that, at this point, this class has no way to know
* the actual size of the returned array. That has to wait until the function is invoked and a result is available.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory implements InitializingBean, BeanFactoryAware {
@Autowired
private StreamFunctionProperties streamFunctionProperties;
private final ResolvableType type;
private final String functionName;
private BeanFactory beanFactory;
public KafkaStreamsBindableProxyFactory(ResolvableType type, String functionName) {
super(type.getType().getClass());
this.type = type;
this.functionName = functionName;
}
@Override
public void afterPropertiesSet() {
Assert.notEmpty(KafkaStreamsBindableProxyFactory.this.bindingTargetFactories,
"'bindingTargetFactories' cannot be empty");
int resolvableTypeDepthCounter = 0;
ResolvableType argument = this.type.getGeneric(resolvableTypeDepthCounter++);
List<String> inputBindings = buildInputBindings();
Iterator<String> iterator = inputBindings.iterator();
String next = iterator.next();
bindInput(argument, next);
if (this.type.getRawClass() != null &&
(this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
this.type.getRawClass().isAssignableFrom(BiConsumer.class))) {
argument = this.type.getGeneric(resolvableTypeDepthCounter++);
next = iterator.next();
bindInput(argument, next);
}
ResolvableType outboundArgument = this.type.getGeneric(resolvableTypeDepthCounter);
while (isAnotherFunctionOrConsumerFound(outboundArgument)) {
//The function is a curried function. We should introspect the partial function chain hierarchy.
argument = outboundArgument.getGeneric(0);
String next1 = iterator.next();
bindInput(argument, next1);
outboundArgument = outboundArgument.getGeneric(1);
}
//Introspect output for binding.
if (outboundArgument != null && outboundArgument.getRawClass() != null && (!outboundArgument.isArray() &&
outboundArgument.getRawClass().isAssignableFrom(KStream.class))) {
// if the type is array, we need to do a late binding as we don't know the number of
// output bindings at this point in the flow.
List<String> outputBindings = streamFunctionProperties.getOutputBindings(this.functionName);
String outputBinding = null;
if (!CollectionUtils.isEmpty(outputBindings)) {
Iterator<String> outputBindingsIter = outputBindings.iterator();
if (outputBindingsIter.hasNext()) {
outputBinding = outputBindingsIter.next();
}
}
else {
outputBinding = String.format("%s-%s-0", this.functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX);
}
Assert.isTrue(outputBinding != null, "output binding is not inferred.");
KafkaStreamsBindableProxyFactory.this.outputHolders.put(outputBinding,
new BoundTargetHolder(getBindingTargetFactory(KStream.class)
.createOutput(outputBinding), true));
String outputBinding1 = outputBinding;
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> outputHolders.get(outputBinding1).getBoundTarget());
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
registry.registerBeanDefinition(outputBinding1, rootBeanDefinition1);
}
}
private boolean isAnotherFunctionOrConsumerFound(ResolvableType arg1) {
return arg1 != null && !arg1.isArray() && arg1.getRawClass() != null &&
(arg1.getRawClass().isAssignableFrom(Function.class) || arg1.getRawClass().isAssignableFrom(Consumer.class));
}
/**
* If the application provides the property spring.cloud.stream.function.inputBindings.functionName,
* that takes precedence. Otherwise, use functionName-input or functionName-input-0, functionName-input-1 and so on
* for multiple inputs.
*
* @return an ordered collection of input bindings to use
*/
private List<String> buildInputBindings() {
List<String> inputs = new ArrayList<>();
List<String> inputBindings = streamFunctionProperties.getInputBindings(this.functionName);
if (!CollectionUtils.isEmpty(inputBindings)) {
inputs.addAll(inputBindings);
return inputs;
}
int numberOfInputs = this.type.getRawClass() != null &&
(this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
this.type.getRawClass().isAssignableFrom(BiConsumer.class)) ? 2 : getNumberOfInputs();
int i = 0;
while (i < numberOfInputs) {
inputs.add(String.format("%s-%s-%d", this.functionName, FunctionConstants.DEFAULT_INPUT_SUFFIX, i++));
}
return inputs;
}
private int getNumberOfInputs() {
int numberOfInputs = 1;
ResolvableType arg1 = this.type.getGeneric(1);
while (isAnotherFunctionOrConsumerFound(arg1)) {
arg1 = arg1.getGeneric(1);
numberOfInputs++;
}
return numberOfInputs;
}
private void bindInput(ResolvableType arg0, String inputName) {
if (arg0.getRawClass() != null) {
KafkaStreamsBindableProxyFactory.this.inputHolders.put(inputName,
new BoundTargetHolder(getBindingTargetFactory(arg0.getRawClass())
.createInput(inputName), true));
}
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition();
rootBeanDefinition.setInstanceSupplier(() -> inputHolders.get(inputName).getBoundTarget());
registry.registerBeanDefinition(inputName, rootBeanDefinition);
}
@Override
public Set<String> getInputs() {
Set<String> ins = new LinkedHashSet<>();
this.inputHolders.forEach((s, BoundTargetHolder) -> ins.add(s));
return ins;
}
@Override
public Set<String> getOutputs() {
Set<String> outs = new LinkedHashSet<>();
this.outputHolders.forEach((s, BoundTargetHolder) -> outs.add(s));
return outs;
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
public void addOutputBinding(String output, Class<?> clazz) {
KafkaStreamsBindableProxyFactory.this.outputHolders.put(output,
new BoundTargetHolder(getBindingTargetFactory(clazz)
.createOutput(output), true));
}
public String getFunctionName() {
return functionName;
}
public Map<String, BoundTargetHolder> getOutputHolders() {
return outputHolders;
}
}
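For a BiFunction or BiConsumer the factory binds two inputs, and for curried functions it walks the partial function chain, binding one input per level, with default binding names derived from the function name, FunctionConstants.DEFAULT_INPUT_SUFFIX and an index unless overridden through the stream function binding properties. A minimal sketch of a two-input function this factory could bind is below; the bean name "joiner" and the String key/value types are illustrative.

import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JoinProcessorSketch {

    // Two inputs (a KStream and a KTable) and one KStream output: the proxy
    // factory creates one input binding per argument and a single output
    // binding for the returned KStream.
    @Bean
    public BiFunction<KStream<String, String>, KTable<String, String>, KStream<String, String>> joiner() {
        return (orders, customers) ->
                orders.join(customers, (order, customer) -> order + "->" + customer);
    }
}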

View File

@@ -35,13 +35,15 @@ public class KafkaStreamsFunctionAutoConfiguration {
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessorInvoker kafkaStreamsFunctionProcessorInvoker(
KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories) {
return new KafkaStreamsFunctionProcessorInvoker(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes(),
kafkaStreamsFunctionProcessor);
kafkaStreamsFunctionProcessor, kafkaStreamsBindableProxyFactories);
}
@Bean
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor() {
return new KafkaStreamsFunctionBeanPostProcessor();
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
return new KafkaStreamsFunctionBeanPostProcessor(streamFunctionProperties);
}
}

View File

@@ -17,18 +17,33 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.TreeMap;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.ClassUtils;
@@ -38,22 +53,53 @@ import org.springframework.util.ClassUtils;
* @since 2.2.0
*
*/
class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionBeanPostProcessor.class);
private static final String[] EXCLUDE_FUNCTIONS = new String[]{"functionRouter", "sendToDlqAndContinue"};
private ConfigurableListableBeanFactory beanFactory;
private boolean onlySingleFunction;
private Map<String, ResolvableType> resolvableTypeMap = new TreeMap<>();
private final StreamFunctionProperties streamFunctionProperties;
public KafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
this.streamFunctionProperties = streamFunctionProperties;
}
public Map<String, ResolvableType> getResolvableTypes() {
return this.resolvableTypeMap;
}
@Override
public void afterPropertiesSet() {
String[] functionNames = this.beanFactory.getBeanNamesForType(Function.class);
String[] biFunctionNames = this.beanFactory.getBeanNamesForType(BiFunction.class);
String[] consumerNames = this.beanFactory.getBeanNamesForType(Consumer.class);
String[] biConsumerNames = this.beanFactory.getBeanNamesForType(BiConsumer.class);
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)).forEach(this::extractResolvableTypes);
final Stream<String> concat = Stream.concat(
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)),
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)));
final List<String> collect = concat.collect(Collectors.toList());
collect.removeIf(s -> Arrays.stream(EXCLUDE_FUNCTIONS).anyMatch(t -> t.equals(s)));
onlySingleFunction = collect.size() == 1;
collect.stream()
.forEach(this::extractResolvableTypes);
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
for (String s : getResolvableTypes().keySet()) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(getResolvableTypes().get(s));
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(s);
registry.registerBeanDefinition("kafkaStreamsBindableProxyFactory-" + s, rootBeanDefinition);
}
}
private void extractResolvableTypes(String key) {
@@ -62,12 +108,30 @@ class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFac
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
resolvableTypeMap.put(key, resolvableType);
Method[] methods = classObj.getMethods();
Optional<Method> kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
if (kafkaStreamMethod.isPresent()) {
Method method = kafkaStreamMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
if (onlySingleFunction) {
resolvableTypeMap.put(key, resolvableType);
}
else {
final String definition = streamFunctionProperties.getDefinition();
if (definition == null) {
throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
}
else if (definition.contains(key)) {
resolvableTypeMap.put(key, resolvableType);
}
}
}
}
}
catch (NoSuchMethodException e) {
//ignore
catch (Exception e) {
LOG.error("Function activation issues while mapping the function: " + key, e);
}
}
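When more than one eligible function bean is present, the post processor no longer guesses: it only maps the functions named in the function definition property (typically spring.cloud.stream.function.definition in Spring Cloud Stream applications, although the exact property key is outside this diff) and throws an IllegalStateException when no definition is set. A minimal sketch of an application with two Kafka Streams function beans follows; the bean names are illustrative.

import java.util.function.Consumer;
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TwoFunctionsSketch {

    // With two eligible beans, only the ones named in the function definition
    // property (e.g. a definition of "enrich;audit") are mapped; if no
    // definition is provided at all, an IllegalStateException is thrown.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> enrich() {
        return input -> input.mapValues(value -> "enriched:" + value);
    }

    @Bean
    public Consumer<KStream<String, String>> audit() {
        return input -> input.foreach((key, value) -> System.out.println(key + "=" + value));
    }
}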

View File

@@ -16,7 +16,9 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import javax.annotation.PostConstruct;
@@ -28,20 +30,26 @@ import org.springframework.core.ResolvableType;
* @author Soby Chacko
* @since 2.1.0
*/
class KafkaStreamsFunctionProcessorInvoker {
public class KafkaStreamsFunctionProcessorInvoker {
private final KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor;
private final Map<String, ResolvableType> resolvableTypeMap;
private final KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories;
KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
public KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories) {
this.kafkaStreamsFunctionProcessor = kafkaStreamsFunctionProcessor;
this.resolvableTypeMap = resolvableTypeMap;
this.kafkaStreamsBindableProxyFactories = kafkaStreamsBindableProxyFactories;
}
@PostConstruct
void invoke() {
resolvableTypeMap.forEach((key, value) ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(value, key));
resolvableTypeMap.forEach((key, value) -> {
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(key)).findFirst();
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(value, key, proxyFactory.get());
});
}
}

View File

@@ -1,43 +0,0 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Type;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.function.context.WrapperDetector;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionWrapperDetector implements WrapperDetector {
@Override
public boolean isWrapper(Type type) {
if (type instanceof Class<?>) {
Class<?> cls = (Class<?>) type;
return KStream.class.isAssignableFrom(cls) ||
KTable.class.isAssignableFrom(cls) ||
GlobalKTable.class.isAssignableFrom(cls);
}
return false;
}
}

View File

@@ -1,72 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
/**
* {@link ConfigurationProperties} that can be used by end user Kafka Stream applications.
* This class provides convenient ways to access the commonly used kafka stream properties
* from the user application. For example, windowing operations are common use cases in
* stream processing and one can provide window specific properties at runtime and use
* those properties in the applications using this class.
*
* @deprecated The properties exposed by this class can be used directly on Kafka Streams API in the application.
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
@Deprecated
public class KafkaStreamsApplicationSupportProperties {
private TimeWindow timeWindow;
public TimeWindow getTimeWindow() {
return this.timeWindow;
}
public void setTimeWindow(TimeWindow timeWindow) {
this.timeWindow = timeWindow;
}
/**
* Properties required by time windows.
*/
public static class TimeWindow {
private int length;
private int advanceBy;
public int getLength() {
return this.length;
}
public void setLength(int length) {
this.length = length;
}
public int getAdvanceBy() {
return this.advanceBy;
}
public void setAdvanceBy(int advanceBy) {
this.advanceBy = advanceBy;
}
}
}

View File

@@ -16,8 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
/**
* Kafka Streams binder configuration properties.
@@ -34,7 +38,10 @@ public class KafkaStreamsBinderConfigurationProperties
/**
* Enumeration for various Serde errors.
*
* @deprecated in favor of {@link DeserializationExceptionHandler}.
*/
@Deprecated
public enum SerdeError {
/**
@@ -54,6 +61,36 @@ public class KafkaStreamsBinderConfigurationProperties
private String applicationId;
private StateStoreRetry stateStoreRetry = new StateStoreRetry();
private Map<String, Functions> functions = new HashMap<>();
private KafkaStreamsBinderConfigurationProperties.SerdeError serdeError;
/**
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use when
* there is a deserialization exception. This handler will be applied against all input bindings
* unless overridden at the consumer binding.
*/
private DeserializationExceptionHandler deserializationExceptionHandler;
public Map<String, Functions> getFunctions() {
return functions;
}
public void setFunctions(Map<String, Functions> functions) {
this.functions = functions;
}
public StateStoreRetry getStateStoreRetry() {
return stateStoreRetry;
}
public void setStateStoreRetry(StateStoreRetry stateStoreRetry) {
this.stateStoreRetry = stateStoreRetry;
}
public String getApplicationId() {
return this.applicationId;
}
@@ -62,21 +99,84 @@ public class KafkaStreamsBinderConfigurationProperties
this.applicationId = applicationId;
}
/**
* {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} to use when
* there is a Serde error.
* {@link KafkaStreamsBinderConfigurationProperties.SerdeError} values are used to
* provide the exception handler on consumer binding.
*/
private KafkaStreamsBinderConfigurationProperties.SerdeError serdeError;
@Deprecated
public KafkaStreamsBinderConfigurationProperties.SerdeError getSerdeError() {
return this.serdeError;
}
@Deprecated
public void setSerdeError(
KafkaStreamsBinderConfigurationProperties.SerdeError serdeError) {
this.serdeError = serdeError;
this.serdeError = serdeError;
if (serdeError == SerdeError.logAndContinue) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.logAndContinue;
}
else if (serdeError == SerdeError.logAndFail) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.logAndFail;
}
else if (serdeError == SerdeError.sendToDlq) {
this.deserializationExceptionHandler = DeserializationExceptionHandler.sendToDlq;
}
}
public DeserializationExceptionHandler getDeserializationExceptionHandler() {
return deserializationExceptionHandler;
}
public void setDeserializationExceptionHandler(DeserializationExceptionHandler deserializationExceptionHandler) {
this.deserializationExceptionHandler = deserializationExceptionHandler;
}
public static class StateStoreRetry {
private int maxAttempts = 1;
private long backoffPeriod = 1000;
public int getMaxAttempts() {
return maxAttempts;
}
public void setMaxAttempts(int maxAttempts) {
this.maxAttempts = maxAttempts;
}
public long getBackoffPeriod() {
return backoffPeriod;
}
public void setBackoffPeriod(long backoffPeriod) {
this.backoffPeriod = backoffPeriod;
}
}
public static class Functions {
/**
* Function-specific application id.
*/
private String applicationId;
/**
* Function-specific configuration to use.
*/
private Map<String, String> configuration;
public String getApplicationId() {
return applicationId;
}
public void setApplicationId(String applicationId) {
this.applicationId = applicationId;
}
public Map<String, String> getConfiguration() {
return configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
}
}
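The new binder-level deserializationExceptionHandler property and the per-function Functions map are plain JavaBeans, so they bind under the spring.cloud.stream.kafka.streams.binder prefix in the same way as the rest of this class (see MultiBinderPropertiesConfiguration above). The following is a minimal sketch that exercises the new accessors programmatically; the function name "process" and the configuration values are illustrative, and in a real application these would normally come from configuration rather than code.

import java.util.Collections;

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;

public class BinderPropertiesSketch {

    public static void main(String[] args) {
        KafkaStreamsBinderConfigurationProperties binderProperties =
                new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());

        // Binder-wide deserialization handler, applied to all input bindings
        // unless a consumer binding overrides it.
        binderProperties.setDeserializationExceptionHandler(DeserializationExceptionHandler.sendToDlq);

        // Per-function overrides keyed by the function name.
        KafkaStreamsBinderConfigurationProperties.Functions functions =
                new KafkaStreamsBinderConfigurationProperties.Functions();
        functions.setApplicationId("process-application-id");
        functions.setConfiguration(Collections.singletonMap("commit.interval.ms", "1000"));
        binderProperties.getFunctions().put("process", functions);
    }
}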

View File

@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.DeserializationExceptionHandler;
/**
* Extended properties for Kafka Streams consumer.
@@ -43,6 +44,16 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
*/
private String materializedAs;
/**
* Per input binding deserialization handler.
*/
private DeserializationExceptionHandler deserializationExceptionHandler;
/**
* {@link org.apache.kafka.streams.processor.TimestampExtractor} bean name to use for this consumer.
*/
private String timestampExtractorBeanName;
public String getApplicationId() {
return this.applicationId;
}
@@ -75,4 +86,19 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
this.materializedAs = materializedAs;
}
public String getTimestampExtractorBeanName() {
return timestampExtractorBeanName;
}
public void setTimestampExtractorBeanName(String timestampExtractorBeanName) {
this.timestampExtractorBeanName = timestampExtractorBeanName;
}
public DeserializationExceptionHandler getDeserializationExceptionHandler() {
return deserializationExceptionHandler;
}
public void setDeserializationExceptionHandler(DeserializationExceptionHandler deserializationExceptionHandler) {
this.deserializationExceptionHandler = deserializationExceptionHandler;
}
}
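The timestampExtractorBeanName property refers to a TimestampExtractor bean by name, so the extractor itself is defined as an ordinary Spring bean. A minimal sketch follows, using wall-clock time for every record; the bean name "wallClockExtractor" is only an example of what the consumer binding property would reference.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TimestampExtractorSketch {

    @Bean
    public TimestampExtractor wallClockExtractor() {
        return new TimestampExtractor() {
            @Override
            public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
                // Fall back to processing time for every record.
                return System.currentTimeMillis();
            }
        };
    }
}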

View File

@@ -36,6 +36,11 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
*/
private String valueSerde;
/**
* {@link org.apache.kafka.streams.processor.StreamPartitioner} to be used on Kafka Streams producer.
*/
private String streamPartitionerBeanName;
public String getKeySerde() {
return this.keySerde;
}
@@ -52,4 +57,11 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
this.valueSerde = valueSerde;
}
public String getStreamPartitionerBeanName() {
return this.streamPartitionerBeanName;
}
public void setStreamPartitionerBeanName(String streamPartitionerBeanName) {
this.streamPartitionerBeanName = streamPartitionerBeanName;
}
}
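Similarly, streamPartitionerBeanName points at a StreamPartitioner bean that decides the outbound partition for produced records. A minimal sketch with String keys and values is shown below; the bean name is illustrative.

import org.apache.kafka.streams.processor.StreamPartitioner;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StreamPartitionerSketch {

    // Referenced from the producer binding via streamPartitionerBeanName;
    // routes records to a partition based on the key's hash.
    @Bean
    public StreamPartitioner<String, String> keyHashPartitioner() {
        return (topic, key, value, numPartitions) ->
                key == null ? 0 : (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}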

View File

@@ -0,0 +1,245 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;
import java.util.PriorityQueue;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.kafka.support.serializer.JsonSerde;
/**
* A convenient {@link Serde} for {@link java.util.Collection} implementations.
*
* Whenever a Kafka Streams application needs to collect data into a container object such as a
* {@link java.util.Collection}, this Serde class can be used as a convenience for
* serialization needs. One example of where this may be handy is when the application
* needs to do aggregation or reduction operations and simply needs to hold an
* {@link Iterable} type.
*
* By default, this Serde will use {@link JsonSerde} for serializing the inner objects.
* This can be changed by providing an explicit Serde during creation of this object.
*
* Here is an example of a possible use case:
*
* <pre class="code">
* .aggregate(ArrayList::new,
* (k, v, aggregates) -> {
* aggregates.add(v);
* return aggregates;
* },
* Materialized.&lt;String, Collection&lt;Foo&gt;, WindowStore&lt;Bytes, byte[]&gt;&gt;as(
* "foo-store")
* .withKeySerde(Serdes.String())
* .withValueSerde(new CollectionSerde<>(Foo.class, ArrayList.class)))
* </pre>
*
* The Collection types supported by this Serde are {@link java.util.ArrayList}, {@link java.util.LinkedList},
* {@link java.util.PriorityQueue} and {@link java.util.HashSet}. The deserializer will throw an exception
* if any other Collection type is used.
*
* @param <E> type of the underlying object that the collection holds
* @author Soby Chacko
* @since 3.0.0
*/
public class CollectionSerde<E> implements Serde<Collection<E>> {
/**
* Serde used for serializing the inner object.
*/
private final Serde<Collection<E>> inner;
/**
* Type of the collection class. This has to be a class that
* implements the {@link java.util.Collection} interface.
*/
private final Class<?> collectionClass;
/**
* Constructor to use when the application wants to specify the type
* of the Serde used for the inner object.
*
* @param serde specify an explicit Serde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Serde<E> serde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
this.inner =
Serdes.serdeFrom(
new CollectionSerializer<>(serde.serializer()),
new CollectionDeserializer<>(serde.deserializer(), collectionsClass));
}
/**
* Constructor to delegate serialization operations for the inner objects
* to {@link JsonSerde}.
*
* @param targetTypeForJsonSerde target type used by the JsonSerde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Class<?> targetTypeForJsonSerde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
try (JsonSerde<E> jsonSerde = new JsonSerde(targetTypeForJsonSerde)) {
this.inner = Serdes.serdeFrom(
new CollectionSerializer<>(jsonSerde.serializer()),
new CollectionDeserializer<>(jsonSerde.deserializer(), collectionsClass));
}
}
@Override
public Serializer<Collection<E>> serializer() {
return inner.serializer();
}
@Override
public Deserializer<Collection<E>> deserializer() {
return inner.deserializer();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
inner.serializer().configure(configs, isKey);
inner.deserializer().configure(configs, isKey);
}
@Override
public void close() {
inner.serializer().close();
inner.deserializer().close();
}
private static class CollectionSerializer<E> implements Serializer<Collection<E>> {
private Serializer<E> inner;
CollectionSerializer(Serializer<E> inner) {
this.inner = inner;
}
CollectionSerializer() { }
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public byte[] serialize(String topic, Collection<E> collection) {
final int size = collection.size();
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
final DataOutputStream dos = new DataOutputStream(baos);
final Iterator<E> iterator = collection.iterator();
try {
dos.writeInt(size);
while (iterator.hasNext()) {
final byte[] bytes = inner.serialize(topic, iterator.next());
dos.writeInt(bytes.length);
dos.write(bytes);
}
}
catch (IOException e) {
throw new RuntimeException("Unable to serialize the provided collection", e);
}
return baos.toByteArray();
}
@Override
public void close() {
inner.close();
}
}
private static class CollectionDeserializer<E> implements Deserializer<Collection<E>> {
private final Deserializer<E> valueDeserializer;
private final Class<?> collectionClass;
CollectionDeserializer(final Deserializer<E> valueDeserializer, Class<?> collectionClass) {
this.valueDeserializer = valueDeserializer;
this.collectionClass = collectionClass;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public Collection<E> deserialize(String topic, byte[] bytes) {
if (bytes == null || bytes.length == 0) {
return null;
}
Collection<E> collection = getCollection();
final DataInputStream dataInputStream = new DataInputStream(new ByteArrayInputStream(bytes));
try {
final int records = dataInputStream.readInt();
for (int i = 0; i < records; i++) {
final byte[] valueBytes = new byte[dataInputStream.readInt()];
final int read = dataInputStream.read(valueBytes);
if (read != -1) {
collection.add(valueDeserializer.deserialize(topic, valueBytes));
}
}
}
catch (IOException e) {
throw new RuntimeException("Unable to deserialize collection", e);
}
return collection;
}
@Override
public void close() {
}
private Collection<E> getCollection() {
Collection<E> collection;
if (this.collectionClass.isAssignableFrom(ArrayList.class)) {
collection = new ArrayList<>();
}
else if (this.collectionClass.isAssignableFrom(HashSet.class)) {
collection = new HashSet<>();
}
else if (this.collectionClass.isAssignableFrom(LinkedList.class)) {
collection = new LinkedList<>();
}
else if (this.collectionClass.isAssignableFrom(PriorityQueue.class)) {
collection = new PriorityQueue<>();
}
else {
throw new IllegalArgumentException("Unsupported collection type - " + this.collectionClass);
}
return collection;
}
}
}
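A minimal round-trip sketch of how this Serde might be used on its own, assuming a hypothetical Foo payload type and the JsonSerde-backed constructor shown above; the serializer writes the collection size followed by a length-prefixed byte array per element, and the deserializer reads the same layout back.

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.cloud.stream.binder.kafka.streams.serde.CollectionSerde;

public class CollectionSerdeRoundTrip {

    // Hypothetical payload type; JsonSerde relies on Jackson, so a default constructor and accessors are provided.
    public static class Foo {
        private String name;
        public Foo() { }
        public Foo(String name) { this.name = name; }
        public String getName() { return this.name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // Inner objects are serialized with JsonSerde; the container type is ArrayList.
        CollectionSerde<Foo> serde = new CollectionSerde<>(Foo.class, ArrayList.class);

        Collection<Foo> input = new ArrayList<>();
        input.add(new Foo("one"));
        input.add(new Foo("two"));

        byte[] bytes = serde.serializer().serialize("some-topic", input);
        Collection<Foo> output = serde.deserializer().deserialize("some-topic", bytes);

        System.out.println(output.size()); // expected: 2
    }
}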

View File

@@ -16,213 +16,22 @@
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
import org.springframework.messaging.converter.CompositeMessageConverter;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s from
* {@link CompositeMessageConverterFactory}.
* This class provides the same functionality as {@link MessageConverterDelegateSerde} and is deprecated.
* It is kept for backward compatibility reasons and will be removed in version 3.1.
*
* The primary motivation for this class is to provide an avro based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based
* application, the applications can deserialize the incoming Kafka Streams records using
* the built-in Avro {@link MessageConverter}. However, the same message conversion
* approach does not work downstream in other operations of the Kafka Streams topology,
* as some of those operations need a {@link Serde} instance that can talk to the Schema Registry
* provided by Spring Cloud Stream. This implementation solves that problem.
*
* Only Avro and JSON based converters are exposed as binder provided {@link Serde}
* implementations currently.
*
* Users of this class must call the
* {@link CompositeNonNativeSerde#configure(Map, boolean)} method to configure the
* {@link Serde} object. At the very least, the configuration map must include a key called
* "valueClass" to indicate the type of the target object for deserialization. If a
* content type other than JSON is needed (currently, Avro is the only other supported format),
* it must be included in the configuration map under the key "contentType".
* For example,
*
* <pre class="code">
* Map&lt;String, Object&gt; config = new HashMap&lt;&gt;();
* config.put("valueClass", Foo.class);
* config.put("contentType", "application/avro");
* </pre>
*
* Then use the above map when calling the configure method.
*
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams
* application that uses Spring Cloud Stream schema registry for schema evolution.
*
* An instance of this class is provided as a bean by the binder configuration, and
* applications can typically autowire that bean. This is the expected usage pattern
* for this class.
*
* @param <T> type of the object to marshall
* @author Soby Chacko
* @since 2.1
*
* @deprecated in favour of {@link MessageConverterDelegateSerde}
*/
public class CompositeNonNativeSerde<T> implements Serde<T> {
private static final String VALUE_CLASS_HEADER = "valueClass";
private static final String AVRO_FORMAT = "avro";
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application",
"*+" + AVRO_FORMAT);
private final CompositeNonNativeDeserializer<T> compositeNonNativeDeserializer;
private final CompositeNonNativeSerializer<T> compositeNonNativeSerializer;
public CompositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.compositeNonNativeDeserializer = new CompositeNonNativeDeserializer<>(
compositeMessageConverterFactory);
this.compositeNonNativeSerializer = new CompositeNonNativeSerializer<>(
compositeMessageConverterFactory);
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.compositeNonNativeDeserializer.configure(configs, isKey);
this.compositeNonNativeSerializer.configure(configs, isKey);
}
@Override
public void close() {
// No-op
}
@Override
public Serializer<T> serializer() {
return this.compositeNonNativeSerializer;
}
@Override
public Deserializer<T> deserializer() {
return this.compositeNonNativeDeserializer;
}
private static MimeType resolveMimeType(Map<String, ?> configs) {
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)) {
String contentType = (String) configs.get(MessageHeaders.CONTENT_TYPE);
if (DEFAULT_AVRO_MIME_TYPE.equals(MimeTypeUtils.parseMimeType(contentType))) {
return DEFAULT_AVRO_MIME_TYPE;
}
else if (contentType.contains("avro")) {
return MimeTypeUtils.parseMimeType("application/avro");
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
/**
* Custom {@link Deserializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <U> parameterized target type for deserialization
*/
private static class CompositeNonNativeDeserializer<U> implements Deserializer<U> {
private final MessageConverter messageConverter;
private MimeType mimeType;
private Class<?> valueClass;
CompositeNonNativeDeserializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER),
"Deserializers must provide a configuration for valueClass.");
final Object valueClass = configs.get(VALUE_CLASS_HEADER);
Assert.isTrue(valueClass instanceof Class,
"Deserializers must provide a valid value for valueClass.");
this.valueClass = (Class<?>) valueClass;
this.mimeType = resolveMimeType(configs);
}
@SuppressWarnings("unchecked")
@Override
public U deserialize(String topic, byte[] data) {
Message<?> message = MessageBuilder.withPayload(data)
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString())
.build();
U messageConverted = (U) this.messageConverter.fromMessage(message,
this.valueClass);
Assert.notNull(messageConverted, "Deserialization failed.");
return messageConverted;
}
@Override
public void close() {
// No-op
}
}
/**
* Custom {@link Serializer} that uses the {@link CompositeMessageConverterFactory}.
*
* @param <V> parameterized type for serialization
*/
private static class CompositeNonNativeSerializer<V> implements Serializer<V> {
private final MessageConverter messageConverter;
private MimeType mimeType;
CompositeNonNativeSerializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.mimeType = resolveMimeType(configs);
}
@Override
public byte[] serialize(String topic, V data) {
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, this.mimeType.toString());
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = this.messageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
// No-op
}
@Deprecated
public class CompositeNonNativeSerde extends MessageConverterDelegateSerde {
public CompositeNonNativeSerde(CompositeMessageConverter compositeMessageConverter) {
super(compositeMessageConverter);
}
}

View File

@@ -0,0 +1,226 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s from
* {@link CompositeMessageConverter}.
*
* The primary motivation for this class is to provide an avro based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
* schema registry support from Spring Cloud Stream in a Kafka Streams binder based
* application, the applications can deserialize the incoming Kafka Streams records using
* the built-in Avro {@link MessageConverter}. However, the same message conversion
* approach does not work downstream in other operations of the Kafka Streams topology,
* as some of those operations need a {@link Serde} instance that can talk to the Schema Registry
* provided by Spring Cloud Stream. This implementation solves that problem.
*
* Only Avro and JSON based converters are exposed as binder provided {@link Serde}
* implementations currently.
*
* Users of this class must call the
* {@link MessageConverterDelegateSerde#configure(Map, boolean)} method to configure the
* {@link Serde} object. At the very least, the configuration map must include a key called
* "valueClass" to indicate the type of the target object for deserialization. If a
* content type other than JSON is needed (currently, Avro is the only other supported format),
* it must be included in the configuration map under the key "contentType".
* For example,
*
* <pre class="code">
* Map&lt;String, Object&gt; config = new HashMap&lt;&gt;();
* config.put("valueClass", Foo.class);
* config.put("contentType", "application/avro");
* </pre>
*
* Then use the above map when calling the configure method.
*
* This class is only intended to be used when writing a Spring Cloud Stream Kafka Streams
* application that uses Spring Cloud Stream schema registry for schema evolution.
*
* An instance of this class is provided as a bean by the binder configuration, and
* applications can typically autowire that bean. This is the expected usage pattern
* for this class.
*
* @param <T> type of the object to marshall
* @author Soby Chacko
* @since 3.0
*/
public class MessageConverterDelegateSerde<T> implements Serde<T> {
private static final String VALUE_CLASS_HEADER = "valueClass";
private static final String AVRO_FORMAT = "avro";
private static final MimeType DEFAULT_AVRO_MIME_TYPE = new MimeType("application",
"*+" + AVRO_FORMAT);
private final MessageConverterDelegateDeserializer<T> messageConverterDelegateDeserializer;
private final MessageConverterDelegateSerializer<T> messageConverterDelegateSerializer;
public MessageConverterDelegateSerde(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverterDelegateDeserializer = new MessageConverterDelegateDeserializer<>(
compositeMessageConverter);
this.messageConverterDelegateSerializer = new MessageConverterDelegateSerializer<>(
compositeMessageConverter);
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.messageConverterDelegateDeserializer.configure(configs, isKey);
this.messageConverterDelegateSerializer.configure(configs, isKey);
}
@Override
public void close() {
// No-op
}
@Override
public Serializer<T> serializer() {
return this.messageConverterDelegateSerializer;
}
@Override
public Deserializer<T> deserializer() {
return this.messageConverterDelegateDeserializer;
}
private static MimeType resolveMimeType(Map<String, ?> configs) {
if (configs.containsKey(MessageHeaders.CONTENT_TYPE)) {
String contentType = (String) configs.get(MessageHeaders.CONTENT_TYPE);
if (DEFAULT_AVRO_MIME_TYPE.equals(MimeTypeUtils.parseMimeType(contentType))) {
return DEFAULT_AVRO_MIME_TYPE;
}
else if (contentType.contains("avro")) {
return MimeTypeUtils.parseMimeType("application/avro");
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
else {
return new MimeType("application", "json", StandardCharsets.UTF_8);
}
}
/**
* Custom {@link Deserializer} that delegates to the configured {@link CompositeMessageConverter}.
*
* @param <U> parameterized target type for deserialization
*/
private static class MessageConverterDelegateDeserializer<U> implements Deserializer<U> {
private final MessageConverter messageConverter;
private MimeType mimeType;
private Class<?> valueClass;
MessageConverterDelegateDeserializer(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
Assert.isTrue(configs.containsKey(VALUE_CLASS_HEADER),
"Deserializers must provide a configuration for valueClass.");
final Object valueClass = configs.get(VALUE_CLASS_HEADER);
Assert.isTrue(valueClass instanceof Class,
"Deserializers must provide a valid value for valueClass.");
this.valueClass = (Class<?>) valueClass;
this.mimeType = resolveMimeType(configs);
}
@SuppressWarnings("unchecked")
@Override
public U deserialize(String topic, byte[] data) {
Message<?> message = MessageBuilder.withPayload(data)
.setHeader(MessageHeaders.CONTENT_TYPE, this.mimeType.toString())
.build();
U messageConverted = (U) this.messageConverter.fromMessage(message,
this.valueClass);
Assert.notNull(messageConverted, "Deserialization failed.");
return messageConverted;
}
@Override
public void close() {
// No-op
}
}
/**
* Custom {@link Serializer} that delegates to the configured {@link CompositeMessageConverter}.
*
* @param <V> parameterized type for serialization
*/
private static class MessageConverterDelegateSerializer<V> implements Serializer<V> {
private final MessageConverter messageConverter;
private MimeType mimeType;
MessageConverterDelegateSerializer(
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
this.mimeType = resolveMimeType(configs);
}
@Override
public byte[] serialize(String topic, V data) {
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, this.mimeType.toString());
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Object payload = this.messageConverter
.toMessage(message.getPayload(), messageHeaders).getPayload();
return (byte[]) payload;
}
@Override
public void close() {
// No-op
}
}
}
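A minimal configuration sketch for this Serde, assuming the binder-provided bean is injected as described in the Javadoc above; Foo is a hypothetical stand-in for the application's payload type.

import java.util.HashMap;
import java.util.Map;

import org.springframework.cloud.stream.binder.kafka.streams.serde.MessageConverterDelegateSerde;

public class FooSerdeConfigurer {

    // Hypothetical payload type used only for illustration.
    public static class Foo {
    }

    public MessageConverterDelegateSerde<Foo> configureValueSerde(MessageConverterDelegateSerde<Foo> serde) {
        Map<String, Object> config = new HashMap<>();
        config.put("valueClass", Foo.class);           // required: target type for deserialization
        config.put("contentType", "application/avro"); // optional: omit to fall back to JSON
        serde.configure(config, false);                // false = configured as a value Serde, not a key Serde
        return serde;
    }
}

The configured instance can then be passed wherever Kafka Streams expects a value Serde, for example through Materialized.withValueSerde(...).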

View File

@@ -1,9 +1,4 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsApplicationSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration
org.springframework.cloud.function.context.WrapperDetector=\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionWrapperDetector
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.endpoint.TopologyEndpointAutoConfiguration

View File

@@ -14,8 +14,9 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
@@ -23,27 +24,32 @@ import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.mockito.Mockito;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -54,6 +60,7 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.internal.verification.VerificationModeFactory.times;
/**
* @author Soby Chacko
@@ -86,6 +93,31 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
consumer.close();
}
@Test
public void testStateStoreRetrievalRetry() {
StreamsBuilderFactoryBean mock = Mockito.mock(StreamsBuilderFactoryBean.class);
KafkaStreams mockKafkaStreams = Mockito.mock(KafkaStreams.class);
Mockito.when(mock.getKafkaStreams()).thenReturn(mockKafkaStreams);
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
kafkaStreamsRegistry.registerKafkaStreams(mock);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);
InteractiveQueryService interactiveQueryService = new InteractiveQueryService(kafkaStreamsRegistry,
binderConfigurationProperties);
QueryableStoreType<ReadOnlyKeyValueStore<Object, Object>> storeType = QueryableStoreTypes.keyValueStore();
try {
interactiveQueryService.getQueryableStore("foo", storeType);
}
catch (Exception ignored) {
}
Mockito.verify(mockKafkaStreams, times(3)).store("foo", storeType);
}
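The test above wires the retry by hand; a usage sketch for an application follows, where the binder property path shown in the comment is an assumption inferred from the stateStoreRetry accessor used here, and the store name and key/value types mirror the count store queried by this test.

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

public class ProductCountQueries {

    // Assumed binder setting mirroring stateStoreRetry.setMaxAttempts(3) above:
    //   spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts=3

    private final InteractiveQueryService interactiveQueryService;

    public ProductCountQueries(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    public Long countFor(Integer productId) {
        // getQueryableStore retries the underlying KafkaStreams.store(...) call, as verified above,
        // until the store is available or the configured attempts are exhausted.
        ReadOnlyKeyValueStore<Integer, Long> store = this.interactiveQueryService
                .getQueryableStore("prod-id-count-store", QueryableStoreTypes.keyValueStore());
        return store.get(productId);
    }
}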
@Test
public void testKstreamBinderWithPojoInputAndStringOuput() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
@@ -103,9 +135,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.application.server"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidateFoo(context);
}
@@ -114,8 +144,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
private void receiveAndValidateFoo(ConfigurableApplicationContext context) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
@@ -134,6 +163,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
InteractiveQueryService interactiveQueryService = context
.getBean(InteractiveQueryService.class);
HostInfo currentHostInfo = interactiveQueryService.getCurrentHostInfo();
assertThat(currentHostInfo.host() + ":" + currentHostInfo.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
@@ -145,6 +175,13 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
HostInfo hostInfoFoo = interactiveQueryService
.getHostInfo("prod-id-count-store-foo", 123, new IntegerSerializer());
assertThat(hostInfoFoo).isNull();
final List<HostInfo> hostInfos = interactiveQueryService.getAllHostsInfo("prod-id-count-store");
assertThat(hostInfos.size()).isEqualTo(1);
final HostInfo hostInfo1 = hostInfos.get(0);
assertThat(hostInfo1.host() + ":" + hostInfo1.port())
.isEqualTo(embeddedKafka.getBrokersAsString());
}
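A sketch of using the getAllHostsInfo call exercised above to list every instance that currently hosts a given state store; the surrounding class and method names are illustrative.

import java.util.List;
import java.util.stream.Collectors;

import org.apache.kafka.streams.state.HostInfo;

import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

public class StoreHostLister {

    private final InteractiveQueryService interactiveQueryService;

    public StoreHostLister(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    public List<String> hostsFor(String storeName) {
        // Collects the host:port pairs of all application instances that hold partitions of the store.
        final List<HostInfo> hostInfos = this.interactiveQueryService.getAllHostsInfo(storeName);
        return hostInfos.stream()
                .map(hostInfo -> hostInfo.host() + ":" + hostInfo.port())
                .collect(Collectors.toList());
    }
}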
@EnableBinding(KafkaStreamsProcessor.class)
@@ -153,7 +190,6 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
@StreamListener("input")
@SendTo("output")
@SuppressWarnings("deprecation")
public KStream<?, String> process(KStream<Object, Product> input) {
return input.filter((key, product) -> product.getId() == 123)
@@ -187,7 +223,6 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
}
}
static class Product {

View File

@@ -16,7 +16,9 @@
package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.junit.ClassRule;
import org.junit.Test;
@@ -27,7 +29,7 @@ import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
/**
* @author Soby Chacko
@@ -35,23 +37,36 @@ import org.springframework.kafka.test.rule.KafkaEmbedded;
public class KafkaStreamsBinderBootstrapTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Test
public void testKafkaStreamsBinderWithCustomEnvironmentCanStart() {
public void testKStreamBinderWithCustomEnvironmentCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.default.consumer.application-id"
+ "=testKafkaStreamsBinderWithCustomEnvironmentCanStart",
"--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.bindings.input.binder=kBind1",
"--spring.cloud.stream.binders.kBind1.type=kstream",
"--spring.cloud.stream.binders.kBind1.environment"
SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id"
+ "=testKStreamBinderWithCustomEnvironmentCanStart",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id"
+ "=testKStreamBinderWithCustomEnvironmentCanStart-foo",
"--spring.cloud.stream.kafka.streams.bindings.input-3.consumer.application-id"
+ "=testKStreamBinderWithCustomEnvironmentCanStart-foobar",
"--spring.cloud.stream.bindings.input-1.destination=foo",
"--spring.cloud.stream.bindings.input-1.binder=kstreamBinder",
"--spring.cloud.stream.binders.kstreamBinder.type=kstream",
"--spring.cloud.stream.binders.kstreamBinder.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kBind1.environment.spring"
+ ".cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
"--spring.cloud.stream.bindings.input-2.destination=bar",
"--spring.cloud.stream.bindings.input-2.binder=ktableBinder",
"--spring.cloud.stream.binders.ktableBinder.type=ktable",
"--spring.cloud.stream.binders.ktableBinder.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
"--spring.cloud.stream.bindings.input-3.destination=foobar",
"--spring.cloud.stream.bindings.input-3.binder=globalktableBinder",
"--spring.cloud.stream.binders.globalktableBinder.type=globalktable",
"--spring.cloud.stream.binders.globalktableBinder.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}
@@ -59,34 +74,58 @@ public class KafkaStreamsBinderBootstrapTest {
@Test
public void testKafkaStreamsBinderWithStandardConfigurationCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.default.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.bindings.input.destination=foo",
SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foo",
"--spring.cloud.stream.kafka.streams.bindings.input-3.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}
@SpringBootApplication
@EnableBinding(StreamSourceProcessor.class)
static class SimpleApplication {
@EnableBinding({SimpleKStreamBinding.class, SimpleKTableBinding.class, SimpleGlobalKTableBinding.class})
static class SimpleKafkaStreamsApplication {
@StreamListener
public void handle(@Input("input") KStream<Object, String> stream) {
public void handle(@Input("input-1") KStream<Object, String> stream) {
}
@StreamListener
public void handleX(@Input("input-2") KTable<Object, String> stream) {
}
@StreamListener
public void handleY(@Input("input-3") GlobalKTable<Object, String> stream) {
}
}
interface StreamSourceProcessor {
interface SimpleKStreamBinding {
@Input("input")
@Input("input-1")
KStream<?, ?> inputStream();
}
interface SimpleKTableBinding {
@Input("input-2")
KTable<?, ?> inputStream();
}
interface SimpleGlobalKTableBinding {
@Input("input-3")
GlobalKTable<?, ?> inputStream();
}
}

View File

@@ -37,11 +37,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -84,25 +79,23 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.function.bindings.process-out-0=output1",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.function.bindings.process-out-1=output2",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.function.bindings.process-out-2=output3",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output1.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output2.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output3.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId" +
"--spring.cloud.stream.kafka.streams.binder.applicationId" +
"=KafkaStreamsBinderWordCountBranchesFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
@@ -182,9 +175,7 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
}
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class WordCountProcessorApplication {
@Bean
@@ -207,18 +198,4 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
}
}
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
}

View File

@@ -19,37 +19,43 @@ package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.TopologyEndpoint;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
@@ -57,20 +63,23 @@ public class KafkaStreamsBinderWordCountFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
"counts", "counts-1", "counts-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
private final static CountDownLatch LATCH = new CountDownLatch(1);
@BeforeClass
public static void setUp() throws Exception {
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2");
}
@AfterClass
@@ -83,30 +92,101 @@ public class KafkaStreamsBinderWordCountFunctionTests {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.bindings.process-out-0.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountFunction",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
//"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
receiveAndValidate("words", "counts");
final MeterRegistry meterRegistry = context.getBean(MeterRegistry.class);
Thread.sleep(100);
assertThat(meterRegistry.get("stream.metrics.commit.total").gauge().value()).isEqualTo(1.0);
assertThat(meterRegistry.get("app.info.start.time.ms").gauge().value()).isNotNaN();
Assert.isTrue(LATCH.await(5, TimeUnit.SECONDS), "Failed to call customizers");
//Testing topology endpoint
final KafkaStreamsRegistry kafkaStreamsRegistry = context.getBean(KafkaStreamsRegistry.class);
final TopologyEndpoint topologyEndpoint = new TopologyEndpoint(kafkaStreamsRegistry);
final String topology1 = topologyEndpoint.topology();
final String topology2 = topologyEndpoint.topology("testKstreamWordCountFunction");
assertThat(topology1).isNotEmpty();
assertThat(topology1).isEqualTo(topology2);
}
}
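A sketch of obtaining the same topology description outside a test, assuming the KafkaStreamsRegistry bean can be injected just as it is looked up above; whether the endpoint is additionally exposed over the HTTP actuator is not assumed here.

import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.TopologyEndpoint;

public class TopologyDescriber {

    private final TopologyEndpoint topologyEndpoint;

    public TopologyDescriber(KafkaStreamsRegistry kafkaStreamsRegistry) {
        this.topologyEndpoint = new TopologyEndpoint(kafkaStreamsRegistry);
    }

    // No-argument variant, asserted above to return a non-empty description.
    public String describeAll() {
        return this.topologyEndpoint.topology();
    }

    // Per-application-id variant, matching topology("testKstreamWordCountFunction") above.
    public String describe(String applicationId) {
        return this.topologyEndpoint.topology(applicationId);
    }
}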
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
@Test
public void testKstreamWordCountFunctionWithGeneratedApplicationId() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
}
}
@Test
public void testKstreamWordCountFunctionWithCustomProducerStreamPartitioner() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName" +
"=streamPartitioner",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-2");
template.sendDefault("foo");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"foo\",\"count\":1")).isTrue();
assertThat(cr.partition() == 0).isTrue();
template.sendDefault("bar");
cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"bar\",\"count\":1")).isTrue();
assertThat(cr.partition() == 1).isTrue();
}
finally {
pf.destroy();
}
}
}
private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.setDefaultTopic(in);
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, out);
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
finally {
@@ -164,23 +244,46 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
public static class WordCountProcessorApplication {
@Autowired
InteractiveQueryService interactiveQueryService;
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
.map((key, value) -> new KeyValue<>(key.key(), new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}
@Bean
public StreamsBuilderFactoryBeanCustomizer customizer() {
return fb -> {
try {
fb.setStateListener((newState, oldState) -> {
});
fb.getObject(); //make sure no exception is thrown at this call.
KafkaStreamsBinderWordCountFunctionTests.LATCH.countDown();
}
catch (Exception e) {
//Nothing to do - when an exception is thrown above, the latch won't be counted down.
}
};
}
@Bean
public StreamPartitioner<String, WordCount> streamPartitioner() {
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
}
}
}
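A configuration sketch mirroring the custom-partitioner test above: the bean name is what the new streamPartitionerBeanName producer property refers to, for example spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName=streamPartitioner, and the WordCount class here is only a placeholder for the processor's output value type.

import org.apache.kafka.streams.processor.StreamPartitioner;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WordCountPartitionerConfig {

    // Placeholder for the output value type of the processor (illustrative only).
    public static class WordCount {
    }

    @Bean
    public StreamPartitioner<String, WordCount> streamPartitioner() {
        // Route records keyed "foo" to partition 0 and everything else to partition 1,
        // matching the partition assertions in the test above.
        return (topic, key, value, numPartitions) -> "foo".equals(key) ? 0 : 1;
    }
}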

View File

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
@@ -33,8 +34,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -60,8 +59,8 @@ public class KafkaStreamsFunctionStateStoreTests {
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.kafka.streams.binder.application-id=testKafkaStreamsFuncionWithMultipleStateStores",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -97,9 +96,8 @@ public class KafkaStreamsFunctionStateStoreTests {
}
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
static class StateStoreTestApplication {
public static class StateStoreTestApplication {
KeyValueStore<Long, Long> state1;
WindowStore<Long, Long> state2;
@@ -107,9 +105,9 @@ public class KafkaStreamsFunctionStateStoreTests {
boolean processed;
@Bean
public java.util.function.Consumer<KStream<Object, String>> process() {
return input ->
input.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
public java.util.function.BiConsumer<KStream<Object, String>, KStream<Object, String>> process() {
return (input0, input1) ->
input0.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
@@ -145,14 +143,9 @@ public class KafkaStreamsFunctionStateStoreTests {
public StoreBuilder otherStore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("other-store",
1L, 3, 3L, false), Serdes.Long(),
Duration.ofSeconds(3), Duration.ofSeconds(3), false), Serdes.Long(),
Serdes.Long());
}
}
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
}
}

View File

@@ -0,0 +1,211 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
public class MultipleFunctionsInSameAppTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"coffee", "electronics");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
private static CountDownLatch countDownLatch = new CountDownLatch(2);
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("purchase-groups", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "coffee", "electronics");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testMultiFunctionsInSameApp() throws InterruptedException {
SpringApplication app = new SpringApplication(MultipleFunctionsInSameApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process;analyze",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.bindings.process-out-1.destination=electronics",
"--spring.cloud.stream.bindings.analyze-in-0.destination=coffee",
"--spring.cloud.stream.bindings.analyze-in-1.destination=electronics",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.applicationId=analyze-id-0",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.bindings.process-in-0.consumer.concurrency=2",
"--spring.cloud.stream.bindings.analyze-in-0.consumer.concurrency=1",
"--spring.cloud.stream.kafka.streams.binder.functions.process.configuration.client.id=process-client",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.configuration.client.id=analyze-client",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("purchases", "coffee", "electronics");
StreamsBuilderFactoryBean processStreamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean analyzeStreamsBuilderFactoryBean = context
.getBean("&stream-builder-analyze", StreamsBuilderFactoryBean.class);
final Properties processStreamsConfiguration = processStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties analyzeStreamsConfiguration = analyzeStreamsBuilderFactoryBean.getStreamsConfiguration();
assertThat(processStreamsConfiguration.getProperty("client.id")).isEqualTo("process-client");
assertThat(analyzeStreamsConfiguration.getProperty("client.id")).isEqualTo("analyze-client");
Integer concurrency = (Integer) processStreamsBuilderFactoryBean.getStreamsConfiguration()
.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(2);
concurrency = (Integer) analyzeStreamsBuilderFactoryBean.getStreamsConfiguration()
.get(StreamsConfig.NUM_STREAM_THREADS_CONFIG);
assertThat(concurrency).isEqualTo(1);
}
}
@Test
public void testMultiFunctionsInSameAppWithMultiBinders() throws InterruptedException {
SpringApplication app = new SpringApplication(MultipleFunctionsInSameApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process;analyze",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.startOffset=latest",
"--spring.cloud.stream.bindings.process-in-0.binder=kafka1",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.bindings.process-out-0.binder=kafka1",
"--spring.cloud.stream.bindings.process-out-1.destination=electronics",
"--spring.cloud.stream.bindings.process-out-1.binder=kafka1",
"--spring.cloud.stream.bindings.analyze-in-0.destination=coffee",
"--spring.cloud.stream.bindings.analyze-in-0.binder=kafka2",
"--spring.cloud.stream.bindings.analyze-in-1.destination=electronics",
"--spring.cloud.stream.bindings.analyze-in-1.binder=kafka2",
"--spring.cloud.stream.binders.kafka1.type=kstream",
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.applicationId=my-app-1",
"--spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.configuration.client.id=process-client",
"--spring.cloud.stream.binders.kafka2.type=kstream",
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.applicationId=my-app-2",
"--spring.cloud.stream.binders.kafka2.environment.spring.cloud.stream.kafka.streams.binder.configuration.client.id=analyze-client")) {
Thread.sleep(1000);
receiveAndValidate("purchases", "coffee", "electronics");
StreamsBuilderFactoryBean processStreamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
StreamsBuilderFactoryBean analyzeStreamsBuilderFactoryBean = context
.getBean("&stream-builder-analyze", StreamsBuilderFactoryBean.class);
final Properties processStreamsConfiguration = processStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties analyzeStreamsConfiguration = analyzeStreamsBuilderFactoryBean.getStreamsConfiguration();
assertThat(processStreamsConfiguration.getProperty("application.id")).isEqualTo("my-app-1");
assertThat(analyzeStreamsConfiguration.getProperty("application.id")).isEqualTo("my-app-2");
assertThat(processStreamsConfiguration.getProperty("client.id")).isEqualTo("process-client");
assertThat(analyzeStreamsConfiguration.getProperty("client.id")).isEqualTo("analyze-client");
}
}
private void receiveAndValidate(String in, String... out) throws InterruptedException {
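// Sends one "coffee" and one "electronics" record to the input topic, asserts that each
// lands on its corresponding branched output topic, and waits on the latch to confirm
// the analyze BiConsumer observed both records.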
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic(in);
template.sendDefault("coffee");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, out[0]);
assertThat(cr.value().contains("coffee")).isTrue();
template.sendDefault("electronics");
cr = KafkaTestUtils.getSingleRecord(consumer, out[1]);
assertThat(cr.value().contains("electronics")).isTrue();
Assert.isTrue(countDownLatch.await(5, TimeUnit.SECONDS), "Analyze (BiConsumer) method didn't receive all the expected records");
}
finally {
pf.destroy();
}
}
@EnableAutoConfiguration
public static class MultipleFunctionsInSameApp {
@Bean
public Function<KStream<String, String>, KStream<String, String>[]> process() {
return input -> input.branch(
(s, p) -> p.equalsIgnoreCase("coffee"),
(s, p) -> p.equalsIgnoreCase("electronics"));
}
@Bean
public BiConsumer<KStream<String, String>, KStream<String, String>> analyze() {
return (coffee, electronics) -> {
coffee.foreach((s, p) -> countDownLatch.countDown());
electronics.foreach((s, p) -> countDownLatch.countDown());
};
}
}
}


@@ -0,0 +1,121 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Date;
import java.util.function.Function;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.KeyValueSerdeResolver;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.util.Assert;
public class SerdesProvidedAsBeansTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true);
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testKstreamWordCountFunction() throws NoSuchMethodException {
SpringApplication app = new SpringApplication(SerdeProvidedAsBeanApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final Method method = SerdeProvidedAsBeanApp.class.getMethod("process");
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, SerdeProvidedAsBeanApp.class);
final KeyValueSerdeResolver keyValueSerdeResolver = context.getBean(KeyValueSerdeResolver.class);
final BindingServiceProperties bindingServiceProperties = context.getBean(BindingServiceProperties.class);
final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);
final ConsumerProperties consumerProperties = bindingServiceProperties.getBindingProperties("process-in-0").getConsumer();
final KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties = kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties("input");
final Serde<?> inboundValueSerde = keyValueSerdeResolver.getInboundValueSerde(consumerProperties, kafkaStreamsConsumerProperties, resolvableType.getGeneric(0));
Assert.isTrue(inboundValueSerde instanceof FooSerde, "Inbound value Serde does not match the FooSerde bean");
final ProducerProperties producerProperties = bindingServiceProperties.getBindingProperties("process-out-0").getProducer();
final KafkaStreamsProducerProperties kafkaStreamsProducerProperties = kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties("output");
final Serde<?> outboundValueSerde = keyValueSerdeResolver.getOutboundValueSerde(producerProperties, kafkaStreamsProducerProperties, resolvableType.getGeneric(1));
Assert.isTrue(outboundValueSerde instanceof FooSerde, "Outbound value Serde does not match the FooSerde bean");
}
}
static class FooSerde<T> implements Serde<T> {
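// Minimal Serde used only to verify bean-based serde resolution; its serializer and
// deserializer are never exercised in this test.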
@Override
public Serializer<T> serializer() {
return null;
}
@Override
public Deserializer<T> deserializer() {
return null;
}
}
@EnableAutoConfiguration
public static class SerdeProvidedAsBeanApp {
@Bean
public Function<KStream<String, Date>, KStream<String, Date>> process() {
return input -> input;
}
@Bean
public Serde<Date> fooSerde() {
return new FooSerde<>();
}
}
}


@@ -32,17 +32,17 @@ import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -72,19 +72,24 @@ public class StreamToGlobalKTableFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.input-x.destination=customers",
"--spring.cloud.stream.bindings.input-y.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.function.bindings.process-in-0=order",
"--spring.cloud.stream.function.bindings.process-in-1=customer",
"--spring.cloud.stream.function.bindings.process-in-2=product",
"--spring.cloud.stream.function.bindings.process-out-0=enriched-order",
"--spring.cloud.stream.bindings.order.destination=orders",
"--spring.cloud.stream.bindings.customer.destination=customers",
"--spring.cloud.stream.bindings.product.destination=products",
"--spring.cloud.stream.bindings.enriched-order.destination=enriched-order",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=" +
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"StreamToGlobalKTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
@@ -176,18 +181,57 @@ public class StreamToGlobalKTableFunctionTests {
}
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Test
public void testTimeExtractor() throws Exception {
SpringApplication app = new SpringApplication(OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@Input("input-x")
GlobalKTable<?, ?> inputX();
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=forTimeExtractorTest",
"--spring.cloud.stream.bindings.forTimeExtractorTest-in-0.destination=orders",
"--spring.cloud.stream.bindings.forTimeExtractorTest-in-1.destination=customers",
"--spring.cloud.stream.bindings.forTimeExtractorTest-in-2.destination=products",
"--spring.cloud.stream.bindings.forTimeExtractorTest-out-0.destination=enriched-order",
"--spring.cloud.stream.kafka.streams.bindings.forTimeExtractorTest-in-0.consumer.timestampExtractorBeanName" +
"=timestampExtractor",
"--spring.cloud.stream.kafka.streams.bindings.forTimeExtractorTest-in-1.consumer.timestampExtractorBeanName" +
"=timestampExtractor",
"--spring.cloud.stream.kafka.streams.bindings.forTimeExtractorTest-in-2.consumer.timestampExtractorBeanName" +
"=timestampExtractor",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"testTimeExtractor-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
@Input("input-y")
GlobalKTable<?, ?> inputY();
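// Every input binding above points timestampExtractorBeanName at the timestampExtractor
// bean; the assertions below resolve that bean per binding and verify it is registered.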
final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties =
context.getBean(KafkaStreamsExtendedBindingProperties.class);
final Map<String, KafkaStreamsBindingProperties> bindings = kafkaStreamsExtendedBindingProperties.getBindings();
final KafkaStreamsBindingProperties kafkaStreamsBindingProperties0 = bindings.get("forTimeExtractorTest-in-0");
final String timestampExtractorBeanName0 = kafkaStreamsBindingProperties0.getConsumer().getTimestampExtractorBeanName();
final TimestampExtractor timestampExtractor0 = context.getBean(timestampExtractorBeanName0, TimestampExtractor.class);
assertThat(timestampExtractor0).isNotNull();
final KafkaStreamsBindingProperties kafkaStreamsBindingProperties1 = bindings.get("forTimeExtractorTest-in-1");
final String timestampExtractorBeanName1 = kafkaStreamsBindingProperties1.getConsumer().getTimestampExtractorBeanName();
final TimestampExtractor timestampExtractor1 = context.getBean(timestampExtractorBeanName1, TimestampExtractor.class);
assertThat(timestampExtractor1).isNotNull();
final KafkaStreamsBindingProperties kafkaStreamsBindingProperties2 = bindings.get("forTimeExtractorTest-in-2");
final String timestampExtractorBeanName2 = kafkaStreamsBindingProperties2.getConsumer().getTimestampExtractorBeanName();
final TimestampExtractor timestampExtractor2 = context.getBean(timestampExtractorBeanName2, TimestampExtractor.class);
assertThat(timestampExtractor2).isNotNull();
}
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {
@Bean
@@ -215,6 +259,20 @@ public class StreamToGlobalKTableFunctionTests {
)
);
}
@Bean
public Function<KStream<Long, Order>,
Function<KTable<Long, Customer>,
Function<GlobalKTable<Long, Product>, KStream<Long, Order>>>> forTimeExtractorTest() {
return orderStream ->
customers ->
products -> orderStream;
}
@Bean
public TimestampExtractor timestampExtractor() {
return new WallclockTimestampExtractor();
}
}
static class Order {


@@ -20,6 +20,10 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
@@ -33,21 +37,16 @@ import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -56,6 +55,7 @@ import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
@@ -63,12 +63,12 @@ public class StreamToTableJoinFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1,
true, "output-topic-1", "output-topic-2");
true, "output-topic-1", "output-topic-2", "user-clicks-2", "user-regions-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testStreamToTable() throws Exception {
public void testStreamToTable() {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@@ -82,21 +82,112 @@ public class StreamToTableJoinFunctionTests {
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
runTest(app, consumer);
}
@Test
public void testStreamToTableBiFunction() {
SpringApplication app = new SpringApplication(BiFunctionCountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
runTest(app, consumer);
}
@Test
public void testStreamToTableBiConsumer() throws Exception {
SpringApplication app = new SpringApplication(BiConsumerApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process1",
"--spring.cloud.stream.bindings.input-1.destination=user-clicks-1",
"--spring.cloud.stream.bindings.input-2.destination=user-regions-1",
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.applicationId" +
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=testStreamToTableBiConsumer",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
new KeyValue<>("alice", "asia")
);
Map<String, Object> senderProps1 = KafkaTestUtils.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-1");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 2: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L)
);
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-1");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
Assert.isTrue(BiConsumerApplication.latch.await(10, TimeUnit.SECONDS), "Failed to receive message");
}
finally {
consumer.close();
}
}
private void runTest(SpringApplication app, Consumer<String, Long> consumer) {
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
"--spring.cloud.stream.bindings.process-out-0.destination=output-topic-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
@@ -173,11 +264,11 @@ public class StreamToTableJoinFunctionTests {
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest() throws Exception {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
SpringApplication app = new SpringApplication(BiFunctionCountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-3",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
@@ -214,22 +305,22 @@ public class StreamToTableJoinFunctionTests {
template.sendDefault(keyValue.key, keyValue.value);
}
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input-1.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-2.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.bindings.input-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-2.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-2",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-2",
"--spring.cloud.stream.bindings.process-out-0.destination=output-topic-2",
"--spring.cloud.stream.bindings.process-in-0.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.process-in-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id" +
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id" +
"=StreamToTableJoinFunctionTests-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
@@ -298,8 +389,17 @@ public class StreamToTableJoinFunctionTests {
assertThat(count).isEqualTo(expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
// The following removal is a code smell. Check with Oleg to see why this is happening.
// Culprit is BinderFactoryAutoConfiguration line 300 with the following code:
// if (StringUtils.hasText(name)) {
//     ((StandardEnvironment) environment).getSystemProperties()
//         .putIfAbsent("spring.cloud.stream.function.definition", name);
// }
context.getEnvironment().getSystemProperties()
.remove("spring.cloud.stream.function.definition");
}
finally {
consumer.close();
}
}
@@ -333,50 +433,52 @@ public class StreamToTableJoinFunctionTests {
}
@EnableBinding(KStreamKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class CountClicksPerRegionApplication {
@Bean
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process1() {
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process() {
return userClicksStream -> (userRegionsTable -> (userClicksStream
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ?
"UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce((firstClicks, secondClicks) -> firstClicks + secondClicks)
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream()));
}
}
interface KStreamKTableProcessor {
/**
* Input binding.
*
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input-1")
KStream<?, ?> input1();
/**
* Input binding.
*
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input-2")
KTable<?, ?> input2();
/**
* Output binding.
*
* @return {@link Output} binding for {@link KStream} type.
*/
@Output("output")
KStream<?, ?> output();
@EnableAutoConfiguration
public static class BiFunctionCountClicksPerRegionApplication {
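// In the functional model, the BiFunction's KStream and KTable arguments bind to
// process-in-0 and process-in-1, and the returned KStream binds to process-out-0,
// matching the destinations configured in the tests above.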
@Bean
public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> process() {
return (userClicksStream, userRegionsTable) -> (userClicksStream
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ?
"UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream());
}
}
@EnableAutoConfiguration
public static class BiConsumerApplication {
static CountDownLatch latch = new CountDownLatch(2);
@Bean
public BiConsumer<KStream<String, Long>, KTable<String, String>> process() {
return (userClicksStream, userRegionsTable) -> {
userClicksStream.foreach((key, value) -> latch.countDown());
userRegionsTable.toStream().foreach((key, value) -> latch.countDown());
};
}
}
}


@@ -31,18 +31,18 @@ import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -66,11 +66,16 @@ import static org.mockito.Mockito.verify;
@RunWith(SpringRunner.class)
@ContextConfiguration
@DirtiesContext
@Ignore
public abstract class DeserializationErrorHandlerByKafkaTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "error.words.group", "error.word1.groupx", "error.word2.groupx");
"DeserializationErrorHandlerByKafkaTests-In",
"DeserializationErrorHandlerByKafkaTests-out",
"error.DeserializationErrorHandlerByKafkaTests-In.group",
"error.word1.groupx",
"error.word2.groupx");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -81,11 +86,9 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
public static void setUp() {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes",
embeddedKafka.getZookeeperConnectionString());
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
@@ -96,34 +99,34 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "DeserializationErrorHandlerByKafkaTests-out");
}
@AfterClass
public static void tearDown() {
consumer.close();
System.clearProperty("spring.cloud.stream.kafka.streams.binder.brokers");
System.clearProperty("server.port");
System.clearProperty("spring.jmx.enabled");
}
// @checkstyle:off
@SpringBootTest(properties = {
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq",
"spring.cloud.stream.bindings.input.group=group",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
// @checkstyle:on
public static class DeserializationByKafkaAndDlqTests
extends DeserializationErrorHandlerByKafkaTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
template.setDefaultTopic("DeserializationErrorHandlerByKafkaTests-In");
template.sendDefault(1, null, "foobar");
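// The String payload cannot be deserialized by the configured IntegerSerde, so with
// the sendToDlq handler the record is expected on the binder's default DLQ topic,
// error.<destination>.<group>, which is consumed and asserted below.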
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
@@ -131,11 +134,51 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1, "error.words.group");
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1, "error.DeserializationErrorHandlerByKafkaTests-In.group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.words.group");
assertThat(cr.value().equals("foobar")).isTrue();
"error.DeserializationErrorHandlerByKafkaTests-In.group");
assertThat(cr.value()).isEqualTo("foobar");
assertThat(cr.partition()).isEqualTo(0); // custom partition function
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
any(KStream.class));
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
}
}
@SpringBootTest(properties = {
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq",
"spring.cloud.stream.bindings.input.group=group",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByKafkaAndDlqPerBindingTests
extends DeserializationErrorHandlerByKafkaTests {
@Test
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("DeserializationErrorHandlerByKafkaTests-In");
template.sendDefault(1, null, "foobar");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1, "error.DeserializationErrorHandlerByKafkaTests-In.group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.DeserializationErrorHandlerByKafkaTests-In.group");
assertThat(cr.value()).isEqualTo("foobar");
assertThat(cr.partition()).isEqualTo(0); // custom partition function
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
@@ -145,11 +188,9 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
}
// @checkstyle:off
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=word1,word2",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq-multi-input",
//"spring.cloud.stream.kafka.streams.default.consumer.applicationId=deser-kafka-dlq-multi-input",
"spring.cloud.stream.bindings.input.group=groupx",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
@@ -159,7 +200,6 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
extends DeserializationErrorHandlerByKafkaTests {
@Test
@SuppressWarnings("unchecked")
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
@@ -180,14 +220,12 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.word1.groupx",
"error.word2.groupx");
// TODO: Investigate why the ordering matters below: i.e.
// if we consume from error.word1.groupx first, an exception is thrown.
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word2.groupx");
assertThat(cr1.value().equals("foobar")).isTrue();
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word1.groupx");
assertThat(cr2.value().equals("foobar")).isTrue();
assertThat(cr1.value()).isEqualTo("foobar");
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word2.groupx");
assertThat(cr2.value()).isEqualTo("foobar");
// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
@@ -200,12 +238,8 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@PropertySource("classpath:/org/springframework/cloud/stream/binder/kstream/integTest-1.properties")
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class WordCountProcessorApplication {
@Autowired
private TimeWindows timeWindows;
@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<Object, String> input) {
@@ -215,11 +249,16 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(timeWindows).count(Materialized.as("foo-WordCounts-x"))
.windowedBy(TimeWindows.of(5000)).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}
@Bean
public DlqPartitionFunction partitionFunction() {
return (group, rec, ex) -> 0;
}
}
}


@@ -22,9 +22,9 @@ import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -64,6 +64,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"foos",
"counts-id", "error.foos.foobar-group", "error.foos1.fooz-group",
"error.foos2.fooz-group");
@@ -79,16 +80,11 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
public static void setUp() throws Exception {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes",
embeddedKafka.getZookeeperConnectionString());
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foob", "false",
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("kafka-streams-dlq-tests", "false",
embeddedKafka);
// consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
// Deserializer.class.getName());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
@@ -99,6 +95,9 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
@AfterClass
public static void tearDown() {
consumer.close();
System.clearProperty("spring.cloud.stream.kafka.streams.binder.brokers");
System.clearProperty("server.port");
System.clearProperty("spring.jmx.enabled");
}
@SpringBootTest(properties = {
@@ -108,27 +107,25 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
// "spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
// + "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTests",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqPartitions=1",
"spring.cloud.stream.bindings.input.group=foobar-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqTests
extends DeserializtionErrorHandlerByBinderTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("hello");
template.sendDefault(1, 7, "hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
@@ -141,13 +138,60 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos.foobar-group");
assertThat(cr.value().equals("hello")).isTrue();
assertThat(cr.value()).isEqualTo("hello");
assertThat(cr.partition()).isEqualTo(0);
// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.bindings.input.destination=foos",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.deserializationExceptionHandler=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTests",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqPartitions=1",
"spring.cloud.stream.bindings.input.group=foobar-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqSetOnConsumerBindingTests
extends DeserializtionErrorHandlerByBinderTests {
@Test
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault(1, 7, "hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
Consumer<String, String> consumer1 = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer1,
"error.foos.foobar-group");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos.foobar-group");
assertThat(cr.value()).isEqualTo("hello");
assertThat(cr.partition()).isEqualTo(0);
// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
any(KStream.class));
}
}
@SpringBootTest(properties = {
@@ -160,8 +204,6 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
// "spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
// + "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTestsWithMultipleInputs",
@@ -171,7 +213,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
@@ -215,7 +257,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
public KStream<Integer, Long> process(KStream<Object, Product> input) {
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
.groupByKey(Grouped.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("id-count-store-x")).toStream()


@@ -0,0 +1,112 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Michael Stoettinger
*/
public class KafkaStreamsBinderDestinationIsPatternTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"in.1", "in.2", "out");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static org.apache.kafka.clients.consumer.Consumer<Integer, String> consumer;
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "true",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "out");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void test() {
SpringApplication app = new SpringApplication(ConsumingApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.cloud.stream.bindings.process-out-0.destination=out",
"--spring.cloud.stream.bindings.process-in-0.destination=in.*",
"--spring.cloud.stream.bindings.process-in-0.consumer.use-native-decoding=false",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.destinationIsPattern=true",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
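// destinationIsPattern=true makes the binder subscribe to "in.*" as a regex pattern,
// so records sent to both in.1 and in.2 should reach the single output topic.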
try {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> producerFactory = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(producerFactory, true);
// send message to both topics that fit the pattern
template.send("in.1", "foo1");
assertThat(KafkaTestUtils.getSingleRecord(consumer, "out").value())
.isEqualTo("foo1");
template.send("in.2", "foo2");
assertThat(KafkaTestUtils.getSingleRecord(consumer, "out").value())
.isEqualTo("foo2");
}
finally {
context.close();
}
}
@EnableAutoConfiguration
public static class ConsumingApplication {
@Bean
public Function<KStream<Integer, String>, KStream<Integer, String>> process() {
return input -> input;
}
}
}


@@ -34,17 +34,16 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.actuate.health.CompositeHealthContributor;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderHealthIndicator;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -78,7 +77,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
@Test
public void healthIndicatorUpTest() throws Exception {
try (ConfigurableApplicationContext context = singleStream()) {
try (ConfigurableApplicationContext context = singleStream("ApplicationHealthTest-xyz")) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in", "{\"id\":\"123\"}")),
@@ -86,9 +85,22 @@ public class KafkaStreamsBinderHealthIndicatorTests {
}
}
@Test
public void healthIndicatorUpMultipleCallsTest() throws Exception {
try (ConfigurableApplicationContext context = singleStream("ApplicationHealthTest-xyz")) {
int callsToPerform = 5;
for (int i = 0; i < callsToPerform; i++) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in", "{\"id\":\"123\"}")),
Status.UP, "out");
}
}
}
@Test
public void healthIndicatorDownTest() throws Exception {
try (ConfigurableApplicationContext context = singleStream()) {
try (ConfigurableApplicationContext context = singleStream("ApplicationHealthTest-xyzabc")) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in", "{\"id\":\"124\"}")),
@@ -116,23 +128,14 @@ public class KafkaStreamsBinderHealthIndicatorTests {
}
}
private static Status getStatusKStream(Map<String, Object> details) {
Health health = (Health) details.get("kstream");
return health != null ? health.getStatus() : Status.DOWN;
}
private static boolean waitFor(Map<String, Object> details) {
Health health = (Health) details.get("kstream");
if (health.getStatus() == Status.UP) {
Map<String, Object> moreDetails = health.getDetails();
Health kStreamHealth = (Health) moreDetails
.get("kafkaStreamsBinderHealthIndicator");
String status = (String) kStreamHealth.getDetails().get("threadState");
return status != null
&& (status.equalsIgnoreCase(KafkaStreams.State.REBALANCING.name())
|| status.equalsIgnoreCase("PARTITIONS_REVOKED")
|| status.equalsIgnoreCase("PARTITIONS_ASSIGNED")
|| status.equalsIgnoreCase(
private static boolean waitFor(Status status, Map<String, Object> details) {
if (status == Status.UP) {
String threadState = (String) details.get("threadState");
return threadState != null
&& (threadState.equalsIgnoreCase(KafkaStreams.State.REBALANCING.name())
|| threadState.equalsIgnoreCase("PARTITIONS_REVOKED")
|| threadState.equalsIgnoreCase("PARTITIONS_ASSIGNED")
|| threadState.equalsIgnoreCase(
KafkaStreams.State.PENDING_SHUTDOWN.name()));
}
return false;
@@ -185,18 +188,18 @@ public class KafkaStreamsBinderHealthIndicatorTests {
private static void checkHealth(ConfigurableApplicationContext context,
Status expected) throws InterruptedException {
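// Looks up the kstream contributor from the composite bindersHealthContributor
// (which replaces the former bindersHealthIndicator bean) and polls its health until
// the streams threads leave transitional states, then asserts the expected status.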
HealthIndicator healthIndicator = context.getBean("bindersHealthIndicator",
HealthIndicator.class);
Health health = healthIndicator.health();
while (waitFor(health.getDetails())) {
CompositeHealthContributor healthIndicator = context
.getBean("bindersHealthContributor", CompositeHealthContributor.class);
KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator = (KafkaStreamsBinderHealthIndicator) healthIndicator.getContributor("kstream");
Health health = kafkaStreamsBinderHealthIndicator.health();
while (waitFor(health.getStatus(), health.getDetails())) {
TimeUnit.SECONDS.sleep(2);
health = healthIndicator.health();
health = kafkaStreamsBinderHealthIndicator.health();
}
assertThat(health.getStatus()).isEqualTo(expected);
assertThat(getStatusKStream(health.getDetails())).isEqualTo(expected);
}
private ConfigurableApplicationContext singleStream() {
private ConfigurableApplicationContext singleStream(String applicationId) {
SpringApplication app = new SpringApplication(KStreamApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
return app.run("--server.port=0", "--spring.jmx.enabled=false",
@@ -207,20 +210,10 @@ public class KafkaStreamsBinderHealthIndicatorTests {
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
// + "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "ApplicationHealthTest-xyz",
+ applicationId,
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
}
private ConfigurableApplicationContext multipleStream() {
@@ -237,37 +230,16 @@ public class KafkaStreamsBinderHealthIndicatorTests {
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
// + "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output2.producer.keySerde="
// + "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
//
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output2.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output2.producer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "ApplicationHealthTest-xyz",
"--spring.cloud.stream.kafka.streams.bindings.input2.consumer.applicationId="
+ "ApplicationHealthTest2-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class KStreamApplication {
@StreamListener("input")
@@ -285,7 +257,6 @@ public class KafkaStreamsBinderHealthIndicatorTests {
@EnableBinding({ KafkaStreamsProcessor.class, KafkaStreamsProcessorX.class })
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class AnotherKStreamApplication {
@StreamListener("input")

View File
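Note: the health-indicator test diff above replaces the old bindersHealthIndicator HealthIndicator bean with Spring Boot's CompositeHealthContributor and parameterizes singleStream(...) with an application id. A minimal sketch of the new lookup, using the bean and contributor names shown in the diff; the surrounding test class and the waitFor(...) helper are assumed:

import java.util.concurrent.TimeUnit;
import org.springframework.boot.actuate.health.CompositeHealthContributor;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.Status;
import org.springframework.context.ConfigurableApplicationContext;

private static void checkHealth(ConfigurableApplicationContext context, Status expected)
        throws InterruptedException {
    CompositeHealthContributor contributors = context
            .getBean("bindersHealthContributor", CompositeHealthContributor.class);
    KafkaStreamsBinderHealthIndicator indicator =
            (KafkaStreamsBinderHealthIndicator) contributors.getContributor("kstream");
    Health health = indicator.health();
    // Keep polling while the Streams threads are still rebalancing, revoking/assigning
    // partitions, or pending shutdown (see waitFor above).
    while (waitFor(health.getStatus(), health.getDetails())) {
        TimeUnit.SECONDS.sleep(2);
        health = indicator.health();
    }
    assertThat(health.getStatus()).isEqualTo(expected);
}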

@@ -37,12 +37,10 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -113,18 +111,16 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountProcessorApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
receiveAndValidate();
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context)
private void receiveAndValidate()
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
@@ -150,7 +146,6 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
@StreamListener

View File

@@ -69,8 +69,6 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id",
"false", embeddedKafka);
// consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
// Deserializer.class.getName());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put("value.deserializer", LongDeserializer.class);
DefaultKafkaConsumerFactory<Integer, Long> cf = new DefaultKafkaConsumerFactory<>(
@@ -100,19 +98,16 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidateFoo(context);
receiveAndValidateFoo();
}
finally {
context.close();
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
private void receiveAndValidateFoo() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);

View File

@@ -40,16 +40,13 @@ import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.test.util.TestUtils;
@@ -74,7 +71,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
"counts", "counts-1");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -82,14 +79,14 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1");
}
@AfterClass
@@ -108,7 +105,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -116,11 +113,9 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
receiveAndValidate(context);
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
}
}
@@ -133,9 +128,9 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=basic-word-count",
"--spring.cloud.stream.bindings.input.destination=words-1",
"--spring.cloud.stream.bindings.output.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -146,10 +141,8 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.bindings.input.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
receiveAndValidate(context);
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-WordCountProcessorApplication-process", StreamsBuilderFactoryBean.class);
@@ -176,17 +169,16 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
}
}
private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.setDefaultTopic(in);
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
out);
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
finally {
@@ -200,7 +192,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.setDefaultTopic("words-1");
template.sendDefault(null);
ConsumerRecords<String, String> received = consumer
.poll(Duration.ofMillis(5000));
@@ -216,12 +208,8 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
@Autowired
private TimeWindows timeWindows;
@StreamListener
@SendTo("output")
public KStream<?, WordCount> process(
@@ -232,7 +220,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(timeWindows).count(Materialized.as("foo-WordCounts"))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,

View File
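Note: in the word-count integration test above, the @Autowired TimeWindows field (previously driven by the spring.cloud.stream.kafka.streams.timeWindow.* properties) is replaced by an inline TimeWindows.of(Duration.ofSeconds(5)). A condensed sketch of the resulting processor body, assembled from the fragments in the diff; the final mapping to WordCount is omitted for brevity:

import java.time.Duration;
import java.util.Arrays;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;

public KStream<?, ?> process(KStream<Object, String> input) {
    return input
            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
            .map((key, value) -> new KeyValue<>(value, value))
            .groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
            // Window is now created inline instead of being injected as a bean.
            .windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
            .count(Materialized.as("foo-WordCounts"))
            .toStream();
}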

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
@@ -34,16 +35,12 @@ import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -54,6 +51,7 @@ import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.StopWatch;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
@@ -70,7 +68,7 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
"decode-counts", "decode-counts-1");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -84,9 +82,6 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
public static void setUp() {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.streams.binder.zkNodes",
embeddedKafka.getZookeeperConnectionString());
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
@@ -96,16 +91,20 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "decode-counts", "decode-counts-1");
}
@AfterClass
public static void tearDown() {
consumer.close();
System.clearProperty("spring.cloud.stream.kafka.streams.binder.brokers");
System.clearProperty("server.port");
System.clearProperty("spring.jmx.enabled");
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.destination=decode-words-1",
"spring.cloud.stream.bindings.output.destination=decode-counts-1",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-abc" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class NativeEncodingDecodingEnabledTests
@@ -117,10 +116,10 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.setDefaultTopic("decode-words-1");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
"decode-counts-1");
assertThat(cr.value().equals("Count for foobar : 1")).isTrue();
verify(conversionDelegate, never()).serializeOnOutbound(any(KStream.class));
@@ -130,26 +129,31 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
}
// @checkstyle:off
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.cloud.stream.bindings.input.destination=decode-words",
"spring.cloud.stream.bindings.output.destination=decode-counts",
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-xyz" })
// @checkstyle:on
"spring.cloud.stream.kafka.streams.bindings.input3.consumer.applicationId"
+ "=hello-NativeEncodingDecodingEnabledTests-xyz" })
public static class NativeEncodingDecodingDisabledTests
extends KafkaStreamsNativeEncodingDecodingTests {
@Test
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.setDefaultTopic("decode-words");
template.sendDefault("foobar");
StopWatch stopWatch = new StopWatch();
stopWatch.start();
System.out.println("Starting: ");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
"decode-counts");
stopWatch.stop();
System.out.println("Total time: " + stopWatch.getTotalTimeSeconds());
assertThat(cr.value().equals("Count for foobar : 1")).isTrue();
verify(conversionDelegate).serializeOnOutbound(any(KStream.class));
@@ -161,13 +165,8 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@PropertySource("classpath:/org/springframework/cloud/stream/binder/kstream/integTest-1.properties")
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class WordCountProcessorApplication {
@Autowired
private TimeWindows timeWindows;
@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<Object, String> input) {
@@ -177,7 +176,7 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(timeWindows).count(Materialized.as("foo-WordCounts-x"))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}

View File
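Note: across these diffs the spring.cloud.stream.kafka.streams.binder.zkNodes property is dropped everywhere; the binder now only needs the Kafka broker list. A minimal sketch of starting one of these test applications with the trimmed configuration (the class, topics, and embeddedKafka reference are taken from the diffs above):

SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
        "--server.port=0",
        "--spring.jmx.enabled=false",
        "--spring.cloud.stream.bindings.input.destination=words",
        "--spring.cloud.stream.bindings.output.destination=counts",
        // ZooKeeper is no longer configured; only the broker list is required.
        "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
    // run assertions against the context
}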

@@ -18,9 +18,12 @@ package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;
import org.junit.ClassRule;
import org.junit.Test;
@@ -34,12 +37,14 @@ import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static junit.framework.TestCase.fail;
import static org.assertj.core.api.Assertions.assertThat;
/**
@@ -70,12 +75,10 @@ public class KafkaStreamsStateStoreIntegrationTests {
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=KafkaStreamsStateStoreIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
Thread.sleep(2000);
receiveAndValidateFoo(context);
receiveAndValidateFoo(context, ProductCountApplication.class);
}
catch (Exception e) {
throw e;
@@ -86,29 +89,51 @@ public class KafkaStreamsStateStoreIntegrationTests {
}
@Test
public void testSameStateStoreIsCreatedOnlyOnceWhenMultipleInputBindingsArePresent() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplicationWithMultipleInputBindings.class);
public void testKstreamStateStoreBuilderBeansDefinedInApplication() throws Exception {
SpringApplication app = new SpringApplication(StateStoreBeanApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=foobar",
"--spring.cloud.stream.bindings.input3.destination=foobar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input3.consumer.applicationId"
+ "=KafkaStreamsStateStoreIntegrationTests-xyzabc-123",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
Thread.sleep(2000);
receiveAndValidateFoo(context, StateStoreBeanApplication.class);
}
catch (Exception e) {
throw e;
}
finally {
context.close();
}
}
@Test
public void testSameStateStoreIsCreatedOnlyOnceWhenMultipleInputBindingsArePresent() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplicationWithMultipleInputBindings.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input1.destination=foobar",
"--spring.cloud.stream.bindings.input2.destination=hello-foobar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input1.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input1.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsStateStoreIntegrationTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsStateStoreIntegrationTests.Product",
"--spring.cloud.stream.kafka.streams.bindings.input1.consumer.applicationId"
+ "=KafkaStreamsStateStoreIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
Thread.sleep(2000);
// We are not particularly interested in querying the state store here, as that is verified by the other test
@@ -125,7 +150,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
private void receiveAndValidateFoo(ConfigurableApplicationContext context, Class<?> clazz)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
@@ -136,13 +161,28 @@ public class KafkaStreamsStateStoreIntegrationTests {
Thread.sleep(1000);
// assertions
ProductCountApplication productCount = context
.getBean(ProductCountApplication.class);
WindowStore<Object, String> state = productCount.state;
assertThat(state != null).isTrue();
assertThat(state.name()).isEqualTo("mystate");
assertThat(state.persistent()).isTrue();
assertThat(productCount.processed).isTrue();
if (clazz.isAssignableFrom(ProductCountApplication.class)) {
ProductCountApplication productCount = context
.getBean(ProductCountApplication.class);
WindowStore<Object, String> state = productCount.state;
assertThat(state != null).isTrue();
assertThat(state.name()).isEqualTo("mystate");
assertThat(state.persistent()).isTrue();
assertThat(productCount.processed).isTrue();
}
else if (clazz.isAssignableFrom(StateStoreBeanApplication.class)) {
StateStoreBeanApplication productCount = context
.getBean(StateStoreBeanApplication.class);
WindowStore<Object, String> state = productCount.state;
assertThat(state != null).isTrue();
assertThat(state.name()).isEqualTo("mystate");
assertThat(state.persistent()).isTrue();
assertThat(productCount.processed).isTrue();
}
else {
fail("Expected assertiond did not happen");
}
}
@EnableBinding(KafkaStreamsProcessorX.class)
@@ -154,7 +194,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
boolean processed;
@StreamListener("input")
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000, retentionMs = 300000)
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(KStream<Object, Product> input) {
@@ -180,6 +220,49 @@ public class KafkaStreamsStateStoreIntegrationTests {
}
}
@EnableBinding(KafkaStreamsProcessorZ.class)
@EnableAutoConfiguration
public static class StateStoreBeanApplication {
WindowStore<Object, String> state;
boolean processed;
@StreamListener("input3")
@SuppressWarnings({"unchecked" })
public void process(KStream<Object, Product> input) {
input.process(() -> new Processor<Object, Product>() {
@Override
public void init(ProcessorContext processorContext) {
state = (WindowStore) processorContext.getStateStore("mystate");
}
@Override
public void process(Object s, Product product) {
processed = true;
}
@Override
public void close() {
if (state != null) {
state.close();
}
}
}, "mystate");
}
@Bean
public StoreBuilder mystore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("mystate",
3L, 3, 3L, false), Serdes.String(),
Serdes.String());
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class ProductCountApplicationWithMultipleInputBindings {
@@ -189,7 +272,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
boolean processed;
@StreamListener
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000, retentionMs = 300000)
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(@Input("input1")KStream<Object, Product> input, @Input("input2")KStream<Object, Product> input2) {
@@ -246,4 +329,10 @@ public class KafkaStreamsStateStoreIntegrationTests {
@Input("input2")
KStream<?, ?> input2();
}
interface KafkaStreamsProcessorZ {
@Input("input3")
KStream<?, ?> input3();
}
}

View File
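Note: the new testKstreamStateStoreBuilderBeansDefinedInApplication test above verifies that a StoreBuilder bean declared in the application is registered with the Kafka Streams topology, as an alternative to the @KafkaStreamsStateStore annotation. A condensed sketch mirroring StateStoreBeanApplication from the diff, with String values instead of the Product POJO for brevity; the long-based Stores.persistentWindowStore overload is kept as in the test:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.annotation.Bean;

public static class StateStoreBeanSketch {

    WindowStore<Object, String> state;

    @StreamListener("input3")
    @SuppressWarnings("unchecked")
    public void process(KStream<Object, String> input) {
        // Attach a processor that looks up the store contributed by the bean below.
        input.process(() -> new Processor<Object, String>() {
            @Override
            public void init(ProcessorContext context) {
                state = (WindowStore<Object, String>) context.getStateStore("mystate");
            }
            @Override
            public void process(Object key, String value) {
            }
            @Override
            public void close() {
            }
        }, "mystate");
    }

    @Bean
    public StoreBuilder mystore() {
        // Same overload as the test: name, retention, segments, window size, retain duplicates.
        return Stores.windowStoreBuilder(
                Stores.persistentWindowStore("mystate", 3L, 3, 3L, false),
                Serdes.String(), Serdes.String());
    }
}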

@@ -99,11 +99,9 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=ProductCountApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidateFoo(context);
receiveAndValidateFoo();
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-ProductCountApplication-process", StreamsBuilderFactoryBean.class);
@@ -117,8 +115,7 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
}
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
private void receiveAndValidateFoo() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);

View File

@@ -23,11 +23,9 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
@@ -36,7 +34,7 @@ import org.springframework.stereotype.Component;
import static org.assertj.core.api.Assertions.assertThat;
public class MultiProcessorsWithSameNameTests {
public class MultiProcessorsWithSameNameAndBindingTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
@@ -46,23 +44,19 @@ public class MultiProcessorsWithSameNameTests {
.getEmbeddedKafka();
@Test
public void testBinderStartsSuccessfullyWhenTwoProcessorsWithSameNamesArePresent() {
public void testBinderStartsSuccessfullyWhenTwoProcessorsWithSameNamesAndBindingsPresent() {
SpringApplication app = new SpringApplication(
MultiProcessorsWithSameNameTests.WordCountProcessorApplication.class);
MultiProcessorsWithSameNameAndBindingTests.WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.input-2.destination=words",
"--spring.cloud.stream.bindings.input-1.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
+ embeddedKafka.getBrokersAsString())) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean1 = context
.getBean("&stream-builder-Foo-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean1).isNotNull();
@@ -74,7 +68,6 @@ public class MultiProcessorsWithSameNameTests {
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
@Component
@@ -88,7 +81,7 @@ public class MultiProcessorsWithSameNameTests {
@Component
static class Bar {
@StreamListener
public void process(@Input("input-2") KStream<Object, String> input) {
public void process(@Input("input-1") KStream<Object, String> input) {
}
}
}
@@ -98,8 +91,5 @@ public class MultiProcessorsWithSameNameTests {
@Input("input-1")
KStream<?, ?> input1();
@Input("input-2")
KStream<?, ?> input2();
}
}

View File
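Note: the renamed MultiProcessorsWithSameNameAndBindingTests above relies on the binder registering one StreamsBuilderFactoryBean per StreamListener processor; the bean names in this and the earlier word-count diff suggest a stream-builder-<declaring class>-<method> convention. A small sketch of retrieving such a factory bean from the started context (the leading "&" addresses the FactoryBean itself rather than the object it produces):

import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import static org.assertj.core.api.Assertions.assertThat;

// context is the ConfigurableApplicationContext returned by app.run(...)
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
        .getBean("&stream-builder-Foo-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean).isNotNull();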

@@ -0,0 +1,146 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class OutboundValueNullSkippedConversionTest {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(
OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class OutboundNullApplication {
@StreamListener
@SendTo("output")
public KStream<?, KafkaStreamsBinderWordCountIntegrationTests.WordCount> process(
@Input("input") KStream<Object, String> input) {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}

View File

@@ -32,18 +32,19 @@ import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.annotation.StreamMessageConverter;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.integration.utils.TestAvroSerializer;
import org.springframework.cloud.stream.schema.avro.AvroSchemaMessageConverter;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -60,6 +61,7 @@ import org.springframework.util.MimeTypeUtils;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
@@ -96,6 +98,7 @@ public class PerRecordAvroContentTypeTests {
}
@Test
@Ignore
public void testPerRecordAvroConentTypeAndVerifySerialization() throws Exception {
SpringApplication app = new SpringApplication(SensorCountAvroApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@@ -174,9 +177,8 @@ public class PerRecordAvroContentTypeTests {
}
@Bean
@StreamMessageConverter
public MessageConverter sensorMessageConverter() throws IOException {
return new AvroSchemaMessageConverter();
return new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
}
}

View File

@@ -37,7 +37,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
@@ -47,7 +46,6 @@ import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -98,9 +96,7 @@ public class StreamToGlobalKTableJoinIntegrationTests {
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
@@ -256,7 +252,6 @@ public class StreamToGlobalKTableJoinIntegrationTests {
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {
@StreamListener

View File

@@ -20,6 +20,7 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -32,6 +33,7 @@ import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
@@ -42,7 +44,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
@@ -52,9 +53,11 @@ import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -72,7 +75,7 @@ public class StreamToTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"output-topic-1", "output-topic-2");
"output-topic-1", "output-topic-2", "user-clicks-2", "user-regions-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -109,10 +112,9 @@ public class StreamToTableJoinIntegrationTests {
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
@@ -128,6 +130,16 @@ public class StreamToTableJoinIntegrationTests {
assertThat(cleanupPolicyX).isEqualTo("compact");
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("output");
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyOutput).isEqualTo("compact");
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
@@ -261,9 +273,7 @@ public class StreamToTableJoinIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=helloxyz-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
+ embeddedKafka.getBrokersAsString())) {
Thread.sleep(1000L);
// Input 2: Region per user (multiple records allowed per user).
@@ -348,9 +358,22 @@ public class StreamToTableJoinIntegrationTests {
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class CountClicksPerRegionApplication {
@StreamListener
@@ -367,10 +390,16 @@ public class StreamToTableJoinIntegrationTests {
.map((user, regionWithClicks) -> new KeyValue<>(
regionWithClicks.getRegion(), regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce((firstClicks, secondClicks) -> firstClicks + secondClicks)
.reduce(Long::sum)
.toStream();
}
//This forces the state stores to be cleaned up before running the test.
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(true, false);
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@@ -427,4 +456,37 @@ public class StreamToTableJoinIntegrationTests {
}
interface BindingsForTwoKStreamJoinTest {
String INPUT_1 = "input_1";
String INPUT_2 = "input_2";
@Input(INPUT_1)
KStream<String, String> input_1();
@Input(INPUT_2)
KStream<String, String> input_2();
}
@EnableBinding(BindingsForTwoKStreamJoinTest.class)
@EnableAutoConfiguration
public static class JoinProcessor {
@StreamListener
public void testProcessor(
@Input(BindingsForTwoKStreamJoinTest.INPUT_1) KStream<String, String> input1Stream,
@Input(BindingsForTwoKStreamJoinTest.INPUT_2) KStream<String, String> input2Stream) {
input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
Joined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
}

View File
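Note: the new testTwoKStreamsCanBeJoined test and JoinProcessor above exercise a join over two KStream inputs (issue 701); the test only checks that the application starts without errors. A condensed sketch of the join itself, as written in the diff (the millisecond-based JoinWindows.of(long) overload is the one used there):

import java.util.concurrent.TimeUnit;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;

// input1Stream and input2Stream are the two bound KStream<String, String> inputs.
public void join(KStream<String, String> input1Stream, KStream<String, String> input2Stream) {
    input1Stream.join(input2Stream,
            (event1, event2) -> null,                     // ValueJoiner; only the wiring is under test
            JoinWindows.of(TimeUnit.MINUTES.toMillis(5)), // 5-minute join window
            Joined.with(Serdes.String(), Serdes.String(), Serdes.String()));
}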

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
@@ -33,16 +34,13 @@ import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -108,9 +106,7 @@ public class WordCountMultipleBranchesIntegrationTests {
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
@@ -145,12 +141,8 @@ public class WordCountMultipleBranchesIntegrationTests {
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class WordCountProcessorApplication {
@Autowired
private TimeWindows timeWindows;
@StreamListener("input")
@SendTo({ "output1", "output2", "output3" })
@SuppressWarnings("unchecked")
@@ -163,7 +155,7 @@ public class WordCountMultipleBranchesIntegrationTests {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value).windowedBy(timeWindows)
.groupBy((key, value) -> value).windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-multi")).toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,

View File

@@ -21,7 +21,8 @@ import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.cloud.stream.schema.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
@@ -44,7 +45,7 @@ public class TestAvroSerializer<S> implements Serializer<S> {
@Override
public byte[] serialize(String topic, S data) {
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter();
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
Message<?> message = MessageBuilder.withPayload(data).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, "application/avro");

View File
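Note: the Avro-related diffs above move AvroSchemaMessageConverter to the spring.cloud.schema.registry.avro package and switch to the constructor that takes an AvroSchemaServiceManager. A minimal sketch of the new-style construction as TestAvroSerializer now does it (the payload variable is illustrative):

import java.util.HashMap;
import java.util.Map;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;

// The service manager is now passed in explicitly instead of using a no-arg constructor.
AvroSchemaMessageConverter converter =
        new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
Message<?> message = MessageBuilder.withPayload(payload).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
headers.put(MessageHeaders.CONTENT_TYPE, "application/avro");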

@@ -0,0 +1,89 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import org.junit.Test;
import static org.assertj.core.api.Assertions.assertThat;
/**
*
* @author Soby Chacko
*/
public class CollectionSerdeTest {
@Test
public void testCollectionsSerde() {
Foo foo1 = new Foo();
foo1.setData("data-1");
foo1.setNum(1);
Foo foo2 = new Foo();
foo2.setData("data-2");
foo2.setNum(2);
List<Foo> foos = new ArrayList<>();
foos.add(foo1);
foos.add(foo2);
CollectionSerde<Foo> collectionSerde = new CollectionSerde<>(Foo.class, ArrayList.class);
byte[] serialized = collectionSerde.serializer().serialize("", foos);
Collection<Foo> deserialized = collectionSerde.deserializer().deserialize("", serialized);
Iterator<Foo> iterator = deserialized.iterator();
Foo foo1Retrieved = iterator.next();
assertThat(foo1Retrieved.getData()).isEqualTo("data-1");
assertThat(foo1Retrieved.getNum()).isEqualTo(1);
Foo foo2Retrieved = iterator.next();
assertThat(foo2Retrieved.getData()).isEqualTo("data-2");
assertThat(foo2Retrieved.getNum()).isEqualTo(2);
}
static class Foo {
private int num;
private String data;
Foo() {
}
public int getNum() {
return num;
}
public void setNum(int num) {
this.num = num;
}
public String getData() {
return data;
}
public void setData(String data) {
this.data = data;
}
}
}
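
Beyond the round trip exercised in this new test, the natural home for a CollectionSerde is as the value serde of an aggregated state store. A rough sketch of that usage, assuming a Foo POJO like the one above, that CollectionSerde<E> implements Serde<Collection<E>> as the test's assignment suggests, and that default key/value serdes cover the groupByKey() step; the class, method, and store names are illustrative:

import java.util.ArrayList;
import java.util.Collection;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

import org.springframework.cloud.stream.binder.kafka.streams.serde.CollectionSerde;

class CollectionSerdeUsageSketch {

    // Collects all Foo events per key into an ArrayList-backed state store,
    // letting CollectionSerde handle (de)serialization of the stored collection.
    KTable<String, Collection<Foo>> collectPerKey(KStream<String, Foo> events) {
        CollectionSerde<Foo> serde = new CollectionSerde<>(Foo.class, ArrayList.class);
        return events
                .groupByKey()
                .aggregate(ArrayList::new,
                        (key, value, aggregate) -> { aggregate.add(value); return aggregate; },
                        Materialized.with(Serdes.String(), serde));
    }
}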

View File

@@ -25,23 +25,26 @@ import java.util.UUID;
import com.example.Sensor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.cloud.stream.schema.avro.AvroSchemaMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import static org.assertj.core.api.Assertions.assertThat;
/**
* Refer {@link CompositeNonNativeSerde} for motivations.
* Refer {@link MessageConverterDelegateSerde} for motivations.
*
* @author Soby Chacko
*/
public class CompositeNonNativeSerdeTest {
public class MessageConverterDelegateSerdeTest {
@Test
@SuppressWarnings("unchecked")
@Ignore
public void testCompositeNonNativeSerdeUsingAvroContentType() {
Random random = new Random();
Sensor sensor = new Sensor();
@@ -51,20 +54,20 @@ public class CompositeNonNativeSerdeTest {
sensor.setTemperature(random.nextFloat() * 50);
List<MessageConverter> messageConverters = new ArrayList<>();
messageConverters.add(new AvroSchemaMessageConverter());
messageConverters.add(new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl()));
CompositeMessageConverterFactory compositeMessageConverterFactory = new CompositeMessageConverterFactory(
messageConverters, new ObjectMapper());
CompositeNonNativeSerde compositeNonNativeSerde = new CompositeNonNativeSerde(
compositeMessageConverterFactory);
MessageConverterDelegateSerde messageConverterDelegateSerde = new MessageConverterDelegateSerde(
compositeMessageConverterFactory.getMessageConverterForAllRegistered());
Map<String, Object> configs = new HashMap<>();
configs.put("valueClass", Sensor.class);
configs.put("contentType", "application/avro");
compositeNonNativeSerde.configure(configs, false);
final byte[] serialized = compositeNonNativeSerde.serializer().serialize(null,
messageConverterDelegateSerde.configure(configs, false);
final byte[] serialized = messageConverterDelegateSerde.serializer().serialize(null,
sensor);
final Object deserialized = compositeNonNativeSerde.deserializer()
final Object deserialized = messageConverterDelegateSerde.deserializer()
.deserialize(null, serialized);
assertThat(deserialized).isEqualTo(sensor);
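
The rename from CompositeNonNativeSerde to MessageConverterDelegateSerde also changes construction: the serde now takes the composite MessageConverter itself rather than the factory. A condensed sketch of the setup the updated test performs, reusable for any target class and content type; it assumes the serde is visible from the test's package, since no import for it is added above:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.converter.MessageConverter;

class DelegateSerdeSketch {

    // Builds the serde the way the updated test does: the composite converter is
    // obtained from the factory, and the target class plus content type are
    // handed over through configure(..).
    MessageConverterDelegateSerde delegateSerdeFor(Class<?> valueClass, String contentType) {
        List<MessageConverter> converters = new ArrayList<>();
        converters.add(new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl()));
        CompositeMessageConverterFactory factory =
                new CompositeMessageConverterFactory(converters, new ObjectMapper());
        MessageConverterDelegateSerde serde = new MessageConverterDelegateSerde(
                factory.getMessageConverterForAllRegistered());
        Map<String, Object> configs = new HashMap<>();
        configs.put("valueClass", valueClass);   // type the deserializer should produce
        configs.put("contentType", contentType); // e.g. "application/avro" as in the test
        serde.configure(configs, false);
        return serde;
    }
}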

View File

@@ -4,6 +4,7 @@
<pattern>%d{ISO8601} %5p %t %c{2}:%L - %m%n</pattern>
</encoder>
</appender>
<!-- <logger name="org.apache.kafka" level="DEBUG"/>-->
<logger name="org.springframework.integration.kafka" level="INFO"/>
<logger name="org.springframework.kafka" level="INFO"/>
<logger name="org.springframework.cloud.stream" level="INFO" />

View File

@@ -1,8 +1,6 @@
spring.cloud.stream.bindings.input.destination=words
spring.cloud.stream.bindings.output.destination=counts
spring.cloud.stream.bindings.input.destination=DeserializationErrorHandlerByKafkaTests-In
spring.cloud.stream.bindings.output.destination=DeserializationErrorHandlerByKafkaTests-Out
spring.cloud.stream.bindings.output.contentType=application/json
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.timeWindow.length=5000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0
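
Along with the renamed destinations, the timeWindow.length and timeWindow.advanceBy binder properties appear to drop out of this file, since window sizing now lives in the topology code. A rough sketch of the in-code equivalents, using the plain Kafka Streams API; the class and method names are illustrative:

import java.time.Duration;

import org.apache.kafka.streams.kstream.TimeWindows;

class WindowConfigSketch {

    // Tumbling window: the in-code counterpart of timeWindow.length=5000 with advanceBy=0.
    TimeWindows tumblingFiveSeconds() {
        return TimeWindows.of(Duration.ofMillis(5000));
    }

    // Hopping window: what a non-zero timeWindow.advanceBy used to express.
    TimeWindows hoppingFiveSecondsByOne() {
        return TimeWindows.of(Duration.ofMillis(5000)).advanceBy(Duration.ofMillis(1000));
    }
}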

Some files were not shown because too many files have changed in this diff.