Compare commits

...

62 Commits

Author SHA1 Message Date
buildmaster
fc4358ba10 Update SNAPSHOT to 3.2.1 2021-12-01 16:55:37 +00:00
buildmaster
f3d2287b70 Bumping versions to 3.2.1-SNAPSHOT after release 2021-12-01 13:16:23 +00:00
buildmaster
220ae98bcc Going back to snapshots 2021-12-01 13:16:23 +00:00
buildmaster
bd3eebd897 Update SNAPSHOT to 3.2.0 2021-12-01 13:14:14 +00:00
Soby Chacko
ed8683dcc2 KafkaStreams binder health check improvements
Allow health checks on KafkaStreams processors that are currently stopped through
the actuator bindings endpoint. Add this only as an opt-in feature through a new
binder-level property, includeStoppedProcessorsForHealthCheck, which is false by default
to preserve the current health indicator behavior.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
Resolves #1175
2021-12-01 10:51:39 +01:00
Soby Chacko
60b6604988 New tips-tricks-recipes section in docs
Migrate the recipe section in Spring Cloud Stream Samples
repository as Tips, Tricks and Recipes in the Kafka binder main docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1173
Resolves #1174
2021-12-01 10:36:31 +01:00
Oleg Zhurakousky
a3e76282b4 Merge pull request #1172 from sobychacko/fix-partitioning-interceptor
Fix PartitioningInterceptor CCE
2021-11-29 18:12:59 +01:00
Soby Chacko
c9687189b7 Fix PartitioningInterceptor CCE
The newly added DefaultPartitioningInterceptor must be explicitly
checked in order to avoid a CCE.

Related to resolving https://github.com/spring-cloud/spring-cloud-stream/issues/2245

Specifically for this: https://github.com/spring-cloud/spring-cloud-stream/issues/2245#issuecomment-977663452
2021-11-24 13:16:00 -05:00
Soby Chacko
5fcdf28776 GH-1170: Schema registry certificates
Move classpath: resources provided as schema registry certificates
into a local file system location.

Adding test and docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1170
2021-11-23 18:23:55 -05:00
Soby Chacko
d334359cd4 Update spring-kafka to 2.8.0 Release 2021-11-23 10:22:18 -05:00
Pommerening, Nico
aff0dc00ef GH-1161: InteractiveQueryService improvements
This PR safeguards state store instances in case there are multiple KafkaStreams
instances present that have distinct application IDs but share state store names.

The change is backwards compatible: in case no KafkaStreams association for the thread
can be found, all local state stores are queried as before.

In case an associated KafkaStreams instance is found but the required StateStore is
not found in that instance, a warning is issued and backwards compatibility is
preserved by looking up all state stores.

A store within the KafkaStreams instance of the thread is preferred over a "foreign" store with the same name.

A warning is issued if the requested store is not found within the KafkaStreams instance of the thread.

The main benefit here is to get rid of randomly selecting stores across all KafkaStreams instances
in case a store is contained within multiple streams instances with the same name.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1161
2021-11-18 14:08:22 -05:00
Oleg Zhurakousky
7840decc86 Changes related to GH-2245 from core 2021-11-16 14:43:43 +01:00
buildmaster
d37cbc79d7 Going back to snapshots 2021-11-03 09:32:29 +00:00
buildmaster
7a03eeed02 Update SNAPSHOT to 3.2.0-RC1 2021-11-03 09:31:02 +00:00
Oleg Zhurakousky
3a88839a5f Disable 'testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest' temporarily 2021-11-03 10:17:03 +01:00
Soby Chacko
c9a07729dd Version updates and compile fixes
Spring Kafka: 2.8.0-RC1
Spring Integration: 5.5.5
Kafka Clients: 3.0.0

Remove schema registry references.
Updates for removed classes/deprecations in Kafka Streams client.
2021-11-01 22:11:06 -04:00
Soby Chacko
07f10f6eb5 Cleaning up
* Disconnect spring-cloud-schema-registry-client from Kafka Streams binder
  that is used for testing
* Deprecate MessageConverterDelegateSerde
* Remove MessageConverterDelegateSerdeTest
2021-10-28 19:46:46 -04:00
Gary Russell
6fdc663349 GH-1031: Retry/DLQ in Binder or Container
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1031

Provide a mechanism to provide retry/dlq configuration when adding a custom error handler,
effectively moving this functionality from the binder to the container.

Address PR comments; fix test to use embedded broker.
2021-10-25 15:10:21 -04:00
buildmaster
f3a954fad7 Going back to snapshots 2021-10-19 09:29:32 +00:00
buildmaster
7be0f1be23 Update SNAPSHOT to 3.2.0-M3 2021-10-19 09:28:11 +00:00
Oleg Zhurakousky
0b687ad0ab Update SK to 2.8.0-RC1 2021-10-19 11:16:55 +02:00
Soby Chacko
c0bece64bd README cleanup
The overview doc gets unnecessarily copied to the GitHub
repository root README. This only needs to reside in the
reference manual docs. Cleaning up the process that generates
the root README for the repository.
2021-10-12 17:20:33 -04:00
Gary Russell
2efd29fb27 GH-1138: HealthIndicator Improvements
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1138

Don't report DOWN if a container is stopped normally.
This is a valid state when containers are not auto-startup or are stopped while
the app remains running.

Containers are stopped abnormally when
- a listener throws an `Error`
- a `CommonContainerStoppingErrorHandler` (or similar) is configured to stop the
  container after an error.
2021-10-04 17:13:01 -04:00
Soby Chacko
82a3306cb9 GH-1157: Issues with Kafka Streams and Kotlin
Kafka Streams binder erroneously tries to parse regular
non-Kafka-Streams Kotlin function registrations. Ignore
function beans ending in _registration in the Kafka Streams binder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1157
2021-10-01 11:09:58 -04:00
buildmaster
af0485e241 Going back to snapshots 2021-10-01 14:13:49 +00:00
buildmaster
ea8912b011 Update SNAPSHOT to 3.2.0-M2 2021-10-01 14:12:23 +00:00
Oleg Zhurakousky
d76d916970 Upgrade spring-kafka 2021-10-01 13:48:08 +02:00
Soby Chacko
ac0e462ed2 Doc clarification for Kafka Streams binder prefix
Polishing the docs

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1134
2021-09-28 18:07:52 -04:00
Soby Chacko
bd1b49222c GH-1156: Kafka Streams binder composition issues
When both regular Kafka and Kafka Streams functions are present,
the code that was added recently for function composition in
Kafka Streams binder was accidentally creating a bindable proxy
factory bean for non-Kafka-Streams functions. Resolving this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1156
2021-09-28 15:43:06 -04:00
Soby Chacko
9fd16416d6 GH-1149: Kafka Streams global config issues
When there are multiple functions, streamConfigGlobalProperties are overridden
for subsequent functions after a binding-specific config takes effect in a function.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1149
2021-09-24 10:01:26 -04:00
bono007
a4c38b3453 GH-1152: Property binding in Kafka Streams binder
Add default mappings provider for Kafka Streams (move kafka streams default mapping to new provider)

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1152
2021-09-23 13:19:35 -04:00
Soby Chacko
a8c948a6b2 GH-1148: Native changes required
KafkaBinderConfiguration class needs more pruning for native compilation

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1148
2021-09-21 18:22:33 -04:00
Soby Chacko
d57091d791 GH-1145: Remove destroying producer factory
Remove the unnecessary call to destroy the producer when checking for partitions.
This way, the producer is cached and reused the first time data is produced.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1145
2021-09-16 14:46:37 -04:00
Gary Russell
2b7f6ecb96 Fix Doc Typos for KafkaBindingRebalanceListener 2021-09-15 17:35:17 -04:00
Soby Chacko
53d32c2332 GH-1140: CommonErrorHandler per consumer binding (#1143)
* GH-1140: CommonErrorHandler per consumer binding

Setting CommonErrorHandler on consumer binding through its bean name.
If present, binder will resolve this bean and assign it on the listener
container.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1140

* Addressing PR review
2021-09-14 10:17:32 -04:00
Soby Chacko
b7ebc185e7 GH-1141: Streams cleanup docs
Clarify streams cleanup default in the docs.
Effective from Spring Kafka 2.7, no cleanup will be performed on shutdown.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1141
2021-09-13 16:35:36 -04:00
Soby Chacko
56e25383f8 Fix wrong property in docs
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1137
2021-09-03 15:38:50 -04:00
Soby Chacko
b4c7c36229 GH-1135: Disable container retries when no DLQ set (#1136)
* GH-1135: Disable container retries when no DLQ set

Disable default container retries when binding retries are enabled
(maxAttempts > 1) and no DLQ set.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1135

* Address PR review comments

* Addressing PR review
2021-09-01 13:29:36 -04:00
Soby Chacko
6eed115cc9 Kafka Streams binder tests cleanup 2021-08-27 20:03:56 -04:00
Soby Chacko
8e6d07cc7b Update Kafka Streams branching docs/tests
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1133
2021-08-27 18:16:42 -04:00
Soby Chacko
adde49aab3 Refactoring Kafka Streams join tests
Coalesce the stream join tests in Kafka Streams binder around the functional model.
Remove duplicating the same join tests using the StreamListener model.
2021-08-25 15:45:38 -04:00
Soby Chacko
e500138486 Consumer/Producer prefix in Kafka Streams binder (#1131)
* Consumer/Producer prefix in Kafka Streams binder

Kafka Streams allows the applications to separate the consumer and producer
properties using consumer/producer prefixes. Add these prefixes automatically
if they are missing from the application.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1065

* Addressing PR review comments
2021-08-25 13:57:42 -04:00
Soby Chacko
970bac41bb Update pom.xml
Make the Maven configuration consistent with core Spring Cloud Stream
2021-08-24 16:54:19 -04:00
Soby Chacko
afe39bf78a Kafka Streams binding lifecycle changes
Start Kafka Streams bindings only when they are not running.
Similarly, stop them only if they are running.

Without these guards in the bindings for KStream, KTable and GlobalKTable,
it may cause NPEs due to the backing concurrent collections in KafkaStreamsRegistry
not finding the proper KafkaStreams object, especially when the StreamsBuilderFactoryBean
is already stopped through the binder-provided manager.
2021-08-24 16:53:23 -04:00
Derek Eskens
c1ad3006e9 Fixing consumer config typo in overview.adoc
ConsumerConfigCustomizer is the correct name of the interface.
2021-08-23 19:33:15 -04:00
Soby Chacko
ba2c3a05c9 GH-1129: Kafka Binder Metrics Improvements
Avoid blocking committed() call in KafkaBinderMetrics in a loop for
each topic partition.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1129
2021-08-23 16:14:45 -04:00
Soby Chacko
739b499966 Honoring auto startup in Kafka Streams binder (#1127)
* Honoring auto startup in Kafka Streams binder

When using the Kafka Streams binder, the processors are started
unconditionally, i.e. autoStartup is always true by default.
If spring.kafka.streams.auto-startup is set, then honor that
as the auto-startup flag in the binder.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1126

* Addressing PR review comments

Auto startup flag is honored per individual consumer bindings as well.

* Addressing PR review comments

* Making KafkaStreamsRegistry#kafkaStreams set to use a concurrent Set.
2021-08-23 14:18:05 -04:00
Soby Chacko
1b26f5d629 GH-1112: Fix OOM in DlqSender.sendToDlq
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1112
2021-08-20 14:57:15 -04:00
Soby Chacko
99c323e314 Document Kafka Streams and Sleuth integration
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1102
2021-08-16 20:45:08 -04:00
Soby Chacko
f4d3715317 Recycle KafkaStreams Objects
In the event Kafka Streams bindings are restarted (stop/start)
using the actuator bindings endpoints, the underlying KafkaStreams
objects are not recycled. After restarting, it still sees the previous
KafkaStreams object. Addressing this issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1119
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1120
2021-08-12 15:47:40 -04:00
Soby Chacko
d7907bbdcc Docs updates for Kafka Streams skipAndContinue 2021-08-12 14:43:22 -04:00
buildmaster
9b5e735f74 Going back to snapshots 2021-07-30 15:12:01 +00:00
buildmaster
201668542b Update SNAPSHOT to 3.2.0-M1 2021-07-30 15:10:38 +00:00
Soby Chacko
a5f01f9d6f GH-1110: Kafka Streams state machine changes
Restore Kafka Streams error state behavior in the binder, equivalent
to Kafka Streams prior to 2.8. Starting with 2.8, users can customize
the way uncaught errors are interpreted via an UncaughtExceptionHandler
in the application. The binder now sets a default handler that shuts down
the Kafka Streams client if there is an uncaught error.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
2021-07-29 13:35:08 -04:00
Soby Chacko
912c47e3ac Fix failing tests in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1109
2021-07-28 18:41:22 -04:00
Soby Chacko
001882de4e Upgrade versions
Spring Kafka: 2.8.0-M1
Spring Integration Kafka: 5.5.2
Kafka: 2.8.0

Ignore a few Kafka Streams binder tests temporarily.
2021-07-27 20:01:08 -04:00
Soby Chacko
54ac274ea3 GH-1085: Allow KTable binding on the outbound
At the moment, Kafka Streams binder only allows KStream bindings on the outbound.
There is a delegation mechanism in which we still can use KStream for output binding
while allowing the applications to provide a KTable type as the function return type.

Update docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1085

Resolves #1105
2021-07-16 09:46:04 +02:00
Soby Chacko
80b707e5e9 Fixing Kafka binder tests.
* Adding a delay in testDlqWithNativeSerializationEnabledOnDlqProducer to avoid a race condition.
  Awaitility is used to wait for the proper condition in this test.

* In the MicroMeter registry tests, properly wait for the first message to arrive on the outbound
  so that the producer factory gets a chance to add the MicroMeter producer listener completely
  before the test assertions start.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1103
2021-07-14 18:54:16 -04:00
Soby Chacko
13474bdafb Fixing Kafka Binder tests 2021-07-13 19:39:04 -04:00
Soby Chacko
d0b4bdf438 Revert "Temporarily disabling KafkaBinderTests"
This reverts commit 0bedc606ce.
2021-07-13 15:44:28 -04:00
Soby Chacko
0bedc606ce Temporarily disabling KafkaBinderTests 2021-07-13 15:03:40 -04:00
Oleg Zhurakousky
b5cb32767b Fix POMs for 3.2 2021-07-13 16:50:01 +02:00
73 changed files with 2908 additions and 2677 deletions


@@ -18,14 +18,6 @@ image::https://badges.gitter.im/spring-cloud/spring-cloud-stream-binder-kafka.sv
// ======================================================================================
//= Overview
[partintro]
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Spring Cloud Stream concepts map onto Apache Kafka specific constructs.
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Apache Kafka Binder
=== Usage
@@ -50,812 +42,20 @@ Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown
</dependency>
----
== Apache Kafka Streams Binder
=== Usage
To use Apache Kafka Streams binder, you need to add `spring-cloud-stream-binder-kafka-streams` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
=== Overview
The following image shows a simplified diagram of how the Apache Kafka binder operates:
.Kafka Binder
image::{github-raw}/docs/src/main/asciidoc/images/kafka-binder.png[width=300,scaledwidth="50%"]
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
=== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to the binder, see the https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#binding-properties[binding properties] in core documentation.
==== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (for example, `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.
spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (both producers and consumers) passed to all clients created by the binder.
Because these properties are used by both producers and consumers, usage should be restricted to common properties -- for example, security settings.
Unknown Kafka producer or consumer properties provided through this configuration are filtered out and not allowed to propagate.
Properties here supersede any properties set in boot.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.consumerProperties::
Key/Value map of arbitrary Kafka client consumer properties.
In addition to supporting known Kafka consumer properties, unknown consumer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.headers::
The list of custom headers that are transported by the binder.
Only required when communicating with older applications (<= 1.3.x) with a `kafka-clients` version < 0.11.0.0. Newer versions support headers natively.
+
Default: empty.
spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.
spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
See the Kafka documentation for the producer `acks` property.
+
Default: `1`.
spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder configures on topics on which it produces or consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount * concurrency` settings of the producer (if either is larger).
+
Default: `1`.
spring.cloud.stream.kafka.binder.producerProperties::
Key/Value map of arbitrary Kafka client producer properties.
In addition to supporting known Kafka producer properties, unknown producer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
NOTE: If you are using Kafka broker versions prior to 2.4, then this value should be set to at least `1`.
Starting with version 3.0.8, the binder uses `-1` as the default value, which indicates that the broker 'default.replication.factor' property will be used to determine the number of replicas.
Check with your Kafka broker admins to see if there is a policy in place that requires a minimum replication factor. If that is the case, then, typically, the `default.replication.factor` matches that value and `-1` should be used, unless you need a replication factor greater than the minimum.
+
Default: `-1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder creates new partitions if required.
If set to `false`, the binder relies on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder fails to start.
+
Default: `false`.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix::
Enables transactions in the binder. See `transaction.id` in the Kafka documentation and https://docs.spring.io/spring-kafka/reference/html/_reference.html#transactions[Transactions] in the `spring-kafka` documentation.
When transactions are enabled, individual `producer` properties are ignored and all producers use the `spring.cloud.stream.kafka.binder.transaction.producer.*` properties.
+
Default `null` (no transactions)
spring.cloud.stream.kafka.binder.transaction.producer.*::
Global producer properties for producers in a transactional binder.
See `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` and <<kafka-producer-properties>> and the general producer properties supported by all binders.
+
Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
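+
For example, instead of setting this property, a custom header mapper can be exposed under the default lookup name `kafkaBinderHeaderMapper`. The following is a minimal sketch; the trusted package name is illustrative:
+
[source, java]
----
@Bean(name = "kafkaBinderHeaderMapper")
public BinderHeaderMapper kafkaBinderHeaderMapper() {
    BinderHeaderMapper mapper = new BinderHeaderMapper();
    // trust an application-specific package for JSON header (de)serialization (illustrative)
    mapper.addTrustedPackages("com.example.events");
    return mapper;
}
----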
spring.cloud.stream.kafka.binder.considerDownWhenAnyPartitionHasNoLeader::
Flag to set the binder health as `down` when any partition on the topic, regardless of the consumer that is receiving data from it, is found without a leader.
+
Default: `false`.
spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
The file will be moved to the location specified as the value for this property which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it will be moved to the system's temp directory, as returned by `System.getProperty("java.io.tmpdir")`.
The same applies if this value is present but the directory cannot be found on the filesystem or is not writable.
+
Default: none.
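+
For example, the following sketch supplies the binder's SSL truststore as a classpath resource along with a writable target directory (the paths, file name, and password are illustrative):
+
[source]
----
spring.cloud.stream.kafka.binder.certificateStoreDirectory=/var/kafka-certs
spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location=classpath:certs/kafka.truststore.jks
spring.cloud.stream.kafka.binder.configuration.ssl.truststore.password=changeit
----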
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
When `false`, each consumer is assigned a fixed set of partitions based on `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex`.
This requires both the `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex` properties to be set appropriately on each launched instance.
The value of the `spring.cloud.stream.instanceCount` property must typically be greater than 1 in this case.
+
Default: `true`.
ackEachRecord::
When `autoCommitOffset` is `true`, this setting dictates whether to commit the offset after each record is processed.
By default, offsets are committed after all records in the batch of records returned by `consumer.poll()` have been processed.
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
This property is deprecated as of 3.1 in favor of using `ackMode`.
If the `ackMode` is not set and batch mode is not enabled, `RECORD` ackMode will be used.
+
Default: `false`.
autoCommitOffset::
Starting with version 3.1, this property is deprecated.
See `ackMode` for more details on alternatives.
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` of type `org.springframework.kafka.support.Acknowledgment` is present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
+
Default: `true`.
ackMode::
Specify the container ack mode.
This is based on the AckMode enumeration defined in Spring Kafka.
If the `ackEachRecord` property is set to `true` and the consumer is not in batch mode, then the ack mode of `RECORD` is used; otherwise, the ack mode provided through this property is used.
autoCommitOnError::
In pollable consumers, if set to `true`, it always auto commits on error.
If not set (the default) or false, it will not auto commit in pollable consumers.
Note that this property is only applicable for pollable consumers.
+
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
See <<reset-offsets>> for more information about this property.
+
Default: `false`.
startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
See <<reset-offsets>> for more information about this property.
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name is configurable by setting the `dlqName` property or by defining a `@Bean` of type `DlqDestinationResolver`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
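+
For example, a `DlqDestinationResolver` bean can compute the DLQ topic per failed record. The sketch below assumes the resolver is a function of the failed `ConsumerRecord` and the exception; the topic naming scheme is illustrative:
+
[source, java]
----
@Bean
public DlqDestinationResolver dlqDestinationResolver() {
    // route each failed record to a per-topic DLQ (naming scheme is illustrative)
    return (record, exception) -> "custom-dlq." + record.topic();
}
----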
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
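+
For example, with two DLQ partitions (`dlqPartitions: 2`), a `DlqPartitionFunction` bean selects the target partition. The following is a minimal sketch, assuming the function receives the consumer group, the failed record, and the exception; the partitioning logic is illustrative:
+
[source, java]
----
@Bean
public DlqPartitionFunction dlqPartitionFunction() {
    // keyless records go to partition 0, keyed records to partition 1 (illustrative)
    return (group, record, exception) -> record.key() == null ? 0 : 1;
}
----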
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to Kafka consumer properties, other configuration properties can be passed here.
For example, properties needed by the application, such as `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (i.e., useNativeDecoding: true), the application must provide corresponding key/value serializers for the DLQ.
This must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
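+
For example, with native decoding enabled, the DLQ serializers could be configured as follows (the binding name `input` and the serializer classes are illustrative):
+
[source]
----
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.key.serializer=org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.value.serializer=org.apache.kafka.common.serialization.StringSerializer
----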
standardHeaders::
Indicates which standard headers are populated by the inbound channel adapter.
Allowed values: `none`, `id`, `timestamp`, or `both`.
Useful if using native deserialization and the first component to receive a message needs an `id` (such as an aggregator that is configured to use a JDBC message store).
+
Default: `none`
converterBeanName::
The name of a bean that implements `RecordMessageConverter`. Used in the inbound channel adapter to replace the default `MessagingMessageConverter`.
+
Default: `null`
idleEventInterval::
The interval, in milliseconds, between events indicating that no messages have recently been received.
Use an `ApplicationListener<ListenerContainerIdleEvent>` to receive these events.
See <<pause-resume>> for a usage example.
+
Default: `30000`
destinationIsPattern::
When true, the destination is treated as a regular expression `Pattern` used to match topic names by the broker.
When true, topics are not provisioned, and `enableDlq` is not allowed, because the binder does not know the topic names during the provisioning phase.
Note, the time taken to detect new topics that match the pattern is controlled by the consumer property `metadata.max.age.ms`, which (at the time of writing) defaults to 300,000ms (5 minutes).
This can be configured using the `configuration` property above.
+
Default: `false`
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
[[reset-offsets]]
==== Resetting Offsets
When an application starts, the initial position in each assigned partition depends on two properties `startOffset` and `resetOffsets`.
If `resetOffsets` is `false`, normal Kafka consumer https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset[`auto.offset.reset`] semantics apply.
That is, if there is no committed offset for a partition for the binding's consumer group, the position is `earliest` or `latest`.
By default, bindings with an explicit `group` use `earliest`, and anonymous bindings (with no `group`) use `latest`.
These defaults can be overridden by setting the `startOffset` binding property.
There will be no committed offset(s) the first time the binding is started with a particular `group`.
The other condition under which no committed offset exists is when the offset has expired.
With modern brokers (since 2.1), and default broker properties, the offsets are expired 7 days after the last member leaves the group.
See the https://kafka.apache.org/documentation/#brokerconfigs_offsets.retention.minutes[`offsets.retention.minutes`] broker property for more information.
When `resetOffsets` is `true`, the binder applies similar semantics to those that apply when there is no committed offset on the broker, as if this binding has never consumed from the topic; i.e. any current committed offset is ignored.
The following are two use cases where this might be used:
1. Consuming from a compacted topic containing key/value pairs.
Set `resetOffsets` to `true` and `startOffset` to `earliest`; the binding will perform a `seekToBeginning` on all newly assigned partitions.
2. Consuming from a topic containing events, where you are only interested in events that occur while this binding is running.
Set `resetOffsets` to `true` and `startOffset` to `latest`; the binding will perform a `seekToEnd` on all newly assigned partitions.
IMPORTANT: If a rebalance occurs after the initial assignment, the seeks will only be performed on any newly assigned partitions that were not assigned during the initial assignment.
For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets: true` is ignored.
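As a sketch of the first use case above (the binding name `input` is illustrative), the following settings cause a `seekToBeginning` on all newly assigned partitions:
[source]
----
spring.cloud.stream.kafka.bindings.input.consumer.resetOffsets=true
spring.cloud.stream.kafka.bindings.input.consumer.startOffset=earliest
----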
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.bindings.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
Bear in mind that batch mode is not supported with `@StreamListener` - it only works with the newer functional programming model.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
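For example, a functional batch consumer might look like the following sketch (the function name and payload type are illustrative, and it assumes `spring.cloud.stream.bindings.batchInput-in-0.consumer.batch-mode=true` is set):
[source, java]
----
@Bean
public Consumer<List<String>> batchInput() {
    return records -> {
        // the entire poll result arrives as one list
        System.out.println("Received a batch of " + records.size() + " records");
        records.forEach(System.out::println);
    };
}
----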
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
+
Default: `16384`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
In the case of a regular processor (`Function<String, String>` or `Function<Message<?>, Message<?>>`), if the produced key needs to be the same as the incoming key from the topic, this property can be set as below.
`spring.cloud.stream.kafka.bindings.<output-binding-name>.producer.messageKeyExpression: headers['kafka_receivedMessageKey']`
There is an important caveat to keep in mind for reactive functions.
In that case, it is up to the application to manually copy the headers from the incoming messages to outbound messages.
You can set the header, e.g. `myKey` and use `headers['myKey']` as suggested above or, for convenience, simply set the `KafkaHeaders.MESSAGE_KEY` header, and you do not need to set this property at all.
+
Default: `none`.
headerPatterns::
A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka `Headers` in the `ProducerRecord`.
Patterns can begin or end with the wildcard character (asterisk).
Patterns can be negated by prefixing with `!`.
Matching stops after the first match (positive or negative).
For example `!ask,as*` will pass `ash` but not `ask`.
`id` and `timestamp` are never mapped.
+
Default: `*` (all headers - except the `id` and `timestamp`)
configuration::
Map with a key/value pair containing generic Kafka producer properties.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
+
Default: `false`.
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
+
Default: null
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), the binder fails to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy`, `lz4` and `zstd`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
closeTimeout::
Timeout, in seconds, to wait when closing the producer.
+
Default: `30`
allowNonTransactional::
Normally, all output bindings associated with a transactional binder will publish in a new transaction, if one is not already in process.
This property allows you to override that behavior.
If set to true, records published to this output binding will not be run in a transaction, unless one is already in process.
+
Default: `false`
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
Use the corresponding input channel name for your example.
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
----
[source,xml]
----
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
----
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
====== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
[source,bash]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
--spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
====== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
The following properties can be used to configure the login context of the Kafka client:
spring.cloud.stream.kafka.binder.jaas.loginModule::
The login module name. Not necessary to be set in normal cases.
+
Default: `com.sun.security.auth.module.Krb5LoginModule`.
spring.cloud.stream.kafka.binder.jaas.controlFlag::
The control flag of the login module.
+
Default: `required`.
spring.cloud.stream.kafka.binder.jaas.options::
Map with a key/value pair containing the login module options.
+
Default: Empty map.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:
[source,bash]
----
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.autoCreateTopics=false \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
--spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
--spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
--spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
--spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----
The preceding example represents the equivalent of the following JAAS file:
[source]
----
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="kafka-client-1@EXAMPLE.COM";
};
----
If the topics required already exist on the broker or will be created by an administrator, autocreation can be turned off and only client JAAS properties need to be sent.
NOTE: Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the `-Djava.security.auth.login.config` system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
NOTE: Be careful when using the `autoCreateTopics` and `autoAddPartitions` with Kerberos.
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.
The following simple application shows how to pause and resume:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions because the retries will run in the original transaction, which may be rolled back and any published records will be rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a `@Scheduled` method), you must obtain a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then you would use normal Spring transaction support, e.g. `TransactionTemplate` or `@Transactional`, for example:
====
[source, java]
----
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }

}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling[this section on error handling] for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
* `failedMessage`: The Spring Messaging `Message<?>` that failed to be sent.
* `record`: The raw `ProducerRecord` that was created from the `failedMessage`
There is no automatic handling of producer exceptions (such as sending to a <<kafka-dlq-processing, Dead-Letter queue>>).
You can consume these exceptions with your own Spring Integration flow.
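For example, such a flow could be as simple as the following sketch; it assumes the failures arrive on the global `errorChannel` (a binding-specific error channel can be used instead):
[source, java]
----
@ServiceActivator(inputChannel = "errorChannel")
public void handleSendFailures(ErrorMessage errorMessage) {
    if (errorMessage.getPayload() instanceof KafkaSendFailureException) {
        KafkaSendFailureException failure = (KafkaSendFailureException) errorMessage.getPayload();
        // failedMessage is the Spring Messaging message; record is the raw ProducerRecord
        System.out.println("Failed to send " + failure.getRecord());
    }
}
----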
[[kafka-metrics]]
=== Kafka Metrics
Kafka binder module exposes the following metrics:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not been yet consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer library.
The binder creates the `KafkaBinderMetrics` bean if Micrometer is on the classpath and no other such beans are provided by the application.
The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
You can prevent `KafkaBinderMetrics` from creating the necessary infrastructure (such as consumers) and from reporting the metrics by providing the following component in the application:
[source, java]
----
@Component
class NoOpBindingMeters {

    NoOpBindingMeters(MeterRegistry registry) {
        registry.config().meterFilter(
                MeterFilter.denyNameStartsWith(KafkaBinderMetrics.OFFSET_LAG_METRIC_NAME));
    }

}
----
More details on how to suppress meters selectively can be found https://micrometer.io/docs/concepts#_meter_filters[here].
[[kafka-tombstones]]
=== Tombstone Records (null record values)
When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
====
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
@Payload(required = false) Customer customer) {
// customer is null if a tombstone record
...
}
----
====
[[rebalance-listener]]
=== Using a KafkaBindingRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
----
public interface KafkaBindingRebalanceListener {
/**
* Invoked by the container before any pending offsets are committed.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
*/
default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
Collection<TopicPartition> partitions) {
}
/**
* Invoked by the container after any pending offsets are committed.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
*/
default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
}
/**
* Invoked when partitions are initially assigned or after a rebalance.
* Applications might only want to perform seek operations on an initial assignment.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
* @param initial true if this is the initial assignment.
*/
default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer, Collection<TopicPartition> partitions,
boolean initial) {
}
}
----
====
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration
If you want advanced customization of consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka,
you can implement the following customizers.
* ConsumerConfigCustomizer
* ProducerConfigCustomizer
Both of these interfaces provide a way to configure the config map used for consumer and producer properties.
For example, if you want to gain access to a bean that is defined at the application level, you can inject that in the implementation of the `configure` method.
When the binder discovers that these customizers are available as beans, it will invoke the `configure` method right before creating the consumer and producer factories.
Both of these interfaces also provide access to both the binding and destination names so that they can be accessed while customizing producer and consumer properties.
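As a minimal sketch (assuming, per the description above, that the `configure` method receives the configuration map together with the binding and destination names; the binding name and property value below are purely illustrative), a consumer-side customizer could look like this:
```
@Bean
public ConsumerConfigCustomizer consumerConfigCustomizer() {
    return (consumerProperties, bindingName, destination) -> {
        // Illustrative only: tweak an arbitrary Kafka consumer property for one binding
        if ("process-in-0".equals(bindingName)) {
            consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 250);
        }
    };
}
```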
[[admin-client-config-customization]]
=== Customizing AdminClient Configuration
As with consumer and producer config customization above, applications can also customize the configuration for admin clients by providing an `AdminClientConfigCustomizer`.
The `configure` method of `AdminClientConfigCustomizer` provides access to the admin client properties, through which you can apply further customization.
The binder's Kafka topic provisioner gives the highest precedence to the properties provided through this customizer.
Here is an example of providing this customizer bean.
```
@Bean
public AdminClientConfigCustomizer adminClientConfigCustomizer() {
return props -> {
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
};
}
```
= Appendices
[appendix]
[[building]]


@@ -7,7 +7,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.4-SNAPSHOT</version>
<version>3.2.1</version>
</parent>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>


@@ -11,8 +11,43 @@ image::https://badges.gitter.im/spring-cloud/spring-cloud-stream-binder-kafka.sv
// ======================================================================================
//= Overview
include::overview.adoc[]
== Apache Kafka Binder
=== Usage
To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----
== Apache Kafka Streams Binder
=== Usage
To use Apache Kafka Streams binder, you need to add `spring-cloud-stream-binder-kafka-streams` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
----
= Appendices
[appendix]


@@ -9,7 +9,6 @@
|spring.cloud.stream.dynamic-destinations | `[]` | A list of destinations that can be bound dynamically. If set, only listed destinations can be bound.
|spring.cloud.stream.function.batch-mode | `false` |
|spring.cloud.stream.function.bindings | |
|spring.cloud.stream.function.definition | | Definition of functions to bind. If several functions need to be composed into one, use pipes (e.g., 'fooFunc\|barFunc')
|spring.cloud.stream.instance-count | `1` | The number of deployed instances of an application. Default: 1. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-count" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index | `0` | The instance id of the application: a number from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index-list | | A list of instance id's from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index-list" where 'foo' is the name of the binding. This setting will override the one set in 'spring.cloud.stream.instance-index'
@@ -57,11 +56,6 @@
|spring.cloud.stream.metrics.schedule-interval | `60s` | Interval expressed as Duration for scheduling metrics snapshots publishing. Defaults to 60 seconds
|spring.cloud.stream.override-cloud-connectors | `false` | This property is only applicable when the cloud profile is active and Spring Cloud Connectors are provided with the application. If the property is false (the default), the binder detects a suitable bound service (for example, a RabbitMQ service bound in Cloud Foundry for the RabbitMQ binder) and uses it for creating connections (usually through Spring Cloud Connectors). When set to true, this property instructs binders to completely ignore the bound services and rely on Spring Boot properties (for example, relying on the spring.rabbitmq.* properties provided in the environment for the RabbitMQ binder). The typical usage of this property is to be nested in a customized environment when connecting to multiple systems.
|spring.cloud.stream.pollable-source | `none` | A semi-colon delimited list of binding names of pollable sources. Binding names follow the same naming convention as functions. For example, name '...pollable-source=foobar' will be accessible as the 'foobar-in-0' binding
|spring.cloud.stream.poller.cron | | Cron expression value for the Cron Trigger.
|spring.cloud.stream.poller.fixed-delay | `1000` | Fixed delay for default poller.
|spring.cloud.stream.poller.initial-delay | `0` | Initial delay for periodic triggers.
|spring.cloud.stream.poller.max-messages-per-poll | `1` | Maximum messages per poll for the default poller.
|spring.cloud.stream.poller.time-unit | | The TimeUnit to apply to delay values.
|spring.cloud.stream.sendto.destination | `none` | The name of the header used to determine the name of the output destination
|spring.cloud.stream.source | | A colon delimited string representing the names of the sources based on which source bindings will be created. This is primarily to support cases where source binding may be required without providing a corresponding Supplier. (e.g., for cases where the actual source of data is outside of scope of spring-cloud-stream - HTTP -> Stream)


@@ -252,11 +252,32 @@ The input from the three partial functions which are `KStream`, `GlobalKTable`,
Input bindings are named as `enrichOrder-in-0`, `enrichOrder-in-1` and `enrichOrder-in-2` respectively. Output binding is named as `enrichOrder-out-0`.
With curried functions, you can virtually have any number of inputs. However, keep in mind that, anything more than a smaller number of inputs and partially applied functions for them as above in Java might lead to unreadable code.
Therefore if your Kafka Streams application requires more than a reasonably smaller number of input bindings and you want to use this functional model, then you may want to rethink your design and decompose the application appropriately.
Therefore if your Kafka Streams application requires more than a reasonably smaller number of input bindings, and you want to use this functional model, then you may want to rethink your design and decompose the application appropriately.
===== Output Bindings
Kafka Streams binder allows types of either `KStream` or `KTable` as output bindings.
Behind the scenes, the binder uses the `to` method on `KStream` to send the resultant records to the output topic.
If the application provides a `KTable` as output in the function, the binder still uses this technique by delegating to the `to` method of `KStream`.
For example both functions below will work:
```
@Bean
public Function<KStream<String, String>, KTable<String, String>> foo() {
return KStream::toTable;
}

@Bean
public Function<KTable<String, String>, KStream<String, String>> bar() {
return KTable::toStream;
}
```
===== Multiple Output Bindings
Kafka Streams allows to write outbound data into multiple topics. This feature is known as branching in Kafka Streams.
Kafka Streams allows writing outbound data into multiple topics. This feature is known as branching in Kafka Streams.
When using multiple output bindings, you need to provide an array of KStream (`KStream[]`) as the outbound return type.
Here is an example:
@@ -270,21 +291,30 @@ public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
return input -> {
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.split()
.branch(isEnglish)
.branch(isFrench)
.branch(isSpanish)
.noDefaultBranch();
return stringKStreamMap.values().toArray(new KStream[0]);
};
}
----
The programming model remains the same, however the outbound parameterized type is `KStream[]`.
The default output binding names are `process-out-0`, `process-out-1`, `process-out-2` respectively.
The reason why the binder generates three output bindings is because it detects the length of the returned `KStream` array.
The default output binding names are `process-out-0`, `process-out-1`, `process-out-2` respectively for the function above.
The reason why the binder generates three output bindings is because it detects the length of the returned `KStream` array as three.
Note that in this example, we provide a `noDefaultBranch()`; if we have used `defaultBranch()` instead, that would have required an extra output binding, essentially returning a `KStream` array of length four.
===== Summary of Function based Programming Styles for Kafka Streams
@@ -383,8 +413,7 @@ The default output binding for this example becomes `curriedFoobar-out-0`.
====== Special note on using `KTable` as output in function composition
When using function composition, for intermediate functions, you can use `KTable` as output.
For instance, lets say you have the following two functions.
Let's say you have the following two functions.
```
@Bean
@@ -399,10 +428,7 @@ public Function<KTable<String, String>, KStream<String, String>> bar() {
}
```
You can compose them as `foo|bar` although foo's output is `KTable`.
In normal case, when you use `foo` as standalone, this will not work, as the binder does not support `KTable` as the final output.
Note that in the example above, bar's output is still a `KStream`.
We are only able to use `foo` which has a `KTable` output, since we are composing with another function that has `KStream` as its output.
You can compose them as `foo|bar`, but keep in mind that the second function (`bar` in this case) must have a `KTable` as input since the first function (`foo`) has `KTable` as output.
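As a reminder, such a composition is expressed through the function definition property, for example:
```
spring.cloud.stream.function.definition: foo|bar
```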
==== Imperative programming model.
@@ -974,7 +1000,7 @@ One important thing to keep in mind when providing an implementation for `DlqDes
This is because there is no way for the binder to infer the names of all the DLQ topics the implementation might send to.
Therefore, if you provide DLQ names using this strategy, it is the application's responsibility to ensure that those topics are created beforehand.
If `DlqDestinationResolver` is present in the application as a bean, that takes higher prcedence.
If `DlqDestinationResolver` is present in the application as a bean, that takes higher precedence.
If you do not want to follow this approach and rather provide a static DLQ name using configuration, you can set the following property.
[source]
@@ -1002,10 +1028,10 @@ public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String,
}
```
and you only want to enable DLQ on the first input binding and logAndSkip on the second binding, then you can do so on the consumer as below.
and you only want to enable DLQ on the first input binding and skipAndContinue on the second binding, then you can do so on the consumer as below.
`spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.deserializationExceptionHandler: sendToDlq`
`spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.deserializationExceptionHandler: logAndSkip`
`spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.deserializationExceptionHandler: skipAndContinue`
Setting deserialization exception handlers this way has a higher precedence than setting at the binder level.
@@ -1692,8 +1718,9 @@ spring.cloud.stream.bindings.enrichOrder-out-0.binder=kafka1 #kstream
=== State Cleanup
By default, the `Kafkastreams.cleanup()` method is called when the binding is stopped.
See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
By default, no local state is cleaned up when the binding is stopped.
This is the same behavior effective from Spring Kafka version 2.7.
See https://docs.spring.io/spring-kafka/reference/html/#streams-config[Spring Kafka documentation] for more details.
To modify this behavior simply add a single `CleanupConfig` `@Bean` (configured to clean up on start, stop, or neither) to the application context; the bean will be detected and wired into the factory bean.
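For example, a minimal sketch of such a bean, configured to clean up local state on stop but not on start, could look like this:
```
@Bean
public CleanupConfig cleanupConfig() {
    // CleanupConfig(cleanupOnStart, cleanupOnStop)
    return new CleanupConfig(false, true);
}
```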
=== Kafka Streams topology visualization
@@ -1853,6 +1880,165 @@ When there are multiple bindings present on a single function, invoking these op
This is because all the bindings on a single function are backed by the same `StreamsBuilderFactoryBean`.
Therefore, for the function above, either `function-in-0` or `function-out-0` will work.
=== Manually starting Kafka Streams processors
Spring Cloud Stream Kafka Streams binder offers an abstraction called `StreamsBuilderFactoryManager` on top of the `StreamsBuilderFactoryBean` from Spring for Apache Kafka.
This manager API is used for controlling the multiple `StreamsBuilderFactoryBean` per processor in a binder based application.
Therefore, when using the binder, if you manually want to control the auto starting of the various `StreamsBuilderFactoryBean` objects in the application, you need to use `StreamsBuilderFactoryManager`.
You can use the property `spring.kafka.streams.auto-startup` and set this to `false` in order to turn off auto starting of the processors.
Then, in the application, you can use something as below to start the processors using `StreamsBuilderFactoryManager`.
```
@Bean
public ApplicationRunner runner(StreamsBuilderFactoryManager sbfm) {
return args -> {
sbfm.start();
};
}
```
This feature is handy when you want your application to start on the main thread and let the Kafka Streams processors start separately.
For example, when you have a large state store that needs to be restored and the processors are started normally (the default), the restore may block your application from starting.
If you are using some sort of liveness probe mechanism (for example on Kubernetes), it may think that the application is down and attempt a restart.
In order to correct this, you can set `spring.kafka.streams.auto-startup` to `false` and follow the approach above.
Keep in mind that, when using the Spring Cloud Stream binder, you do not deal directly with `StreamsBuilderFactoryBean` from Spring for Apache Kafka but rather with `StreamsBuilderFactoryManager`, as the `StreamsBuilderFactoryBean` objects are managed internally by the binder.
=== Manually starting Kafka Streams processors selectively
While the approach laid out above will unconditionally apply auto start `false` to all the Kafka Streams processors in the application through `StreamsBuilderFactoryManager`, it is often desirable that only individually selected Kafka Streams processors are not auto started.
For instance, let us assume that you have three different functions (processors) in your application and for one of the processors, you do not want to start it as part of the application startup.
Here is an example of such a situation.
```
@Bean
public Function<KStream<?, ?>, KStream<?, ?>> process1() {
    return input -> input; // processing logic omitted
}

@Bean
public Consumer<KStream<?, ?>> process2() {
    return input -> { }; // processing logic omitted
}

@Bean
public BiFunction<KStream<?, ?>, KTable<?, ?>, KStream<?, ?>> process3() {
    return (input, table) -> input; // processing logic omitted
}
```
In this scenario above, if you set `spring.kafka.streams.auto-startup` to `false`, then none of the processors will auto start during the application startup.
In that case, you have to programmatically start them as described above by calling `start()` on the underlying `StreamsBuilderFactoryManager`.
However, if you have a use case for selectively disabling only one processor, then you have to set `auto-startup` on an individual binding for that processor.
Let us assume that we don't want our `process3` function to auto start.
This is a `BiFunction` with two input bindings - `process3-in-0` and `process3-in-1`.
In order to avoid auto start for this processor, you can pick any of these input bindings and set `auto-startup` on them.
It does not matter which binding you pick; because both bindings share the same factory bean, setting `auto-startup` to `false` on one of them is sufficient, although you may set it on both for clarity.
Here is the Spring Cloud Stream property that you can use to disable auto startup for this processor.
```
spring.cloud.stream.bindings.process3-in-0.consumer.auto-startup: false
```
or
```
spring.cloud.stream.bindings.process3-in-1.consumer.auto-startup: false
```
Then, you can manually start the processor either using the REST endpoint or using the `BindingsEndpoint` API as shown below.
For this, you need to ensure that you have the Spring Boot actuator dependency on the classpath.
```
curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/process3-in-0
```
or
```
@Autowired
BindingsEndpoint endpoint;
@Bean
public ApplicationRunner runner() {
return args -> {
endpoint.changeState("process3-in-0", State.STARTED);
};
}
```
See https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#binding_visualization_control[this section] from the reference docs for more details on this mechanism.
NOTE: When controlling the bindings by disabling `auto-startup` as described in this section, please note that this is only available for consumer bindings.
In other words, if you use the producer binding, `process3-out-0`, that does not have any effect in terms of disabling the auto starting of the processor, although this producer binding uses the same `StreamsBuilderFactoryBean` as the consumer bindings.
=== Tracing using Spring Cloud Sleuth
When Spring Cloud Sleuth is on the classpath of a Spring Cloud Stream Kafka Streams binder based application, both its consumer and producer are automatically instrumented with tracing information.
However, in order to trace any application specific operations, those need to be explicitly instrumented by the user code.
This can be done by injecting the `KafkaStreamsTracing` bean from Spring Cloud Sleuth into the application and then invoking various Kafka Streams operations through this injected bean.
Here are some examples of using it.
```
@Bean
public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> clicks(KafkaStreamsTracing kafkaStreamsTracing) {
return (userClicksStream, userRegionsTable) -> (userClicksStream
.transformValues(kafkaStreamsTracing.peek("span-1", (key, value) -> LOG.info("key/value: " + key + "/" + value)))
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ?
"UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.transform(kafkaStreamsTracing.map("span-2", (key, value) -> {
LOG.info("Click Info: " + value.getRegion() + "/" + value.getClicks());
return new KeyValue<>(value.getRegion(),
value.getClicks());
}))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum, Materialized.as(CLICK_UPDATES))
.toStream());
}
```
In the example above, there are two places where it adds explicit tracing instrumentation.
First, we are logging the key/value information from the incoming `KStream`.
When this information is logged, the associated span and trace IDs get logged as well so that a monitoring system can track them and correlate with the same span id.
Second, when we call a `map` operation, instead of calling it directly on the `KStream` class, we wrap it inside a `transform` operation and then call `map` from `KafkaStreamsTracing`.
In this case also, the logged message will contain the span ID and trace ID.
Here is another example, where we use the low-level transformer API for accessing the various Kafka Streams headers.
When spring-cloud-sleuth is on the classpath, all the tracing headers can also be accessed like this.
```
@Bean
public Function<KStream<String, String>, KStream<String, String>> process(KafkaStreamsTracing kafkaStreamsTracing) {
return input -> input.transform(kafkaStreamsTracing.transformer(
"transformer-1",
() -> new Transformer<String, String, KeyValue<String, String>>() {
ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public KeyValue<String, String> transform(String key, String value) {
LOG.info("Headers: " + this.context.headers());
LOG.info("K/V:" + key + "/" + value);
// More transformations, business logic execution, etc. go here.
return KeyValue.pair(key, value);
}
@Override
public void close() {
}
}));
}
```
=== Configuration Options
This section contains the configuration options used by the Kafka Streams binder.
@@ -1862,6 +2048,8 @@ For common configuration options and properties pertaining to binder, refer to t
==== Kafka Streams Binder Properties
The following properties are available at the binder level and must be prefixed with `spring.cloud.stream.kafka.streams.binder.`
Any Kafka binder provided properties re-used in Kafka Streams binder must be prefixed with `spring.cloud.stream.kafka.streams.binder` instead of `spring.cloud.stream.kafka.binder`.
The only exception to this rule is when defining the Kafka bootstrap server property in which case either prefix works.
configuration::
Map with a key/value pair containing properties pertaining to Apache Kafka Streams API.
@@ -1906,7 +2094,7 @@ deserializationExceptionHandler::
Deserialization error handler type.
This handler is applied at the binder level and thus applied against all input binding in the application.
There is a way to control it in a more fine-grained way at the consumer binding level.
Possible values are - `logAndContinue`, `logAndFail` or `sendToDlq`
Possible values are - `logAndContinue`, `logAndFail`, `skipAndContinue` or `sendToDlq`
+
Default: `logAndFail`
@@ -1933,6 +2121,12 @@ Arbitrary consumer properties at the binder level.
producerProperties::
Arbitrary producer properties at the binder level.
includeStoppedProcessorsForHealthCheck::
When the bindings for a processor are stopped through the actuator endpoint, that processor does not participate in the health check by default.
Set this property to `true` to enable the health check for all processors, including the ones that are currently stopped through the bindings actuator endpoint.
+
Default: false
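For example, to opt in:
```
spring.cloud.stream.kafka.streams.binder.includeStoppedProcessorsForHealthCheck: true
```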
==== Kafka Streams Producer Properties
The following properties are _only_ available for Kafka Streams producers and must be prefixed with `spring.cloud.stream.kafka.streams.bindings.<binding name>.producer.`
@@ -2013,7 +2207,7 @@ Unlike the message channel based binder, Kafka Streams binder does not seek to b
deserializationExceptionHandler::
Deserialization error handler type.
This handler is applied per consumer binding as opposed to the binder level property described before.
Possible values are - `logAndContinue`, `logAndFail` or `sendToDlq`
Possible values are - `logAndContinue`, `logAndFail`, `skipAndContinue` or `sendToDlq`
+
Default: `logAndFail`


@@ -151,6 +151,9 @@ Default: `false`.
spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
This is true for both broker level certificates (`ssl.truststore.location` and `ssl.keystore.location`) and certificates intended for schema registry (`schema.registry.ssl.truststore.location` and `schema.registry.ssl.keystore.location`).
Keep in mind that the truststore and keystore classpath locations must be provided under `spring.cloud.stream.kafka.binder.configuration...`.
For example, `spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location`, `spring.cloud.stream.kafka.binder.configuration.schema.registry.ssl.truststore.location`, and so on.
The file will be moved to the location specified as the value for this property which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it will be moved to System's temp directory as returned by `System.getProperty("java.io.tmpdir")`.
This is also true if this value is present but the directory cannot be found on the filesystem or is not writable.
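As a hedged illustration (the directory and certificate paths below are assumptions), the relevant configuration could look like this:
[source]
----
spring.cloud.stream.kafka.binder.certificateStoreDirectory: /var/kafka-certs
spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location: classpath:certs/kafka.truststore.jks
spring.cloud.stream.kafka.binder.configuration.schema.registry.ssl.truststore.location: classpath:certs/schema-registry.truststore.jks
----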
@@ -218,7 +221,7 @@ Note that this property is only applicable for pollable consumers.
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
Must be false if a `KafkaBindingRebalanceListener` is provided; see <<rebalance-listener>>.
See <<reset-offsets>> for more information about this property.
+
Default: `false`.
@@ -321,6 +324,12 @@ When using a transactional binder, the offset of a recovered record (e.g. when r
Setting this property to `false` suppresses committing the offset of recovered record.
+
Default: true.
commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to express error handlers, if the application does not want to use a `ListenerContainerCustomizer` and then check the destination/group combination to set an error handler.
+
Default: none.
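As a minimal sketch (the bean name, binding name, and back-off values are illustrative), such a handler could be provided and then referenced from the binding:
====
[source, java]
----
@Bean
public CommonErrorHandler myCommonErrorHandler() {
    // Retry a failed record three more times, one second apart, before giving up
    return new DefaultErrorHandler(new FixedBackOff(1000L, 3L));
}
----
====
and then, on the consumer binding, `spring.cloud.stream.kafka.bindings.process-in-0.consumer.commonErrorHandlerBeanName: myCommonErrorHandler`.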
[[reset-offsets]]
==== Resetting Offsets
@@ -442,18 +451,18 @@ Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
Default: `false`.
+
Default: `false`.
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`ResultMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
+
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
+
Default: null.
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -490,11 +499,11 @@ Default: `false`
In this section, we show the use of the preceding properties for specific scenarios.
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
===== Example: Setting `ackMode` to `MANUAL` and Relying on Manual Acknowledgement
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.ackMode` be set to `MANUAL`.
Use the corresponding input channel name for your example.
[source]
@@ -799,10 +808,10 @@ public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
====
[[rebalance-listener]]
=== Using a KafkaRebalanceListener
=== Using a KafkaBindingRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
@@ -849,13 +858,95 @@ public interface KafkaBindingRebalanceListener {
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
[[retry-and-dlq-processing]]
=== Retry and Dead Letter Processing
By default, when you configure retry (e.g. `maxAttempts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
There are situations where it is preferable to move this functionality to the listener container, such as:
* The aggregate of retries and delays will exceed the consumer's `max.poll.interval.ms` property, potentially causing a partition rebalance.
* You wish to publish the dead letter to a different Kafka cluster.
* You wish to add retry listeners to the error handler.
* ...
To configure moving this functionality from the binder to the container, define a `@Bean` of type `ListenerContainerWithDlqAndRetryCustomizer`.
This interface has the following methods:
====
[source, java]
----
/**
* Configure the container.
* @param container the container.
* @param destinationName the destination name.
* @param group the group.
* @param dlqDestinationResolver a destination resolver for the dead letter topic (if
* enableDlq).
* @param backOff the backOff using retry properties (if configured).
* @see #retryAndDlqInBinding(String, String)
*/
void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group,
@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff);
/**
* Return false to move retries and DLQ from the binding to a customized error handler
* using the retry metadata and/or a {@code DeadLetterPublishingRecoverer} when
* configured via
* {@link #configure(AbstractMessageListenerContainer, String, String, BiFunction, BackOff)}.
* @param destinationName the destination name.
* @param group the group.
* @return true to retain retries and DLQ processing within the binding; false to move them to the container.
*/
default boolean retryAndDlqInBinding(String destinationName, String group) {
return true;
}
----
====
The destination resolver and `BackOff` are created from the binding properties (if configured).
You can then use these to create a custom error handler and dead letter publisher; for example:
====
[source, java]
----
@Bean
ListenerContainerWithDlqAndRetryCustomizer cust(KafkaTemplate<?, ?> template) {
return new ListenerContainerWithDlqAndRetryCustomizer() {
@Override
public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
String group,
@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff) {
if (destinationName.equals("topicWithLongTotalRetryConfig")) {
ConsumerRecordRecoverer dlpr = new DeadLetterPublishingRecoverer(template,
        dlqDestinationResolver);
container.setCommonErrorHandler(new DefaultErrorHandler(dlpr, backOff));
}
}
@Override
public boolean retryAndDlqInBinding(String destinationName, String group) {
return !destinationName.contains("topicWithLongTotalRetryConfig");
}
};
}
----
====
Now, a partition rebalance becomes a concern only if a single retry delay exceeds the consumer's `max.poll.interval.ms` property, rather than the aggregate of all retries and delays.
[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration
If you want advanced customization of consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka,
you can implement the following customizers.
* ConsusumerConfigCustomizer
* ConsumerConfigCustomizer
* ProducerConfigCustomizer
Both of these interfaces provide a way to configure the config map used for consumer and producer properties.


@@ -44,6 +44,8 @@ include::partitions.adoc[]
include::kafka-streams.adoc[]
include::tips.adoc[]
= Appendices
[appendix]
include::building.adoc[]


@@ -0,0 +1,865 @@
== Tips, Tricks and Recipes
=== Simple DLQ with Kafka
==== Problem Statement
As a developer, I want to write a consumer application that processes records from a Kafka topic.
However, if some error occurs in processing, I don't want the application to stop completely.
Instead, I want to send the record in error to a DLT (Dead-Letter-Topic) and then continue processing new records.
==== Solution
The solution for this problem is to use the DLQ feature in Spring Cloud Stream.
For the purposes of this discussion, let us assume that the following is our processor function.
```
@Bean
public Consumer<byte[]> processData() {
return s -> {
throw new RuntimeException();
};
}
```
This is a very trivial function that throws an exception for all the records that it processes, but you can take this function and extend it to any other similar situations.
In order to send the records in error to a DLT, we need to provide the following configuration.
```
spring.cloud.stream:
  bindings:
    processData-in-0:
      group: my-group
      destination: input-topic
  kafka:
    bindings:
      processData-in-0:
        consumer:
          enableDlq: true
          dlqName: input-topic-dlq
```
In order to activate DLQ, the application must provide a group name.
Anonymous consumers cannot use the DLQ facilities.
We also need to enable DLQ by setting the `enableDlq` property on the Kafka consumer binding to `true`, as shown above.
Finally, we can optionally provide the DLT name through the `dlqName` property on the Kafka consumer binding; otherwise it defaults to `error.input-topic.my-group` in this case.
Note that in the example consumer provided above, the type of the payload is `byte[]`.
By default, the DLQ producer in Kafka binder expects the payload of type `byte[]`.
If that is not the case, then we need to provide the configuration for the proper serializer.
For example, let us re-write the consumer function as below:
```
@Bean
public Consumer<String> processData() {
return s -> {
throw new RuntimeException();
};
}
```
Now we need to tell Spring Cloud Stream how we want to serialize the data when writing to the DLT.
Here is the modified configuration for this scenario:
```
spring.cloud.stream:
  bindings:
    processData-in-0:
      group: my-group
      destination: input-topic
  kafka:
    bindings:
      processData-in-0:
        consumer:
          enableDlq: true
          dlqName: input-topic-dlq
          dlqProducerProperties:
            configuration:
              value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
=== DLQ with Advanced Retry Options
==== Problem Statement
This is similar to the recipe above, but as a developer I would like to configure the way retries are handled.
==== Solution
If you followed the above recipe, then you get the default retry options built into the Kafka binder when the processing encounters an error.
By default, the binder retries for a maximum of 3 attempts, with a one-second initial delay, a back-off multiplier of 2.0, and a maximum delay of 10 seconds.
You can change all these configurations as below:
```
spring.cloud.stream.bindings.processData-in-0.consumer.maxAttempts
spring.cloud.stream.bindings.processData-in-0.consumer.backOffInitialInterval
spring.cloud.stream.bindings.processData-in-0.consumer.backOffMultiplier
spring.cloud.stream.bindings.processData-in-0.consumer.backOffMaxInterval
```
If you want, you can also provide a list of retryable exceptions by providing a map of boolean values.
For example,
```
spring.cloud.stream.bindings.processData-in-0.consumer.retryableExceptions.java.lang.IllegalStateException=true
spring.cloud.stream.bindings.processData-in-0.consumer.retryableExceptions.java.lang.IllegalArgumentException=false
```
By default, any exceptions not listed in the map above will be retried.
If that is not desired, then you can disable that by providing,
```
spring.cloud.stream.bindings.processData-in-0.consumer.defaultRetryable=false
```
You can also provide your own `RetryTemplate` and mark it as `@StreamRetryTemplate` which will be scanned and used by the binder.
This is useful when you want more sophisticated retry strategies and policies.
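A hedged sketch of such a bean (the retry policy values are illustrative):
```
@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    // Retry up to 5 times with a fixed two-second back off between attempts
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(2000L);
    retryTemplate.setBackOffPolicy(backOffPolicy);
    return retryTemplate;
}
```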
If you have multiple `@StreamRetryTemplate` beans, then you can specify which one your binding wants by using the property,
```
spring.cloud.stream.bindings.processData-in-0.consumer.retry-template-name=<your-retry-template-bean-name>
```
=== Handling Deserialization errors with DLQ
==== Problem Statement
I have a processor that encounters a deserialization exception in the Kafka consumer.
I would expect that the Spring Cloud Stream DLQ mechanism will catch that scenario, but it does not.
How can I handle this?
==== Solution
The normal DLQ mechanism offered by Spring Cloud Stream will not help when the Kafka consumer throws an irrecoverable deserialization exception.
This is because this exception happens even before the consumer's `poll()` method returns.
Spring for Apache Kafka project offers some great ways to help the binder with this situation.
Let us explore those.
Assuming this is our function:
```
@Bean
public Consumer<String> functionName() {
return s -> {
System.out.println(s);
};
}
```
It is a trivial function that takes a `String` parameter.
We want to bypass the message converters provided by Spring Cloud Stream and want to use native deserializers instead.
In the case of `String` types, it does not make much sense, but for more complex types like AVRO etc. you have to rely on external deserializers and therefore want to delegate the conversion to Kafka.
Now when the consumer receives the data, let us assume that there is a bad record that causes a deserialization error; maybe someone passed an `Integer` instead of a `String`, for example.
In that case, if you don't do something in the application, the exception will be propagated through the chain and your application will eventually exit.
In order to handle this, you can add a `ListenerContainerCustomizer` `@Bean` that configures a `SeekToCurrentErrorHandler`.
This `SeekToCurrentErrorHandler` is configured with a `DeadLetterPublishingRecoverer`.
We also need to configure an `ErrorHandlingDeserializer` for the consumer.
That sounds like a lot of complexity, but in reality it boils down to these three beans in this case.
```
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
return (container, dest, group) -> {
container.setErrorHandler(errorHandler);
};
}
```
```
@Bean
public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
return new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
}
```
```
@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
return new DeadLetterPublishingRecoverer(bytesTemplate);
}
```
Let us analyze each of them.
The first one is the `ListenerContainerCustomizer` bean that takes a `SeekToCurrentErrorHandler`.
The container is now customized with that particular error handler.
You can learn more about container customization https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_advanced_consumer_configuration[here].
The second bean is the `SeekToCurrentErrorHandler` that is configured with a publishing to a `DLT`.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#seek-to-current[here] for more details on `SeekToCurrentErrorHandler`.
The third bean is the `DeadLetterPublishingRecoverer` that is ultimately responsible for sending to the `DLT`.
By default, the `DLT` topic is named as the ORIGINAL_TOPIC_NAME.DLT.
You can change that though.
See the https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters[docs] for more details.
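For instance, a hedged variation of the recoverer bean above that sends every failed record to partition 0 of a custom topic (the topic name is an assumption) could look like this:
```
@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
    // Destination resolver: route all dead letters to partition 0 of "custom-dlt"
    return new DeadLetterPublishingRecoverer(bytesTemplate,
        (record, exception) -> new TopicPartition("custom-dlt", 0));
}
```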
We also need to configure an https://docs.spring.io/spring-kafka/docs/current/reference/html/#error-handling-deserializer[ErrorHandlingDeserializer] through application config.
The `ErrorHandlingDeserializer` delegates to the actual deserializer.
In case of errors, it sets key/value of the record to be null and includes the raw bytes of the message.
It then sets the exception in a header and passes this record to the listener, which then calls the registered error handler.
Following is the configuration required:
```
spring.cloud.stream:
  function:
    definition: functionName
  bindings:
    functionName-in-0:
      group: group-name
      destination: input-topic
      consumer:
        use-native-decoding: true
  kafka:
    bindings:
      functionName-in-0:
        consumer:
          enableDlq: true
          dlqName: dlq-topic
          dlqProducerProperties:
            configuration:
              value.serializer: org.apache.kafka.common.serialization.StringSerializer
          configuration:
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
```
We are providing the `ErrorHandlingDeserializer` through the `configuration` property on the binding.
We are also indicating that the actual deserializer to delegate is the `StringDeserializer`.
Keep in mind that none of the dlq properties above are relevant for the discussions in this recipe.
They are purely meant for addressing application-level errors.
=== Basic offset management in Kafka binder
==== Problem Statement
I want to write a Spring Cloud Stream Kafka consumer application, and I am not sure how it manages Kafka consumer offsets.
Can you explain?
==== Solution
We encourage you to read the https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current/reference/html/spring-cloud-stream-binder-kafka.html#reset-offsets[docs] section on this to get a thorough understanding of it.
Here it is in a gist:
Kafka supports two types of offsets to start with by default - `earliest` and `latest`.
Their semantics are self-explanatory from their names.
Assuming you are running the consumer for the first time.
If you miss the group.id in your Spring Cloud Stream application, then it becomes an anonymous consumer.
Whenever you have an anonymous consumer, the Spring Cloud Stream application by default starts from the `latest` available offset in the topic partition.
On the other hand, if you explicitly specify a group.id, then by default the Spring Cloud Stream application starts from the `earliest` available offset in the topic partition.
In both cases above (consumers with explicit groups and anonymous groups), the starting offset can be switched around by using the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.startOffset` and setting it to either `earliest` or `latest`.
Now, assume that you already ran the consumer before and now starting it again.
In this case, the starting offset semantics in the above case do not apply as the consumer finds an already committed offset for the consumer group (In the case of an anonymous consumer, although the application does not provide a group.id, the binder will auto generate one for you).
It simply picks up from the last committed offset onward.
This is true, even when you have a `startOffset` value provided.
However, you can override the default behavior where the consumer starts from the last committed offset by using the `resetOffsets` property.
In order to do that, set the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.resetOffsets` to `true` (which is `false` by default).
Then make sure you provide the `startOffset` value (either `earliest` or `latest`).
When you do that and then start the consumer application, each time it starts, it behaves as if it is starting for the first time and ignores any committed offsets for the partition.
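For example (the binding name is illustrative):
```
spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.resetOffsets: true
spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.startOffset: earliest
```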
=== Seeking to arbitrary offsets in Kafka
==== Problem Statement
Using Kafka binder, I know that it can set the offset to either `earliest` or `latest`, but I have a requirement to seek the offset to something in the middle, an arbitrary offset.
Is there a way to achieve this using the Spring Cloud Stream Kafka binder?
==== Solution
Previously we saw how Kafka binder allows you to tackle basic offset management.
By default, the binder does not allow you to rewind to an arbitrary offset, at least through the mechanism we saw in that recipe.
However, there are some low-level strategies that the binder provides to achieve this use case.
Let's explore them.
First of all, when you want to reset to an arbitrary offset other than `earliest` or `latest`, make sure to leave the `resetOffsets` configuration to its defaults, which is `false`.
Then you have to provide a custom bean of type `KafkaBindingRebalanceListener`, which will be injected into all consumer bindings.
It is an interface that comes with a few default methods, but here is the method that we are interested in:
```
/**
* Invoked when partitions are initially assigned or after a rebalance. Applications
* might only want to perform seek operations on an initial assignment. While the
* 'initial' argument is true for each thread (when concurrency is greater than 1),
* implementations should keep track of exactly which partitions have been sought.
* There is a race in that a rebalance could occur during startup and so a topic/
* partition that has been sought on one thread may be re-assigned to another
* thread and you may not wish to re-seek it at that time.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
* @param initial true if this is the initial assignment on the current thread.
*/
default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
Collection<TopicPartition> partitions, boolean initial) {
// do nothing
}
```
Let us look at the details.
In essence, this method will be invoked each time during the initial assignment for a topic partition or after a rebalance.
For better illustration, let us assume that our topic is `foo` and it has 4 partitions.
Initially, we are only starting a single consumer in the group and this consumer will consume from all partitions.
When the consumer starts for the first time, all 4 partitions are getting initially assigned.
However, we do not want the partitions to start consuming at the default offset (`earliest`, since we define a group); rather, we want each partition to consume after seeking to an arbitrary offset.
Imagine that you have a business case to consume from certain offsets as below.
```
Partition start offset
0 1000
1 2000
2 2000
3 1000
```
This could be achieved by implementing the above method as below.
```
@Override
public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer, Collection<TopicPartition> partitions, boolean initial) {
Map<TopicPartition, Long> topicPartitionOffset = new HashMap<>();
topicPartitionOffset.put(new TopicPartition("foo", 0), 1000L);
topicPartitionOffset.put(new TopicPartition("foo", 1), 2000L);
topicPartitionOffset.put(new TopicPartition("foo", 2), 2000L);
topicPartitionOffset.put(new TopicPartition("foo", 3), 1000L);
if (initial) {
partitions.forEach(tp -> {
if (topicPartitionOffset.containsKey(tp)) {
final Long offset = topicPartitionOffset.get(tp);
try {
consumer.seek(tp, offset);
}
catch (Exception e) {
// Handle exceptions carefully.
}
}
});
}
}
```
This is just a rudimentary implementation.
Real world use cases are much more complex than this and you need to adjust accordingly, but this certainly gives you a basic sketch.
When consumer `seek` fails, it may throw some runtime exceptions and you need to decide what to do in those cases.
==== What if we start a second consumer with the same group id?
When we add a second consumer, a rebalance will occur and some partitions will be moved around.
Let's say that the new consumer gets partitions `2` and `3`.
When this new Spring Cloud Stream consumer calls this `onPartitionsAssigned` method, it will see that this is the initial assignment for partitions `2` and `3` on this consumer.
Therefore, it will do the seek operation because of the conditional check on the `initial` argument.
In the case of the first consumer, it now only has partitions `0` and `1`.
However, for this consumer it was simply a rebalance event and not considered an initial assignment.
Thus, it will not re-seek to the given offsets because of the conditional check on the `initial` argument.
=== How do I manually acknowledge using Kafka binder?
==== Problem Statement
Using Kafka binder, I want to manually acknowledge messages in my consumer.
How do I do that?
==== Solution
By default, Kafka binder delegates to the default commit settings in Spring for Apache Kafka project.
The default `ackMode` in Spring Kafka is `batch`.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#committing-offsets[here] for more details on that.
There are situations in which you want to disable this default commit behavior and rely on manual commits.
Following steps allow you to do that.
Set the property `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.ackMode` to either `MANUAL` or `MANUAL_IMMEDIATE`.
When it is set like that, then there will be a header called `kafka_acknowledgment` (from `KafkaHeaders.ACKNOWLEDGMENT`) present in the message received by the consumer method.
For example, imagine this as your consumer method.
```
@Bean
public Consumer<Message<String>> myConsumer() {
return msg -> {
Acknowledgment acknowledgment = msg.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
if (acknowledgment != null) {
System.out.println("Acknowledgment provided");
acknowledgment.acknowledge();
}
};
}
```
Then you set the property `spring.cloud.stream.kafka.bindings.myConsumer-in-0.consumer.ackMode` to `MANUAL` or `MANUAL_IMMEDIATE`.
=== How do I override the default binding names in Spring Cloud Stream?
==== Problem Statement
Spring Cloud Stream creates default bindings based on the function definition and signature, but how do I override these to more domain friendly names?
==== Solution
Assume that following is your function signature.
```
@Bean
public Function<String, String> uppercase(){
...
}
```
By default, Spring Cloud Stream will create the bindings as below.
1. uppercase-in-0
2. uppercase-out-0
You can override these bindings with custom names by using the following properties.
```
spring.cloud.stream.function.bindings.uppercase-in-0=my-transformer-in
spring.cloud.stream.function.bindings.uppercase-out-0=my-transformer-out
```
After this, all binding properties must be set on the new names, `my-transformer-in` and `my-transformer-out`.
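For example (the topic name is illustrative):
```
spring.cloud.stream.bindings.my-transformer-in.destination: my-input-topic
```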
Here is another example with Kafka Streams and multiple inputs.
```
@Bean
public BiFunction<KStream<String, Order>, KTable<String, Account>, KStream<String, EnrichedOrder>> processOrder() {
...
}
```
By default, Spring Cloud Stream will create three different binding names for this function.
1. processOrder-in-0
2. processOrder-in-1
3. processOrder-out-0
You have to use these binding names each time you want to set some configuration on these bindings.
You don't like that, and you want to use more domain-friendly and readable binding names, for example, something like.
1. orders
2. accounts
3. enrichedOrders
You can easily do that by simply setting these three properties
1. spring.cloud.stream.function.bindings.processOrder-in-0=orders
2. spring.cloud.stream.function.bindings.processOrder-in-1=accounts
3. spring.cloud.stream.function.bindings.processOrder-out-0=enrichedOrders
Once you do that, it overrides the default binding names and any properties that you want to set on them must be on these new binding names.
=== How do I send a message key as part of my record?
==== Problem Statement
I need to send a key along with the payload of the record, is there a way to do that in Spring Cloud Stream?
==== Solution
It is often necessary to send the data as an associative structure, that is, as a Kafka record with both a key and a value.
Spring Cloud Stream allows you to do that in a straightforward manner.
Following is a basic blueprint for doing this, but you may want to adapt it to your particular use case.
Here is sample producer method (aka `Supplier`).
```
@Bean
public Supplier<Message<String>> supplier() {
return () -> MessageBuilder.withPayload("foo").setHeader(KafkaHeaders.MESSAGE_KEY, "my-foo").build();
}
```
This is a trivial function that sends a message with a `String` payload, but also with a key.
Note that we set the key as a message header using `KafkaHeaders.MESSAGE_KEY`.
If you want to use a header other than the default `kafka_messageKey` for the key, then specify this property in the configuration:
```
spring.cloud.stream.kafka.bindings.supplier-out-0.producer.messageKeyExpression=headers['my-special-key']
```
Note that we use the binding name `supplier-out-0` because it is derived from our function name; update it accordingly for your application.
Then, the value of this header is used as the key when the message is produced.
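Continuing the example, here is a minimal sketch of a supplier that populates the custom `my-special-key` header (the payload and key values are placeholders):
```
@Bean
public Supplier<Message<String>> supplier() {
    return () -> MessageBuilder.withPayload("foo")
            .setHeader("my-special-key", "my-foo")
            .build();
}
```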
=== How do I use native serializer and deserializer instead of message conversion done by Spring Cloud Stream?
==== Problem Statement
Instead of using the message converters in Spring Cloud Stream, I want to use native Serializer and Deserializer in Kafka.
By default, Spring Cloud Stream takes care of this conversion using its internal built-in message converters.
How can I bypass this and delegate the responsibility to Kafka?
==== Solution
This is straightforward to do.
All you have to do is provide the following property to enable native serialization.
```
spring.cloud.stream.bindings.<binding-name>.producer.useNativeEncoding: true
```
Then, you also need to set the serializers.
There are a couple of ways to do this.
```
spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
or by using the binder configuration.
```
spring.cloud.stream.kafka.binder.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.binder.configuration.value.serializer: org.apache.kafka.common.serialization.StringSerializer
```
When you use the binder-level configuration, it applies to all bindings, whereas setting them at the binding level applies only to that binding.
On the deserializing side, you just need to provide the deserializers as configuration.
For example,
```
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.configuration.key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.kafka.bindings.<binding-name>.consumer.configuration.value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
```
You can also set them at the binder level.
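For example, mirroring the serializer configuration above at the binder level:
```
spring.cloud.stream.kafka.binder.configuration.key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.kafka.binder.configuration.value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
```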
There is an optional property that you can set to force native decoding.
```
spring.cloud.stream.bindings.<binding-name>.consumer.useNativeDecoding: true
```
However, in the case of the Kafka binder, this is unnecessary, because by the time the record reaches the binder, Kafka has already deserialized it using the configured deserializers.
=== Explain how offset resetting works in the Kafka Streams binder
==== Problem Statement
By default, Kafka Streams binder always starts from the earliest offset for a new consumer.
Sometimes, it is beneficial or required by the application to start from the latest offset.
Kafka Streams binder allows you to do that.
==== Solution
Before we look at the solution, let us look at the following scenario.
```
@Bean
public BiConsumer<KStream<Object, Object>, KTable<Object, Object>> myBiConsumer() {
    return (s, t) -> s.join(t, ...)
    ...
}
```
We have a `BiConsumer` bean that requires two input bindings.
In this case, the first binding is for a `KStream` and the second one is for a `KTable`.
When running this application for the first time, by default, both bindings start from the `earliest` offset.
What if you want to start from the `latest` offset instead, due to some requirement?
You can do this by setting the following properties.
```
spring.cloud.stream.kafka.streams.bindings.myBiConsumer-in-0.consumer.startOffset: latest
spring.cloud.stream.kafka.streams.bindings.myBiConsumer-in-1.consumer.startOffset: latest
```
If you want only one binding to start from the `latest` offset and the other to consume from the default `earliest`, then leave the latter binding out of the configuration.
Keep in mind that, once there are committed offsets, these settings are *not* honored and the committed offsets take precedence.
=== Keeping track of successful sending of records (producing) in Kafka
==== Problem Statement
I have a Kafka producer application and I want to keep track of all my successful sends.
==== Solution
Let us assume that we have the following supplier in the application.
```
@Bean
public Supplier<Message<String>> supplier() {
    return () -> MessageBuilder.withPayload("foo").setHeader(KafkaHeaders.MESSAGE_KEY, "my-foo").build();
}
```
Then, we need to define a new `MessageChannel` bean to capture all the successful send information.
```
@Bean
public MessageChannel fooRecordChannel() {
    return new DirectChannel();
}
```
Next, define this property in the application configuration to provide the bean name for the `recordMetadataChannel`.
```
spring.cloud.stream.kafka.bindings.supplier-out-0.producer.recordMetadataChannel: fooRecordChannel
```
At this point, information about each successful send will be sent to the `fooRecordChannel`.
You can write an `IntegrationFlow` as below to see the information.
```
@Bean
public IntegrationFlow integrationFlow() {
    return f -> f.channel("fooRecordChannel")
            .handle((payload, messageHeaders) -> payload);
}
```
In the `handle` method, the payload is what was sent to Kafka, and the message headers contain a special key called `kafka_recordMetadata`.
Its value is a `RecordMetadata` that contains information such as the topic, the partition, and the current offset.
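For instance, here is a sketch of the same flow that also extracts the `RecordMetadata` from the headers and prints it (the flow name and the logging are illustrative):
```
@Bean
public IntegrationFlow recordMetadataFlow() {
    return f -> f.channel("fooRecordChannel")
            .handle((payload, messageHeaders) -> {
                // The binder populates this header after each successful send.
                RecordMetadata metadata = (RecordMetadata) messageHeaders.get("kafka_recordMetadata");
                System.out.println(metadata.topic() + "-" + metadata.partition() + "@" + metadata.offset());
                return payload;
            });
}
```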
=== Adding custom header mapper in Kafka
==== Problem Statement
I have a Kafka producer application that sets some headers, but they are missing in the consumer application. Why is that?
==== Solution
Under normal circumstances, this should be fine.
Imagine you have the following producer.
```
@Bean
public Supplier<Message<String>> supply() {
    return () -> MessageBuilder.withPayload("foo").setHeader("foo", "bar").build();
}
```
On the consumer side, you should still see the header "foo", and the following should not give you any issues.
```
@Bean
public Consumer<Message<String>> consume() {
    return s -> {
        final String foo = (String) s.getHeaders().get("foo");
        System.out.println(foo);
    };
}
```
If you provide a https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.3/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_binder_properties[custom header mapper] in the application, then this won't work.
Let's say you have an empty `KafkaHeaderMapper` in the application.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
        }
        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
        }
    };
}
```
If that is your implementation, then the `foo` header will be missing on the consumer side.
Chances are that you have some logic inside those `KafkaHeaderMapper` methods.
You need something like the following to populate the `foo` header.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
            final String foo = (String) headers.get("foo");
            target.add("foo", foo.getBytes());
        }
        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
            final Header foo = source.lastHeader("foo");
            target.put("foo", new String(foo.value()));
        }
    };
}
```
That will properly populate the `foo` header from the producer to consumer.
==== Special note on the id header
In Spring Cloud Stream, the `id` header is a special header, but some applications may want to use custom id headers, such as `custom-id`, `ID`, or `Id`.
The first one (`custom-id`) propagates from producer to consumer without any custom header mapper.
However, if you produce with a variant of the framework-reserved `id` header, such as `ID`, `Id`, or `iD`, then you will run into issues with the internals of the framework.
See this https://stackoverflow.com/questions/68412600/change-the-behaviour-in-spring-cloud-stream-make-header-matcher-case-sensitive[StackOverflow thread] for more context on this use case.
In that case, you must use a custom `KafkaHeaderMapper` to map the case-sensitive id header.
For example, let's say you have the following producer.
```
@Bean
public Supplier<Message<String>> supply() {
    return () -> MessageBuilder.withPayload("foo").setHeader("Id", "my-id").build();
}
```
The header `Id` above will be gone from the consuming side as it clashes with the framework `id` header.
You can provide a custom `KafkaHeaderMapper` to solve this issue.
```
@Bean
public KafkaHeaderMapper kafkaBinderHeaderMapper1() {
    return new KafkaHeaderMapper() {
        @Override
        public void fromHeaders(MessageHeaders headers, Headers target) {
            final String myId = (String) headers.get("Id");
            target.add("Id", myId.getBytes());
        }
        @Override
        public void toHeaders(Headers source, Map<String, Object> target) {
            final Header id = source.lastHeader("Id");
            target.put("Id", new String(id.value()));
        }
    };
}
```
By doing this, both `id` and `Id` headers will be available from the producer to the consumer side.
=== Producing to multiple topics in transaction
==== Problem Statement
How do I produce transactional messages to multiple Kafka topics?
For more context, see this https://stackoverflow.com/questions/68928091/dlq-bounded-retry-and-eos-when-producing-to-multiple-topics-using-spring-cloud[StackOverflow question].
==== Solution
Use the transactional support in the Kafka binder and then provide an `AfterRollbackProcessor`.
In order to produce to multiple topics, use the `StreamBridge` API.
Below are the code snippets for this:
```
@Autowired
StreamBridge bridge;

@Bean
Consumer<String> input() {
    return str -> {
        System.out.println(str);
        this.bridge.send("left", str.toUpperCase());
        this.bridge.send("right", str.toLowerCase());
        if (str.equals("Fail")) {
            throw new RuntimeException("test");
        }
    };
}

@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(BinderFactory binders) {
    return (container, dest, group) -> {
        ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
                MessageChannel.class)).getTransactionalProducerFactory();
        KafkaTemplate<byte[], byte[]> template = new KafkaTemplate<>(pf);
        DefaultAfterRollbackProcessor rollbackProcessor = rollbackProcessor(template);
        container.setAfterRollbackProcessor(rollbackProcessor);
    };
}

DefaultAfterRollbackProcessor rollbackProcessor(KafkaTemplate<byte[], byte[]> template) {
    return new DefaultAfterRollbackProcessor<>(
            new DeadLetterPublishingRecoverer(template), new FixedBackOff(2000L, 2L), template, true);
}
```
==== Required Configuration
```
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-
spring.cloud.stream.kafka.binder.required-acks=all
spring.cloud.stream.bindings.input-in-0.group=foo
spring.cloud.stream.bindings.input-in-0.destination=input
spring.cloud.stream.bindings.left.destination=left
spring.cloud.stream.bindings.right.destination=right
spring.cloud.stream.kafka.bindings.input-in-0.consumer.maxAttempts=1
```
In order to test this, you can use the following:
```
@Bean
public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
    return args -> {
        System.in.read();
        template.send("input", "Fail".getBytes());
        template.send("input", "Good".getBytes());
    };
}
```
Some important notes:
Please ensure that you don't have any DLQ settings in the application configuration, since we configure the DLT manually (by default, it is published to a topic named `input.DLT`, based on the initial consumer function).
Also, reset `maxAttempts` on the consumer binding to `1` in order to avoid retries by the binder.
In the example above, each record is tried a maximum of three times (the initial attempt plus the two retries configured in the `FixedBackOff`).
See the https://stackoverflow.com/questions/68928091/dlq-bounded-retry-and-eos-when-producing-to-multiple-topics-using-spring-cloud[StackOverflow thread] for more details on how to test this code.
If you are using Spring Cloud Stream to test it by adding more consumer functions, make sure to set the `isolation-level` on the consumer binding to `read-committed`.
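For example, assuming a test consumer binding named `verify-in-0` (the binding name is illustrative), one way to do that is to pass the Kafka consumer property directly:
```
spring.cloud.stream.kafka.bindings.verify-in-0.consumer.configuration.isolation.level: read_committed
```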
This https://stackoverflow.com/questions/68941306/spring-cloud-stream-database-transaction-does-not-roll-back[StackOverflow thread] is also related to this discussion.
=== Pitfalls to avoid when running multiple pollable consumers
==== Problem Statement
How can I run multiple instances of the pollable consumers and generate unique `client.id` for each instance?
==== Solution
Assume that you have the following definition:
```
spring.cloud.stream.pollable-source: foo
spring.cloud.stream.bindings.foo-in-0.group: my-group
```
When running the application, the Kafka consumer generates a client.id (something like `consumer-my-group-1`).
For each instance of the application that is running, this `client.id` will be the same, causing unexpected issues.
In order to fix this, you can add the following property on each instance of the application:
```
spring.cloud.stream.kafka.bindings.foo-in-0.consumer.configuration.client.id=${client.id}
```
See this https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1139[GitHub issue] for more details.
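The `${client.id}` placeholder then needs to resolve to a different value on each running instance; for example (the value is purely illustrative), one instance could define:
```
client.id: pollable-consumer-instance-1
```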

pom.xml

@@ -2,21 +2,29 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.4-SNAPSHOT</version>
<version>3.2.1</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>3.0.3</version>
<version>3.1.0</version>
<relativePath />
</parent>
<scm>
<url>https://github.com/spring-cloud/spring-cloud-stream-binder-kafka</url>
<connection>scm:git:git://github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</connection>
<developerConnection>
scm:git:ssh://git@github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</developerConnection>
<tag>HEAD</tag>
</scm>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.6.8</spring-kafka.version>
<spring-integration-kafka.version>5.4.7</spring-integration-kafka.version>
<kafka.version>2.6.2</kafka.version>
<spring-cloud-schema-registry.version>1.1.4-SNAPSHOT</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.1.4-SNAPSHOT</spring-cloud-stream.version>
<spring-kafka.version>2.8.0</spring-kafka.version>
<spring-integration-kafka.version>5.5.5</spring-integration-kafka.version>
<kafka.version>3.0.0</kafka.version>
<spring-cloud-stream.version>3.2.1</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
@@ -27,7 +35,7 @@
<module>spring-cloud-stream-binder-kafka-core</module>
<module>spring-cloud-stream-binder-kafka-streams</module>
<module>docs</module>
</modules>
</modules>
<dependencyManagement>
<dependencies>
@@ -119,12 +127,6 @@
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<version>${spring-cloud-schema-registry.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>
@@ -139,10 +141,6 @@
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
@@ -165,6 +163,16 @@
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
<compilerArgument>-parameters</compilerArgument>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
@@ -175,74 +183,91 @@
<profiles>
<profile>
<id>spring</id>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
<profile>
<id>coverage</id>
<activation>
<property>
<name>env.TRAVIS</name>
<value>true</value>
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.7.9</version>
<executions>
<execution>
<id>agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>report</id>
<phase>test</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</pluginRepository>
</pluginRepositories>
<reporting>
<plugins>
<plugin>


@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.4-SNAPSHOT</version>
<version>3.2.1</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>


@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.4-SNAPSHOT</version>
<version>3.2.1</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>


@@ -1,5 +1,5 @@
/*
* Copyright 2015-2018 the original author or authors.
* Copyright 2015-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -163,24 +163,44 @@ public class KafkaBinderConfigurationProperties {
private void moveCertsToFileSystemIfNecessary() {
try {
final String trustStoreLocation = this.configuration.get("ssl.truststore.location");
if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.truststore.location", fileSystemLocation);
}
final String keyStoreLocation = this.configuration.get("ssl.keystore.location");
if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.keystore.location", fileSystemLocation);
}
moveBrokerCertsIfApplicable();
moveSchemaRegistryCertsIfApplicable();
}
catch (Exception e) {
throw new IllegalStateException(e);
}
}
private void moveBrokerCertsIfApplicable() throws IOException {
final String trustStoreLocation = this.configuration.get("ssl.truststore.location");
if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.truststore.location", fileSystemLocation);
}
final String keyStoreLocation = this.configuration.get("ssl.keystore.location");
if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("ssl.keystore.location", fileSystemLocation);
}
}
private void moveSchemaRegistryCertsIfApplicable() throws IOException {
String trustStoreLocation = this.configuration.get("schema.registry.ssl.truststore.location");
if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("schema.registry.ssl.truststore.location", fileSystemLocation);
}
final String keyStoreLocation = this.configuration.get("schema.registry.ssl.keystore.location");
if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
// Overriding the value with absolute filesystem path.
this.configuration.put("schema.registry.ssl.keystore.location", fileSystemLocation);
}
}
private String moveCertToFileSystem(String classpathLocation, String fileSystemLocation) throws IOException {
File targetFile;
final String tempDir = System.getProperty("java.io.tmpdir");


@@ -210,6 +210,12 @@ public class KafkaConsumerProperties {
*/
private boolean txCommitRecovered = true;
/**
* CommonErrorHandler bean name per consumer binding.
* @since 3.2
*/
private String commonErrorHandlerBeanName;
/**
* @return if each record needs to be acknowledged.
*
@@ -529,4 +535,11 @@ public class KafkaConsumerProperties {
this.txCommitRecovered = txCommitRecovered;
}
public String getCommonErrorHandlerBeanName() {
return commonErrorHandlerBeanName;
}
public void setCommonErrorHandlerBeanName(String commonErrorHandlerBeanName) {
this.commonErrorHandlerBeanName = commonErrorHandlerBeanName;
}
}


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -142,4 +142,22 @@ public class KafkaBinderConfigurationPropertiesTest {
assertThat(configuration.get("ssl.keystore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.keystore").toString());
}
@Test
public void testCertificateFilesAreMovedForSchemaRegistryConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
final Map<String, String> configuration = kafkaBinderConfigurationProperties.getConfiguration();
configuration.put("schema.registry.ssl.truststore.location", "classpath:testclient.truststore");
configuration.put("schema.registry.ssl.keystore.location", "classpath:testclient.keystore");
kafkaBinderConfigurationProperties.setCertificateStoreDirectory("target");
kafkaBinderConfigurationProperties.getKafkaConnectionString();
assertThat(configuration.get("schema.registry.ssl.truststore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.truststore").toString());
assertThat(configuration.get("schema.registry.ssl.keystore.location")).isEqualTo(
Paths.get(Files.currentFolder().toString(), "target", "testclient.keystore").toString());
}
}


@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.4-SNAPSHOT</version>
<version>3.2.1</version>
</parent>
<properties>
@@ -73,12 +73,7 @@
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
</dependency>
<!-- Following dependencies are only provided for testing and won't be packaged with the binder apps-->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<scope>test</scope>
</dependency>
<!-- Following dependency is only provided for testing and won't be packaged with the binder apps-->
<dependency>
<groupId>org.apache.avro</groupId>
<artifactId>avro</artifactId>


@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -161,6 +161,8 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
@@ -173,6 +175,8 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
}
@@ -267,9 +271,9 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
Assert.state(!bindingConfig.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG),
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG + " cannot be overridden at the binding level; "
+ "use multiple binders instead");
streamConfigGlobalProperties.putAll(bindingConfig);
// We will only add the per binding configuration to the current streamConfiguration and not the global one.
streamConfiguration
.putAll(extendedConsumerProperties.getConfiguration());
.putAll(bindingConfig);
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.


@@ -0,0 +1,51 @@
/*
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
import org.springframework.cloud.stream.config.BindingHandlerAdvise.MappingsProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* {@link EnableAutoConfiguration Auto-configuration} for extended binding metadata for Kafka Streams.
*
* @author Chris Bono
* @since 3.2
*/
@Configuration(proxyBeanMethods = false)
public class ExtendedBindingHandlerMappingsProviderAutoConfiguration {
@Bean
public MappingsProvider kafkaStreamsExtendedPropertiesDefaultMappingsProvider() {
return () -> {
Map<ConfigurationPropertyName, ConfigurationPropertyName> mappings = new HashMap<>();
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.default"));
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.bindings"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.default"));
return mappings;
};
}
}


@@ -16,6 +16,8 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -58,16 +60,18 @@ public class GlobalKTableBinder extends
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
private final KafkaStreamsRegistry kafkaStreamsRegistry;
// @checkstyle:on
public GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue, KafkaStreamsRegistry kafkaStreamsRegistry) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@Override
@@ -97,10 +101,32 @@ public class GlobalKTableBinder extends
return true;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
GlobalKTableBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
super.stop();
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
GlobalKTableBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
GlobalKTableBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}


@@ -59,10 +59,11 @@ public class GlobalKTableBinderConfiguration {
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsRegistry kafkaStreamsRegistry) {
GlobalKTableBinder globalKTableBinder = new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue);
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue, kafkaStreamsRegistry);
globalKTableBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return globalKTableBinder;


@@ -1,5 +1,5 @@
/*
* Copyright 2018-2020 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -30,6 +30,8 @@ import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
@@ -51,6 +53,7 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
* @author Renwei Han
* @author Serhii Siryi
* @author Nico Pommerening
* @since 2.1.0
*/
public class InteractiveQueryService {
@@ -91,15 +94,16 @@ public class InteractiveQueryService {
retryTemplate.setBackOffPolicy(backOffPolicy);
retryTemplate.setRetryPolicy(retryPolicy);
KafkaStreams contextSpecificKafkaStreams = getThreadContextSpecificKafkaStreams();
return retryTemplate.execute(context -> {
T store = null;
final Set<KafkaStreams> kafkaStreams = InteractiveQueryService.this.kafkaStreamsRegistry.getKafkaStreams();
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
Throwable throwable = null;
while (iterator.hasNext()) {
if (contextSpecificKafkaStreams != null) {
try {
store = iterator.next().store(storeName, storeType);
store = contextSpecificKafkaStreams.store(
StoreQueryParameters.fromNameAndType(
storeName, storeType));
}
catch (InvalidStateStoreException e) {
// pass through..
@@ -109,10 +113,56 @@ public class InteractiveQueryService {
if (store != null) {
return store;
}
throw new IllegalStateException("Error when retrieving state store: " + storeName, throwable);
else if (contextSpecificKafkaStreams != null) {
LOG.warn("Store " + storeName
+ " could not be found in Streams context, falling back to all known Streams instances");
}
final Set<KafkaStreams> kafkaStreams = kafkaStreamsRegistry.getKafkaStreams();
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
while (iterator.hasNext()) {
try {
store = iterator.next()
.store(StoreQueryParameters.fromNameAndType(
storeName, storeType));
}
catch (InvalidStateStoreException e) {
// pass through..
throwable = e;
}
}
if (store != null) {
return store;
}
throw new IllegalStateException(
"Error when retrieving state store: " + storeName,
throwable);
});
}
/**
* Retrieves the current {@link KafkaStreams} context if executing Thread is created by a Streams App (contains a matching application id in Thread's name).
*
* @return KafkaStreams instance associated with Thread
*/
private KafkaStreams getThreadContextSpecificKafkaStreams() {
return this.kafkaStreamsRegistry.getKafkaStreams().stream()
.filter(this::filterByThreadName).findAny().orElse(null);
}
/**
* Checks if the supplied {@link KafkaStreams} instance belongs to the calling Thread by matching the Thread's name with the Streams Application Id.
*
* @param streams {@link KafkaStreams} instance to filter
* @return true if Streams Instance is associated with Thread
*/
private boolean filterByThreadName(KafkaStreams streams) {
String applicationId = kafkaStreamsRegistry.streamBuilderFactoryBean(
streams).getStreamsConfiguration()
.getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
// TODO: is there some better way to find out if a Stream App created the Thread?
return Thread.currentThread().getName().contains(applicationId);
}
/**
* Gets the current {@link HostInfo} that the calling kafka streams application is
* running on.


@@ -22,6 +22,8 @@ import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;
@@ -78,16 +80,19 @@ class KStreamBinder extends
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsRegistry kafkaStreamsRegistry;
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver) {
KeyValueSerdeResolver keyValueSerdeResolver, KafkaStreamsRegistry kafkaStreamsRegistry) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@Override
@@ -125,10 +130,32 @@ class KStreamBinder extends
return true;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
KStreamBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
super.stop();
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
KStreamBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
KStreamBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}
@@ -178,10 +205,32 @@ class KStreamBinder extends
return false;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
KStreamBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
super.stop();
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
KStreamBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
KStreamBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}


@@ -59,10 +59,10 @@ public class KStreamBinderConfiguration {
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties, KafkaStreamsRegistry kafkaStreamsRegistry) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties,
kafkaTopicProvisioner, KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver);
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver, kafkaStreamsRegistry);
kStreamBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return kStreamBinder;


@@ -16,6 +16,8 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -44,13 +46,10 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
*/
class KTableBinder extends
// @checkstyle:off
AbstractBinder<KTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements
ExtendedPropertiesBinder<KTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
@@ -62,12 +61,15 @@ class KTableBinder extends
// @checkstyle:on
private final KafkaStreamsRegistry kafkaStreamsRegistry;
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue, KafkaStreamsRegistry kafkaStreamsRegistry) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
@Override
@@ -100,10 +102,32 @@ class KTableBinder extends
return true;
}
@Override
public synchronized void start() {
if (!streamsBuilderFactoryBean.isRunning()) {
super.start();
KTableBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
}
}
}
@Override
public synchronized void stop() {
super.stop();
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
if (streamsBuilderFactoryBean.isRunning()) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
super.stop();
KTableBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
KTableBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
}
@@ -111,9 +135,7 @@ class KTableBinder extends
@Override
protected Binding<KTable<Object, Object>> doBindProducer(String name,
KTable<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
throw new UnsupportedOperationException(
"No producer level binding is allowed for KTable");
}


@@ -59,9 +59,10 @@ public class KTableBinderConfiguration {
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties,
KafkaStreamsRegistry kafkaStreamsRegistry) {
KTableBinder kTableBinder = new KTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue);
kafkaTopicProvisioner, kafkaStreamsBindingInformationCatalogue, kafkaStreamsRegistry);
kTableBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
return kTableBinder;
}


@@ -19,6 +19,7 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.time.Duration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
@@ -31,8 +32,8 @@ import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.TaskMetadata;
import org.apache.kafka.streams.processor.ThreadMetadata;
import org.apache.kafka.streams.TaskMetadata;
import org.apache.kafka.streams.ThreadMetadata;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
@@ -118,7 +119,12 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
}
else {
boolean up = true;
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
final Set<KafkaStreams> kafkaStreams = kafkaStreamsRegistry.getKafkaStreams();
Set<KafkaStreams> allKafkaStreams = new HashSet<>(kafkaStreams);
if (this.configurationProperties.isIncludeStoppedProcessorsForHealthCheck()) {
allKafkaStreams.addAll(kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().values());
}
for (KafkaStreams kStream : allKafkaStreams) {
if (isKafkaStreams25) {
up &= kStream.state().isRunningOrRebalancing();
}
@@ -156,7 +162,8 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
}
if (isRunningResult) {
for (ThreadMetadata metadata : kafkaStreams.localThreadsMetadata()) {
final Set<ThreadMetadata> threadMetadata = kafkaStreams.metadataForLocalThreads();
for (ThreadMetadata metadata : threadMetadata) {
perAppdIdDetails.put("threadName", metadata.threadName());
perAppdIdDetails.put("threadState", metadata.threadState());
perAppdIdDetails.put("adminClientId", metadata.adminClientId());
@@ -172,8 +179,19 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
}
else {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
details.put(applicationId, String.format("The processor with application.id %s is down", applicationId));
String applicationId = null;
if (streamsBuilderFactoryBean != null) {
applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
}
else {
final Map<String, KafkaStreams> stoppedKafkaStreamsPerBinding = kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams();
for (String appId : stoppedKafkaStreamsPerBinding.keySet()) {
if (stoppedKafkaStreamsPerBinding.get(appId).equals(kafkaStreams)) {
applicationId = appId;
}
}
}
details.put(applicationId, String.format("The processor with application.id %s is down. Current state: %s", applicationId, kafkaStreams.state()));
}
return details;
}


@@ -1,5 +1,5 @@
/*
* Copyright 2017-2020 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -98,6 +98,9 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
private static final String GLOBALKTABLE_BINDER_TYPE = "globalktable";
private static final String CONSUMER_PROPERTIES_PREFIX = "consumer.";
private static final String PRODUCER_PROPERTIES_PREFIX = "producer.";
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(
@@ -266,14 +269,15 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
if (!ObjectUtils.isEmpty(configProperties.getConfiguration())) {
properties.putAll(configProperties.getConfiguration());
}
Map<String, Object> mergedConsumerConfig = configProperties.mergedConsumerConfiguration();
if (!ObjectUtils.isEmpty(mergedConsumerConfig)) {
properties.putAll(mergedConsumerConfig);
}
Map<String, Object> mergedProducerConfig = configProperties.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedProducerConfig)) {
properties.putAll(mergedProducerConfig);
}
Map<String, Object> mergedConsumerConfig = new HashMap<>(configProperties.mergedConsumerConfiguration());
//Adding consumer. prefix if they are missing (in order to differentiate them from other property categories such as stream, producer etc.)
addPrefix(properties, mergedConsumerConfig, CONSUMER_PROPERTIES_PREFIX);
Map<String, Object> mergedProducerConfig = new HashMap<>(configProperties.mergedProducerConfiguration());
//Adding producer. prefix if they are missing (in order to differentiate them from other property categories such as stream, consumer etc.)
addPrefix(properties, mergedProducerConfig, PRODUCER_PROPERTIES_PREFIX);
if (!properties.containsKey(StreamsConfig.REPLICATION_FACTOR_CONFIG)) {
properties.put(StreamsConfig.REPLICATION_FACTOR_CONFIG,
(int) configProperties.getReplicationFactor());
@@ -282,6 +286,16 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
Collectors.toMap((e) -> String.valueOf(e.getKey()), Map.Entry::getValue));
}
private void addPrefix(Properties properties, Map<String, Object> mergedConsProdConfig, String prefix) {
Map<String, Object> mergedConfigs = new HashMap<>();
for (String key : mergedConsProdConfig.keySet()) {
mergedConfigs.put(key.startsWith(prefix) ? key : prefix + key, mergedConsProdConfig.get(key));
}
if (!ObjectUtils.isEmpty(mergedConfigs)) {
properties.putAll(mergedConfigs);
}
}
@Bean
public KStreamStreamListenerResultAdapter kstreamStreamListenerResultAdapter() {
return new KStreamStreamListenerResultAdapter();
@@ -398,8 +412,8 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KafkaStreamsBindingInformationCatalogue catalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
@Nullable KafkaStreamsMicrometerListener listener) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener);
@Nullable KafkaStreamsMicrometerListener listener, KafkaProperties kafkaProperties) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener, kafkaProperties);
}
@Bean


@@ -26,6 +26,7 @@ import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
@@ -54,12 +55,16 @@ public class KafkaStreamsBindingInformationCatalogue {
private final Map<String, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanPerBinding = new HashMap<>();
private final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> consumerPropertiesPerSbfb = new HashMap<>();
private final Map<Object, ResolvableType> outboundKStreamResolvables = new HashMap<>();
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
private final Map<Object, String> bindingNamesPerTarget = new HashMap<>();
private final Map<String, KafkaStreams> previousKafkaStreamsPerApplicationId = new HashMap<>();
private final Map<StreamsBuilderFactoryBean, List<ProducerFactory<byte[], byte[]>>> dlqProducerFactories = new HashMap<>();
/**
@@ -137,11 +142,19 @@ public class KafkaStreamsBindingInformationCatalogue {
this.streamsBuilderFactoryBeanPerBinding.put(binding, streamsBuilderFactoryBean);
}
void addConsumerPropertiesPerSbfb(StreamsBuilderFactoryBean streamsBuilderFactoryBean, ConsumerProperties consumerProperties) {
this.consumerPropertiesPerSbfb.computeIfAbsent(streamsBuilderFactoryBean, k -> new ArrayList<>());
this.consumerPropertiesPerSbfb.get(streamsBuilderFactoryBean).add(consumerProperties);
}
public Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> getConsumerPropertiesPerSbfb() {
return this.consumerPropertiesPerSbfb;
}
Map<String, StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeanPerBinding() {
return this.streamsBuilderFactoryBeanPerBinding;
}
void addOutboundKStreamResolvable(Object key, ResolvableType outboundResolvable) {
this.outboundKStreamResolvables.put(key, outboundResolvable);
}
@@ -203,4 +216,35 @@ public class KafkaStreamsBindingInformationCatalogue {
}
producerFactories.add(producerFactory);
}
/**
* Cache the previous KafkaStreams instance for the application.id when the binding is stopped through the actuator.
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
*
* @param applicationId application.id
* @param kafkaStreams {@link KafkaStreams} object
*/
public void addPreviousKafkaStreamsForApplicationId(String applicationId, KafkaStreams kafkaStreams) {
this.previousKafkaStreamsPerApplicationId.put(applicationId, kafkaStreams);
}
/**
* Remove the previously cached KafkaStreams object.
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
*
* @param applicationId application.id
*/
public void removePreviousKafkaStreamsForApplicationId(String applicationId) {
this.previousKafkaStreamsPerApplicationId.remove(applicationId);
}
/**
* Get all KafkaStreams objects that were stopped through the actuator bindings endpoint.
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
*
* @return map of stopped KafkaStreams objects, keyed by application.id
*/
public Map<String, KafkaStreams> getStoppedKafkaStreams() {
return this.previousKafkaStreamsPerApplicationId;
}
}

View File

@@ -39,6 +39,7 @@ import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
@@ -298,7 +299,13 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
outboundResolvableType, (Object[]) result, streamsBuilderFactoryBean);
}
else {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType, (KStream) result, outboundDefinitionIterator);
if (KTable.class.isAssignableFrom(result.getClass())) {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
outboundResolvableType : resolvableType.getGeneric(1), ((KTable) result).toStream(), outboundDefinitionIterator);
}
else {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType, (KStream) result, outboundDefinitionIterator);
}
}
}
}
@@ -337,8 +344,14 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
outboundResolvableType, (Object[]) result, streamsBuilderFactoryBean);
}
else {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
outboundResolvableType : resolvableType.getGeneric(1), (KStream) result, outboundDefinitionIterator);
if (KTable.class.isAssignableFrom(result.getClass())) {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
outboundResolvableType : resolvableType.getGeneric(1), ((KTable) result).toStream(), outboundDefinitionIterator);
}
else {
handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
outboundResolvableType : resolvableType.getGeneric(1), (KStream) result, outboundDefinitionIterator);
}
}
}
}
@@ -516,6 +529,8 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
final Class<?> valueClass =

View File

@@ -17,12 +17,12 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
@@ -37,9 +37,9 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
*/
public class KafkaStreamsRegistry {
private Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new HashMap<>();
private final Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new ConcurrentHashMap<>();
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();
private final Set<KafkaStreams> kafkaStreams = ConcurrentHashMap.newKeySet();
Set<KafkaStreams> getKafkaStreams() {
Set<KafkaStreams> currentlyRunningKafkaStreams = new HashSet<>();
@@ -62,6 +62,11 @@ public class KafkaStreamsRegistry {
this.streamsBuilderFactoryBeanMap.put(kafkaStreams, streamsBuilderFactoryBean);
}
void unregisterKafkaStreams(KafkaStreams kafkaStreams) {
this.kafkaStreams.remove(kafkaStreams);
this.streamsBuilderFactoryBeanMap.remove(kafkaStreams);
}
/**
*
* @param kafkaStreams {@link KafkaStreams} object

View File

@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.lang.reflect.Method;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
@@ -320,6 +321,9 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(inboundName, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(inboundName));
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {
@@ -372,12 +376,12 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
builder = Stores
.windowStoreBuilder(
Stores.persistentWindowStore(spec.getName(),
spec.getRetention(), 3, spec.getLength(), false),
Duration.ofMillis(spec.getRetention()), Duration.ofMillis(spec.getLength()), false),
keySerde, valueSerde);
break;
case SESSION:
builder = Stores.sessionStoreBuilder(Stores.persistentSessionStore(
spec.getName(), spec.getRetention()), keySerde, valueSerde);
spec.getName(), Duration.ofMillis(spec.getRetention())), keySerde, valueSerde);
break;
default:
throw new UnsupportedOperationException(

View File

@@ -16,9 +16,15 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
@@ -38,7 +44,7 @@ import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;
*
* @author Soby Chacko
*/
class StreamsBuilderFactoryManager implements SmartLifecycle {
public class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
@@ -50,19 +56,23 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
private volatile boolean running;
private final KafkaProperties kafkaProperties;
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
KafkaStreamsMicrometerListener listener) {
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
KafkaStreamsMicrometerListener listener,
KafkaProperties kafkaProperties) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
this.listener = listener;
this.kafkaProperties = kafkaProperties;
}
@Override
public boolean isAutoStartup() {
return true;
return this.kafkaProperties == null || this.kafkaProperties.getStreams().isAutoStartup();
}
@Override
@@ -79,13 +89,24 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
int n = 0;
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
if (this.listener != null) {
streamsBuilderFactoryBean.addListener(this.listener);
}
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
// By default, we shut down the client if there is an uncaught exception in the application.
// Users can override this by customizing SBFB. See this issue for more details:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(exception ->
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
// Start the stream, unless auto-startup was disabled on one of its consumer bindings.
final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> bindingServicePropertiesPerSbfb =
this.kafkaStreamsBindingInformationCatalogue.getConsumerPropertiesPerSbfb();
final List<ConsumerProperties> consumerProperties = bindingServicePropertiesPerSbfb.get(streamsBuilderFactoryBean);
final boolean autoStartupDisabledOnAtLeastOneConsumerBinding = consumerProperties.stream().anyMatch(consumerProperties1 -> !consumerProperties1.isAutoStartup());
if (!autoStartupDisabledOnAtLeastOneConsumerBinding) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
}
}
if (this.kafkaStreamsBinderMetrics != null) {
this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
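As the comment above notes, the SHUTDOWN_CLIENT response is only a default; an application can customize the StreamsBuilderFactoryBean to install its own handler. A minimal sketch, assuming a Spring configuration class in the application (the bean name and the REPLACE_THREAD choice are illustrative; it follows the same StreamsBuilderFactoryBeanConfigurer/KafkaStreamsCustomizer pattern used by the health indicator test further down in this compare view):

    import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;

    @Bean
    public StreamsBuilderFactoryBeanConfigurer uncaughtExceptionHandlerConfigurer() {
        // Customize the created KafkaStreams instance so a failed stream thread is
        // replaced instead of the whole client being shut down.
        return factoryBean -> factoryBean.setKafkaStreamsCustomizer(kafkaStreams ->
                kafkaStreams.setUncaughtExceptionHandler(exception ->
                        StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD));
    }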

View File

@@ -145,7 +145,8 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFacto
}
if (outboundArgument != null && outboundArgument.getRawClass() != null && (!outboundArgument.isArray() &&
outboundArgument.getRawClass().isAssignableFrom(KStream.class))) {
(outboundArgument.getRawClass().isAssignableFrom(KStream.class) ||
outboundArgument.getRawClass().isAssignableFrom(KTable.class)))) { //Allowing both KStream and KTable on the outbound.
// if the type is array, we need to do a late binding as we don't know the number of
// output bindings at this point in the flow.
@@ -157,12 +158,15 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFacto
if (outputBindingsIter.hasNext()) {
outputBinding = outputBindingsIter.next();
}
}
else {
outputBinding = String.format("%s-%s-0", this.functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX);
}
Assert.isTrue(outputBinding != null, "output binding is not inferred.");
// We only allow KStream targets on the outbound. If the user provides a KTable,
// we still use the KStreamBinder to send it through the outbound.
// In that case, the KTable is converted to a KStream (via toStream()) before sending.
// See KafkaStreamsFunctionProcessor#setupFunctionInvokerForKafkaStreams for details.
KafkaStreamsBindableProxyFactory.this.outputHolders.put(outputBinding,
new BoundTargetHolder(getBindingTargetFactory(KStream.class)
.createOutput(outputBinding), true));
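In practical terms, this means a functional processor can now return a KTable and still be bound through the regular KStream output binding. A minimal sketch of such a processor, assuming a Boot application using the usual <functionName>-in-0/-out-0 binding names (the bean name, store name and serdes are illustrative; the imports match those already used by the tests in this compare view):

    @Bean
    public Function<KStream<String, Long>, KTable<String, Long>> aggregate() {
        // The returned KTable is converted to a KStream (toStream()) by the binder
        // before it is sent to the destination bound to aggregate-out-0.
        return input -> input
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
                .reduce(Long::sum, Materialized.as("aggregate-store"));
    }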

View File

@@ -19,14 +19,17 @@ package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.TreeMap;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.Stream;
@@ -94,6 +97,7 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)));
final List<String> collect = concat.collect(Collectors.toList());
collect.removeIf(s -> Arrays.stream(EXCLUDE_FUNCTIONS).anyMatch(t -> t.equals(s)));
collect.removeIf(Pattern.compile(".*_registration").asPredicate());
onlySingleFunction = collect.size() == 1;
collect.stream()
@@ -107,6 +111,9 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
final String definition = streamFunctionProperties.getDefinition();
final String[] functionUnits = StringUtils.hasText(definition) ? definition.split(";") : new String[]{};
final Set<String> kafkaStreamsMethodNames = new HashSet<>(kafkaStreamsOnlyResolvableTypes.keySet());
kafkaStreamsMethodNames.addAll(this.resolvableTypeMap.keySet());
if (functionUnits.length == 0) {
for (String s : getResolvableTypes().keySet()) {
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(s)};
@@ -123,21 +130,30 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
ResolvableType[] resolvableTypes = new ResolvableType[composedFunctions.length];
int i = 0;
boolean nonKafkaStreamsFunctionsFound = false;
for (String split : composedFunctions) {
derivedNameFromComposed = derivedNameFromComposed.concat(split);
resolvableTypes[i++] = getResolvableTypes().get(split);
if (!kafkaStreamsMethodNames.contains(split)) {
nonKafkaStreamsFunctionsFound = true;
break;
}
}
if (!nonKafkaStreamsFunctionsFound) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, derivedNameFromComposed, resolvableTypes, rootBeanDefinition);
}
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, derivedNameFromComposed, resolvableTypes, rootBeanDefinition);
}
else {
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(functionUnit)};
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, functionUnit, resolvableTypes, rootBeanDefinition);
// Ensure that the function unit is a Kafka Streams function
if (kafkaStreamsMethodNames.contains(functionUnit)) {
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(functionUnit)};
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, functionUnit, resolvableTypes, rootBeanDefinition);
}
}
}
}
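The net effect of the kafkaStreamsMethodNames check is that only function units whose types are actually Kafka Streams types get a KafkaStreamsBindableProxyFactory registered; anything else in the definition is simply skipped by this post-processor. A hedged illustration (the function names are made up): given

    spring.cloud.function.definition=process;uppercase

where process is a Function<KStream<String, String>, KStream<String, String>> and uppercase is a plain Function<String, String>, a proxy factory is registered for process only; uppercase is ignored here, and a composed unit containing a non-Kafka-Streams function is skipped as well.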

View File

@@ -73,15 +73,16 @@ public class KafkaStreamsFunctionProcessorInvoker {
}
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(derivedNameFromComposed[0])).findFirst();
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(composedFunctions[0]),
derivedNameFromComposed[0], proxyFactory.get(), methods.get(derivedNameFromComposed[0]), resolvableTypeMap.get(composedFunctions[composedFunctions.length - 1]), composedFunctions);
proxyFactory.ifPresent(kafkaStreamsBindableProxyFactory ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(composedFunctions[0]),
derivedNameFromComposed[0], kafkaStreamsBindableProxyFactory, methods.get(derivedNameFromComposed[0]), resolvableTypeMap.get(composedFunctions[composedFunctions.length - 1]), composedFunctions));
}
else {
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(functionUnit)).findFirst();
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(functionUnit), functionUnit,
proxyFactory.get(), methods.get(functionUnit), null);
proxyFactory.ifPresent(kafkaStreamsBindableProxyFactory ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(functionUnit), functionUnit,
kafkaStreamsBindableProxyFactory, methods.get(functionUnit), null));
}
}
}

View File

@@ -74,6 +74,7 @@ public class KafkaStreamsBinderConfigurationProperties
*/
private DeserializationExceptionHandler deserializationExceptionHandler;
private boolean includeStoppedProcessorsForHealthCheck;
public Map<String, Functions> getFunctions() {
return functions;
@@ -127,6 +128,14 @@ public class KafkaStreamsBinderConfigurationProperties
this.deserializationExceptionHandler = deserializationExceptionHandler;
}
public boolean isIncludeStoppedProcessorsForHealthCheck() {
return includeStoppedProcessorsForHealthCheck;
}
public void setIncludeStoppedProcessorsForHealthCheck(boolean includeStoppedProcessorsForHealthCheck) {
this.includeStoppedProcessorsForHealthCheck = includeStoppedProcessorsForHealthCheck;
}
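Because includeStoppedProcessorsForHealthCheck is a plain boolean field it defaults to false, so the KafkaStreams instances cached by KafkaStreamsBindingInformationCatalogue when a binding is stopped are only considered by the health indicator once the application opts in. A hedged example of enabling it, using the same binder-level prefix as the other binder properties in this compare view:

    spring.cloud.stream.kafka.streams.binder.includeStoppedProcessorsForHealthCheck=true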
public static class StateStoreRetry {
private int maxAttempts = 1;

View File

@@ -75,7 +75,9 @@ import org.springframework.util.MimeTypeUtils;
* @param <T> type of the object to marshall
* @author Soby Chacko
* @since 3.0
* @deprecated in favor of schema registry providers other than Spring Cloud Schema Registry. See the motivation described above.
*/
@Deprecated
public class MessageConverterDelegateSerde<T> implements Serde<T> {
private static final String VALUE_CLASS_HEADER = "valueClass";

View File

@@ -1,4 +1,5 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.ExtendedBindingHandlerMappingsProviderAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpointAutoConfiguration

View File

@@ -0,0 +1,83 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.junit.jupiter.api.Test;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.runner.ApplicationContextRunner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import static org.assertj.core.api.Assertions.assertThat;
/**
* Tests for {@link ExtendedBindingHandlerMappingsProviderAutoConfiguration}.
*/
class ExtendedBindingHandlerMappingsProviderAutoConfigurationTests {
private final ApplicationContextRunner contextRunner = new ApplicationContextRunner()
.withUserConfiguration(KafkaStreamsTestApp.class)
.withPropertyValues(
"spring.cloud.stream.kafka.streams.default.consumer.application-id: testApp123",
"spring.cloud.stream.kafka.streams.default.consumer.consumed-as: default-consumer",
"spring.cloud.stream.kafka.streams.default.consumer.materialized-as: default-materializer",
"spring.cloud.stream.kafka.streams.default.producer.produced-as: default-producer",
"spring.cloud.stream.kafka.streams.default.producer.key-serde: default-foo");
@Test
void defaultsUsedWhenNoCustomBindingProperties() {
this.contextRunner.run((context) -> {
assertThat(context)
.hasNotFailed()
.hasSingleBean(KafkaStreamsExtendedBindingProperties.class);
KafkaStreamsExtendedBindingProperties extendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);
assertThat(extendedBindingProperties.getExtendedConsumerProperties("process-in-0"))
.hasFieldOrPropertyWithValue("applicationId", "testApp123")
.hasFieldOrPropertyWithValue("consumedAs", "default-consumer")
.hasFieldOrPropertyWithValue("materializedAs", "default-materializer");
assertThat(extendedBindingProperties.getExtendedProducerProperties("process-out-0"))
.hasFieldOrPropertyWithValue("producedAs", "default-producer")
.hasFieldOrPropertyWithValue("keySerde", "default-foo");
});
}
@Test
void defaultsRespectedWhenCustomBindingProperties() {
this.contextRunner
.withPropertyValues(
"spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.consumed-as: custom-consumer",
"spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.produced-as: custom-producer")
.run((context) -> {
assertThat(context)
.hasNotFailed()
.hasSingleBean(KafkaStreamsExtendedBindingProperties.class);
KafkaStreamsExtendedBindingProperties extendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);
assertThat(extendedBindingProperties.getExtendedConsumerProperties("process-in-0"))
.hasFieldOrPropertyWithValue("applicationId", "testApp123")
.hasFieldOrPropertyWithValue("consumedAs", "custom-consumer")
.hasFieldOrPropertyWithValue("materializedAs", "default-materializer");
assertThat(extendedBindingProperties.getExtendedProducerProperties("process-out-0"))
.hasFieldOrPropertyWithValue("producedAs", "custom-producer")
.hasFieldOrPropertyWithValue("keySerde", "default-foo");
});
}
@EnableAutoConfiguration
static class KafkaStreamsTestApp {
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2020 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -18,6 +18,7 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -27,9 +28,11 @@ import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.QueryableStoreTypes;
@@ -37,6 +40,7 @@ import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.mockito.Mockito;
@@ -51,6 +55,7 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -66,6 +71,7 @@ import static org.mockito.internal.verification.VerificationModeFactory.times;
/**
* @author Soby Chacko
* @author Gary Russell
* @author Nico Pommerening
*/
public class KafkaStreamsInteractiveQueryIntegrationTests {
@@ -103,6 +109,9 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
kafkaStreamsRegistry.registerKafkaStreams(mock);
Mockito.when(mock.isRunning()).thenReturn(true);
Properties mockProperties = new Properties();
mockProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "fooApp");
Mockito.when(mock.getStreamsConfiguration()).thenReturn(mockProperties);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);
@@ -117,10 +126,12 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
Mockito.verify(mockKafkaStreams, times(3)).store("foo", storeType);
Mockito.verify(mockKafkaStreams, times(3))
.store(StoreQueryParameters.fromNameAndType("foo", storeType));
}
@Test
@Ignore
public void testKstreamBinderWithPojoInputAndStringOuput() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@@ -208,7 +219,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value.id, value))
.groupByKey(Serialized.with(new Serdes.IntegerSerde(),
.groupByKey(Grouped.with(new Serdes.IntegerSerde(),
new JsonSerde<>(Product.class)))
.count(Materialized.as("prod-id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(null,
@@ -220,6 +231,11 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
return new Foo(interactiveQueryService);
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
static class Foo {
InteractiveQueryService interactiveQueryService;

View File

@@ -16,6 +16,8 @@
package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
import java.util.Map;
import java.util.Properties;
import java.util.function.Consumer;
import org.apache.kafka.common.security.JaasUtils;
@@ -31,8 +33,11 @@ import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
/**
* @author Soby Chacko
*/
@@ -96,6 +101,43 @@ public class KafkaStreamsBinderBootstrapTest {
applicationContext.close();
}
@Test
@SuppressWarnings("unchecked")
public void testStreamConfigGlobalProperties_GH1149() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.function.definition=input1;input2;input3",
"--spring.cloud.stream.kafka.streams.bindings.input1-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foo",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.configuration.spring.json.value.type.method=com.test.MyClass",
"--spring.cloud.stream.kafka.streams.bindings.input3-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
// Make sure that the global stream configs do not contain the binding-specific config set on the second function.
assertThat(streamConfigGlobalProperties.containsKey("spring.json.value.type.method")).isFalse();
// Make sure that only the input2 function gets the binding-specific property set on it.
final StreamsBuilderFactoryBean input1SBFB = applicationContext.getBean("&stream-builder-input1", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration1 = input1SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration1.containsKey("spring.json.value.type.method")).isFalse();
final StreamsBuilderFactoryBean input2SBFB = applicationContext.getBean("&stream-builder-input2", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration2 = input2SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration2.containsKey("spring.json.value.type.method")).isTrue();
final StreamsBuilderFactoryBean input3SBFB = applicationContext.getBean("&stream-builder-input3", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration3 = input3SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration3.containsKey("spring.json.value.type.method")).isFalse();
applicationContext.close();
}
@SpringBootApplication
static class SimpleKafkaStreamsApplication {

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
@@ -48,6 +49,9 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class KafkaStreamsBinderWordCountBranchesFunctionTests {
@ClassRule
@@ -179,22 +183,30 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
public static class WordCountProcessorApplication {
@Bean
@SuppressWarnings("unchecked")
@SuppressWarnings({"unchecked"})
public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
return input -> {
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.split()
.branch(isEnglish)
.branch(isFrench)
.branch(isSpanish)
.noDefaultBranch();
return stringKStreamMap.values().toArray(new KStream[0]);
};
}
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
@@ -51,6 +52,7 @@ import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.StreamsBuilderFactoryManager;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;
import org.springframework.cloud.stream.binding.InputBindingLifecycle;
import org.springframework.cloud.stream.binding.OutputBindingLifecycle;
@@ -73,7 +75,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1", "counts-2");
"counts", "counts-1", "counts-2", "counts-5", "counts-6");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@@ -89,7 +91,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2", "counts-5", "counts-6");
}
@AfterClass
@@ -111,6 +113,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.application-id=testKstreamWordCountFunction",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.consumerProperties.request.timeout.ms=29000", //for testing ...binder.consumerProperties
"--spring.cloud.stream.kafka.streams.binder.consumerProperties.consumer.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer",
"--spring.cloud.stream.kafka.streams.binder.producerProperties.max.block.ms=90000", //for testing ...binder.producerProperties
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -143,8 +146,9 @@ public class KafkaStreamsBinderWordCountFunctionTests {
//verify that ...binder.consumerProperties and ...binder.producerProperties work.
Map<String, Object> streamConfigGlobalProperties = (Map<String, Object>) context.getBean("streamConfigGlobalProperties");
assertThat(streamConfigGlobalProperties.get("request.timeout.ms")).isEqualTo("29000");
assertThat(streamConfigGlobalProperties.get("max.block.ms")).isEqualTo("90000");
assertThat(streamConfigGlobalProperties.get("consumer.request.timeout.ms")).isEqualTo("29000");
assertThat(streamConfigGlobalProperties.get("consumer.value.deserializer")).isEqualTo("org.apache.kafka.common.serialization.StringDeserializer");
assertThat(streamConfigGlobalProperties.get("producer.max.block.ms")).isEqualTo("90000");
InputBindingLifecycle inputBindingLifecycle = context.getBean(InputBindingLifecycle.class);
final Collection<Binding<Object>> inputBindings = (Collection<Binding<Object>>) new DirectFieldAccessor(inputBindingLifecycle)
@@ -174,22 +178,23 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
@Test
public void testKstreamWordCountFunctionWithGeneratedApplicationId() throws Exception {
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer() {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.bindings.process-in-0.destination=words-5",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-5",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-5", "counts-5");
}
}
@@ -201,6 +206,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.application-id=testKstreamWordCountFunctionWithCustomProducerStreamPartitioner",
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
@@ -232,6 +238,90 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
}
@Test
public void testKstreamBinderAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.kafka.streams.auto-startup=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-3",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-3",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryManager streamsBuilderFactoryManager = context.getBean(StreamsBuilderFactoryManager.class);
assertThat(streamsBuilderFactoryManager.isAutoStartup()).isFalse();
assertThat(streamsBuilderFactoryManager.isRunning()).isFalse();
}
}
@Test
public void testKstreamIndividualBindingAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-4",
"--spring.cloud.stream.bindings.process-in-0.consumer.auto-startup=false",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-4",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean(StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean.isRunning()).isFalse();
streamsBuilderFactoryBean.start();
assertThat(streamsBuilderFactoryBean.isRunning()).isTrue();
}
}
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-6",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-6",
"--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-6");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-6");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
@@ -310,7 +400,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(5000))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(key.key(), new WordCount(key.key(), value,
@@ -339,4 +429,20 @@ public class KafkaStreamsBinderWordCountFunctionTests {
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
}
}
@EnableAutoConfiguration
static class OutboundNullApplication {
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
return input -> input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foobar-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}

View File

@@ -41,7 +41,13 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -70,7 +76,7 @@ public class StreamToGlobalKTableFunctionTests {
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.function.bindings.process-in-0=order",
@@ -89,7 +95,44 @@ public class StreamToGlobalKTableFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"StreamToGlobalKTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-2.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Test certain ancillary topic-creation configuration on the KStream and GlobalKTable binders.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("process-in-0");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicy).isEqualTo("compact");
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-2");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyY).isEqualTo("compact");
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,

View File

@@ -16,10 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
@@ -38,17 +40,28 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.StreamJoined;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -173,9 +186,8 @@ public class StreamToTableJoinFunctionTests {
}
}
private void runTest(SpringApplication app, Consumer<String, Long> consumer) {
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
@@ -187,6 +199,8 @@ public class StreamToTableJoinFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Input 1: Region per user (multiple records allowed per user).
@@ -256,13 +270,37 @@ public class StreamToTableJoinFunctionTests {
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
// Test certain ancillary topic-creation configuration on the KTable and KStream binders.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("process-out-0");
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyOutput).isEqualTo("compact");
}
finally {
consumer.close();
}
}
@Test
// @Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest() throws Exception {
SpringApplication app = new SpringApplication(BiFunctionCountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@@ -398,6 +436,34 @@ public class StreamToTableJoinFunctionTests {
}
}
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
/**
* Tuple for a region and its associated number of clicks.
*/
@@ -439,9 +505,14 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
.toStream()));
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
}
@EnableAutoConfiguration
@@ -456,9 +527,14 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
.toStream());
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
}
@EnableAutoConfiguration
@@ -475,4 +551,29 @@ public class StreamToTableJoinFunctionTests {
}
}
@EnableAutoConfiguration
public static class TrivialKTableApp {
@Bean
public java.util.function.Consumer<KTable<String, String>> process() {
return inputTable -> inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}
@EnableAutoConfiguration
public static class JoinProcessor {
@Bean
public BiConsumer<KStream<String, String>, KStream<String, String>> testProcessor() {
return (input1Stream, input2Stream) -> input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(Duration.ofMillis(5)),
StreamJoined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
}

View File

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
@@ -24,9 +25,9 @@ import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -254,8 +255,8 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(5000)).count(Materialized.as("foo-WordCounts-x"))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000))).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}

View File

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
@@ -261,7 +262,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("id-count-store-x")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id, value));
}

View File

@@ -25,6 +25,7 @@ import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.apache.kafka.streams.kstream.KStream;
import org.assertj.core.util.Lists;
import org.junit.Assert;
@@ -45,6 +46,9 @@ import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderHealthIndicator;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.KafkaStreamsCustomizer;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -178,7 +182,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
embeddedKafka.consumeFromEmbeddedTopics(consumer, topics);
KafkaTestUtils.getRecords(consumer, 1000);
TimeUnit.SECONDS.sleep(2);
TimeUnit.SECONDS.sleep(5);
checkHealth(context, expected);
}
finally {
@@ -281,6 +285,19 @@ public class KafkaStreamsBinderHealthIndicatorTests {
});
}
@Bean
public StreamsBuilderFactoryBeanConfigurer customizer() {
return factoryBean -> {
factoryBean.setKafkaStreamsCustomizer(new KafkaStreamsCustomizer() {
@Override
public void customize(KafkaStreams kafkaStreams) {
kafkaStreams.setUncaughtExceptionHandler(exception ->
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
}
});
};
}
}
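The configurer added above installs a StreamsUncaughtExceptionHandler on the KafkaStreams instance built by the binder, so a failing stream thread shuts the whole client down rather than being replaced. The same wiring condensed to lambdas (bean and class names are illustrative):

import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;

public class UncaughtExceptionHandlerSketch {

    // Respond to any uncaught stream-thread exception by shutting down the client.
    @Bean
    public StreamsBuilderFactoryBeanConfigurer streamsCustomizer() {
        return factoryBean -> factoryBean.setKafkaStreamsCustomizer(kafkaStreams ->
                kafkaStreams.setUncaughtExceptionHandler(exception ->
                        StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT));
    }
}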
public interface KafkaStreamsProcessorX {

View File

@@ -26,9 +26,9 @@ import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
@@ -42,6 +42,8 @@ import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -162,11 +164,16 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.count(Materialized.as("WordCounts")).toStream()
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.count(Materialized.as("WordCounts-tKWCWSIAP0")).toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key, value)));
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
}
static class WordCount {

View File

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
@@ -23,9 +24,9 @@ import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -130,9 +131,9 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
public KStream<Integer, Long> process(KStream<Object, Product> input) {
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
.groupByKey(Grouped.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("id-count-store-x")).toStream()
.map((key, value) -> {
return new KeyValue<>(key.key().id, value);

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,6 +21,7 @@ import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.Properties;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -42,10 +43,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.test.util.TestUtils;
@@ -57,7 +54,6 @@ import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
@@ -66,11 +62,11 @@ import static org.assertj.core.api.Assertions.assertThat;
* @author Soby Chacko
* @author Gary Russell
*/
public class KafkaStreamsBinderWordCountIntegrationTests {
public class KafkaStreamsBinderTombstoneTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1");
"counts-1");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -85,7 +81,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts-1");
}
@AfterClass
@@ -93,31 +89,6 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
consumer.close();
}
@Test
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer()
throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
}
}
@Test
public void testSendToTombstone()
throws Exception {
@@ -127,24 +98,22 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words-1",
"--spring.cloud.stream.bindings.output.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.bindings.input.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.bindings.process-in-0.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-WordCountProcessorApplication-process", StreamsBuilderFactoryBean.class);
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
assertThat(kafkaStreams).isNotNull();
// Ensure that concurrency settings are mapped to number of stream task
@@ -200,26 +169,21 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class WordCountProcessorApplication {
@StreamListener
@SendTo("output")
public KStream<?, WordCount> process(
@Input("input") KStream<Object, String> input) {
@Bean
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))));
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}
@Bean

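This file also shows the move from @StreamListener/@SendTo to the functional model: the bindings become process-in-0/process-out-0 and the StreamsBuilderFactoryBean is now registered as "&stream-builder-process". A trimmed-down sketch of such a function bean (the body is a placeholder, not the word-count topology above):

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;

public class FunctionalBindingSketch {

    // For a bean named "process", Spring Cloud Stream derives the bindings
    // process-in-0 and process-out-0, so properties such as
    //   spring.cloud.stream.bindings.process-in-0.destination=words-1
    //   spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=...
    // replace the input/output binding names used by the removed @StreamListener variant.
    @Bean
    public Function<KStream<Object, String>, KStream<Object, String>> process() {
        return input -> input.mapValues(String::toUpperCase);
    }
}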
View File

@@ -25,9 +25,9 @@ import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -175,7 +175,7 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));

View File

@@ -16,6 +16,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
@@ -257,7 +258,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
public StoreBuilder mystore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("mystate",
3L, 3, 3L, false), Serdes.String(),
Duration.ofMillis(3), Duration.ofMillis(3), false), Serdes.String(),
Serdes.String());
}
}
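Here the removed long/segments overload of Stores.persistentWindowStore gives way to the Duration-based one. A self-contained sketch of the updated StoreBuilder (the 3 ms retention and window size mirror the test values):

import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;

public class WindowStoreSketch {

    // The old factory took (name, retentionMs, numSegments, windowSizeMs, retainDuplicates);
    // the Duration overload drops the segment count and takes retention and window size directly.
    public StoreBuilder<WindowStore<String, String>> mystore() {
        return Stores.windowStoreBuilder(
                Stores.persistentWindowStore("mystate",
                        Duration.ofMillis(3), Duration.ofMillis(3), false),
                Serdes.String(), Serdes.String());
    }
}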

View File

@@ -16,15 +16,16 @@
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@@ -108,7 +109,7 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean,
"cleanupConfig", CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isFalse();
assertThat(cleanup.cleanupOnStop()).isTrue();
assertThat(cleanup.cleanupOnStop()).isFalse();
}
finally {
context.close();
@@ -137,9 +138,9 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
return input.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(new JsonSerde<>(Product.class),
.groupByKey(Grouped.with(new JsonSerde<>(Product.class),
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.windowedBy(TimeWindows.of(Duration.ofMillis(5000)))
.count(Materialized.as("id-count-store")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id,
"Count for product with ID 123: " + value));

View File

@@ -1,146 +0,0 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class OutboundValueNullSkippedConversionTest {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(
OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class OutboundNullApplication {
@StreamListener
@SendTo("output")
public KStream<?, KafkaStreamsBinderWordCountIntegrationTests.WordCount> process(
@Input("input") KStream<Object, String> input) {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}

View File

@@ -37,8 +37,8 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.function.context.converter.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.function.context.converter.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
@@ -146,7 +146,7 @@ public class PerRecordAvroContentTypeTests {
// Convert the byte[] received back to avro object and verify that it is
// the same as the one we sent ^^.
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter();
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
Message<?> receivedMessage = MessageBuilder.withPayload(value)
.setHeader("contentType",

View File

@@ -1,383 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class StreamToGlobalKTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"enriched-order");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<Long, EnrichedOrder> consumer;
@Test
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(
StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.input-x.destination=customers",
"--spring.cloud.stream.bindings.input-y.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToGlobalKTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("input");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicy).isEqualTo("compact");
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-y");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyY).isEqualTo("compact");
Map<String, Object> senderPropsCustomer = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(
senderPropsCustomer);
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer,
true);
template.setDefaultTopic("customers");
for (long i = 0; i < 5; i++) {
final Customer customer = new Customer();
customer.setName("customer-" + i);
template.sendDefault(i, customer);
}
Map<String, Object> senderPropsProduct = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(
senderPropsProduct);
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct,
true);
productTemplate.setDefaultTopic("products");
for (long i = 0; i < 5; i++) {
final Product product = new Product();
product.setName("product-" + i);
productTemplate.sendDefault(i, product);
}
Map<String, Object> senderPropsOrder = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(
senderPropsOrder);
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
orderTemplate.setDefaultTopic("orders");
for (long i = 0; i < 5; i++) {
final Order order = new Order();
order.setCustomerId(i);
order.setProductId(i);
orderTemplate.sendDefault(i, order);
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
JsonDeserializer.class);
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"org.springframework.cloud.stream.binder.kafka.streams.integration."
+ "StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "enriched-order");
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<Long, EnrichedOrder>> enrichedOrders = new ArrayList<>();
do {
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<Long, EnrichedOrder> record : records) {
enrichedOrders.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < 5 && (System.currentTimeMillis() - start) < 30000);
assertThat(count == 5).isTrue();
assertThat(enrichedOrders.size() == 5).isTrue();
enrichedOrders.sort(Comparator.comparing(o -> o.key));
for (int i = 0; i < 5; i++) {
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders
.get(i);
assertThat(enrichedOrderKeyValue.key == i).isTrue();
EnrichedOrder enrichedOrder = enrichedOrderKeyValue.value;
assertThat(enrichedOrder.getOrder().customerId == i).isTrue();
assertThat(enrichedOrder.getOrder().productId == i).isTrue();
assertThat(enrichedOrder.getCustomer().name.equals("customer-" + i))
.isTrue();
assertThat(enrichedOrder.getProduct().name.equals("product-" + i))
.isTrue();
}
pfCustomer.destroy();
pfProduct.destroy();
pfOrder.destroy();
consumer.close();
}
finally {
context.close();
}
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("input-x")
GlobalKTable<?, ?> inputX();
@Input("input-y")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
public static class OrderEnricherApplication {
@StreamListener
@SendTo("output")
public KStream<Long, EnrichedOrder> process(
@Input("input") KStream<Long, Order> ordersStream,
@Input("input-x") GlobalKTable<Long, Customer> customers,
@Input("input-y") GlobalKTable<Long, Product> products) {
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(
customers, (orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order));
return customerOrdersStream.join(products,
(orderId, customerOrder) -> customerOrder.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
});
}
}
static class Order {
long customerId;
long productId;
public long getCustomerId() {
return customerId;
}
public void setCustomerId(long customerId) {
this.customerId = customerId;
}
public long getProductId() {
return productId;
}
public void setProductId(long productId) {
this.productId = productId;
}
}
static class Customer {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class Product {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class EnrichedOrder {
Product product;
Customer customer;
Order order;
public Product getProduct() {
return product;
}
public void setProduct(Product product) {
this.product = product;
}
public Customer getCustomer() {
return customer;
}
public void setCustomer(Customer customer) {
this.customer = customer;
}
public Order getOrder() {
return order;
}
public void setOrder(Order order) {
this.order = order;
}
}
private static class CustomerOrder {
private final Customer customer;
private final Order order;
CustomerOrder(final Customer customer, final Order order) {
this.customer = customer;
this.order = order;
}
long productId() {
return order.getProductId();
}
}
}
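StreamToGlobalKTableJoinIntegrationTests is removed outright along with its @EnableBinding interface. In the functional model used by the tests that replace the annotation-based ones in this compare, a processor with one KStream and two GlobalKTable inputs would typically be written as a curried function; the sketch below uses placeholder String payloads, and the bean name and the enrich-in-0/1/2, enrich-out-0 bindings are illustrative assumptions rather than part of this diff:

import java.util.function.Function;

import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;

public class GlobalKTableJoinFunctionalSketch {

    // Curried functions express more than two inputs: the outer argument binds to
    // enrich-in-0 (the stream), the nested ones to enrich-in-1 and enrich-in-2
    // (the global tables), and the returned stream to enrich-out-0.
    @Bean
    public Function<KStream<Long, String>,
            Function<GlobalKTable<Long, String>,
                    Function<GlobalKTable<Long, String>, KStream<Long, String>>>> enrich() {
        return orders -> customers -> products ->
                orders.join(customers, (orderId, order) -> orderId,
                                (order, customer) -> order + "/" + customer)
                        .join(products, (orderId, orderAndCustomer) -> orderId,
                                (orderAndCustomer, product) -> orderAndCustomer + "/" + product);
    }
}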

View File

@@ -1,497 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class StreamToTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"output-topic-1", "output-topic-2", "user-clicks-2", "user-regions-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testStreamToTable() throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-1",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-1",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-1",
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("output");
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyOutput).isEqualTo("compact");
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-1");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 2: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L), new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L), new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L), new KeyValue<>("eve", 78L),
new KeyValue<>("alice", 40L), new KeyValue<>("fang", 99L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-1");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 109L),
new KeyValue<>("asia", 124L));
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest()
throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-2");
// Produce data first to the input topic to test the startOffset setting on the
// binding (which is set to earliest below).
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-2");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
// Thread.sleep(10000L);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=helloxyz-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Thread.sleep(1000L);
// Input 2: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-2");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks1 = Arrays.asList(
new KeyValue<>("bob", 4L), new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L), new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L), new KeyValue<>("fang", 99L));
for (KeyValue<String, Long> keyValue : userClicks1) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 56L),
new KeyValue<>("asia", 124L),
// 1000 alice entries which were there in the topic before the
// consumer started.
// Since we set the startOffset to earliest for the topic, it will
// read them,
// but the join fails to associate with a valid region, thus UNKNOWN.
new KeyValue<>("UNKNOWN", 1000L));
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
System.out.println("foobar: " + record.key() + "::" + record.value());
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
// TODO: Matched count is 3 and not 4 (expectedClicksPerRegion.size()) when running with full suite. Investigate why.
// TODO: This behavior is only observed after the Spring Kafka upgrade to 2.5.0 and kafka client to 2.5.
// TODO: Note that the test passes fine as a single test.
assertThat(count).matches(
matchedCount -> matchedCount == expectedClicksPerRegion.size() - 1 || matchedCount == expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).containsAnyElementsOf(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(
TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
public static class CountClicksPerRegionApplication {
@StreamListener
@SendTo("output")
public KStream<String, Long> process(
@Input("input") KStream<String, Long> userClicksStream,
@Input("input-x") KTable<String, String> userRegionsTable) {
return userClicksStream
.leftJoin(userRegionsTable,
(clicks, region) -> new RegionWithClicks(
region == null ? "UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(
regionWithClicks.getRegion(), regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream();
}
//This forces the state stores to be cleaned up before running the test.
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(true, false);
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class TrivialKTableApp {
@StreamListener("input-y")
public void process(KTable<String, String> inputTable) {
inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}
interface KafkaStreamsProcessorX extends KafkaStreamsProcessor {
@Input("input-x")
KTable<?, ?> inputX();
}
interface KafkaStreamsProcessorY {
@Input("input-y")
KTable<?, ?> inputY();
}
/**
* Tuple for a region and its associated number of clicks.
*/
private static final class RegionWithClicks {
private final String region;
private final long clicks;
RegionWithClicks(String region, long clicks) {
if (region == null || region.isEmpty()) {
throw new IllegalArgumentException("region must be set");
}
if (clicks < 0) {
throw new IllegalArgumentException("clicks must not be negative");
}
this.region = region;
this.clicks = clicks;
}
public String getRegion() {
return region;
}
public long getClicks() {
return clicks;
}
}
interface BindingsForTwoKStreamJoinTest {
String INPUT_1 = "input_1";
String INPUT_2 = "input_2";
@Input(INPUT_1)
KStream<String, String> input_1();
@Input(INPUT_2)
KStream<String, String> input_2();
}
@EnableBinding(BindingsForTwoKStreamJoinTest.class)
@EnableAutoConfiguration
public static class JoinProcessor {
@StreamListener
public void testProcessor(
@Input(BindingsForTwoKStreamJoinTest.INPUT_1) KStream<String, String> input1Stream,
@Input(BindingsForTwoKStreamJoinTest.INPUT_2) KStream<String, String> input2Stream) {
input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
Joined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
}
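The annotation-based StreamToTableJoinIntegrationTests is deleted here; its click-counting topology lives on in the functional StreamToTableJoinFunctionTests whose fragments appear at the top of this compare. As a rough sketch, the same join expressed as a BiFunction bean (class name, the process-in-0/process-in-1/process-out-0 binding layout and the store name are illustrative):

import java.util.function.BiFunction;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

import org.springframework.context.annotation.Bean;

public class StreamToTableJoinFunctionalSketch {

    // First argument binds to process-in-0 (clicks per user), the second to
    // process-in-1 (region per user), and the result to process-out-0.
    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> process() {
        return (userClicksStream, userRegionsTable) -> userClicksStream
                .leftJoin(userRegionsTable,
                        (clicks, region) -> new KeyValue<String, Long>(
                                region == null ? "UNKNOWN" : region, clicks),
                        Joined.with(Serdes.String(), Serdes.Long(), null))
                // Re-key by region: the joined KeyValue already carries (region, clicks).
                .map((user, regionAndClicks) -> regionAndClicks)
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
                .reduce(Long::sum, Materialized.as("clicks-per-region-sketch"))
                .toStream();
    }
}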

View File

@@ -1,236 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
*/
public class WordCountMultipleBranchesIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "foo", "bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "foo", "bar");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamWordCountWithStringInputAndPojoOuput() throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value().contains("\"word\":\"english\",\"count\":1")).isTrue();
template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value().contains("\"word\":\"french\",\"count\":2")).isTrue();
template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value().contains("\"word\":\"spanish\",\"count\":3")).isTrue();
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@StreamListener("input")
@SendTo({ "output1", "output2", "output3" })
@SuppressWarnings("unchecked")
public KStream<?, WordCount>[] process(KStream<Object, String> input) {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value).windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-multi")).toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
}
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
}

View File

@@ -21,8 +21,8 @@ import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.function.context.converter.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.function.context.converter.avro.AvroSchemaServiceManagerImpl;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;

View File

@@ -1,74 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.UUID;
import com.example.Sensor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.Test;
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.converter.MessageConverter;
import static org.assertj.core.api.Assertions.assertThat;
/**
* Refer {@link MessageConverterDelegateSerde} for motivations.
*
* @author Soby Chacko
*/
public class MessageConverterDelegateSerdeTest {
@Test
@SuppressWarnings("unchecked")
public void testCompositeNonNativeSerdeUsingAvroContentType() {
Random random = new Random();
Sensor sensor = new Sensor();
sensor.setId(UUID.randomUUID().toString() + "-v1");
sensor.setAcceleration(random.nextFloat() * 10);
sensor.setVelocity(random.nextFloat() * 100);
sensor.setTemperature(random.nextFloat() * 50);
List<MessageConverter> messageConverters = new ArrayList<>();
messageConverters.add(new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl()));
CompositeMessageConverterFactory compositeMessageConverterFactory = new CompositeMessageConverterFactory(
messageConverters, new ObjectMapper());
MessageConverterDelegateSerde messageConverterDelegateSerde = new MessageConverterDelegateSerde(
compositeMessageConverterFactory.getMessageConverterForAllRegistered());
Map<String, Object> configs = new HashMap<>();
configs.put("valueClass", Sensor.class);
configs.put("contentType", "application/avro");
messageConverterDelegateSerde.configure(configs, false);
final byte[] serialized = messageConverterDelegateSerde.serializer().serialize(null,
sensor);
final Object deserialized = messageConverterDelegateSerde.deserializer()
.deserialize(null, serialized);
assertThat(deserialized).isEqualTo(sensor);
}
}

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.4-SNAPSHOT</version>
<version>3.2.1</version>
</parent>
<dependencies>
@@ -75,6 +75,11 @@
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
</dependency>
<dependency>
<groupId>org.awaitility</groupId>
<artifactId>awaitility</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>

View File

@@ -201,10 +201,12 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBe
for (AbstractMessageListenerContainer<?, ?> container : listenerContainers) {
Map<String, Object> containerDetails = new HashMap<>();
boolean isRunning = container.isRunning();
if (!isRunning) {
boolean isOk = container.isInExpectedState();
if (!isOk) {
status = Status.DOWN;
}
containerDetails.put("isRunning", isRunning);
containerDetails.put("isStoppedAbnormally", !isRunning && !isOk);
containerDetails.put("isPaused", container.isContainerPaused());
containerDetails.put("listenerId", container.getListenerId());
containerDetails.put("groupId", container.getGroupId());
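For reference only (not part of this changeset): a minimal sketch of the health decision introduced above. A container that was stopped deliberately is still "in its expected state" and stays UP, while an abnormal stop reports DOWN; the class and method names below are invented for illustration.
import org.springframework.boot.actuate.health.Status;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

final class ContainerHealthSketch {
    private ContainerHealthSketch() {
    }
    // Health now keys off isInExpectedState() rather than isRunning(): a container that was
    // stopped on purpose does not flip the binder health to DOWN.
    static Status statusFor(AbstractMessageListenerContainer<?, ?> container) {
        return container.isInExpectedState() ? Status.UP : Status.DOWN;
    }
}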

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2016-2020 the original author or authors.
* Copyright 2016-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -208,10 +208,11 @@ public class KafkaBinderMetrics
Map<TopicPartition, Long> endOffsets = metadataConsumer
.endOffsets(topicPartitions);
final Map<TopicPartition, OffsetAndMetadata> committedOffsets = metadataConsumer.committed(endOffsets.keySet());
for (Map.Entry<TopicPartition, Long> endOffset : endOffsets
.entrySet()) {
OffsetAndMetadata current = metadataConsumer
.committed(endOffset.getKey());
OffsetAndMetadata current = committedOffsets.get(endOffset.getKey());
lag += endOffset.getValue();
if (current != null) {
lag -= current.offset();
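Purely illustrative: a self-contained sketch of the lag computation after this change, where a single bulk committed(Set) call replaces one committed(TopicPartition) round trip per partition; the class and method names are made up for the example.
import java.util.Map;
import java.util.Set;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

final class ConsumerLagSketch {
    private ConsumerLagSketch() {
    }
    // Total lag = sum over partitions of (end offset - committed offset, when one exists).
    static long totalLag(Consumer<?, ?> metadataConsumer, Set<TopicPartition> topicPartitions) {
        Map<TopicPartition, Long> endOffsets = metadataConsumer.endOffsets(topicPartitions);
        // One committed() call for all partitions instead of one per partition.
        Map<TopicPartition, OffsetAndMetadata> committedOffsets = metadataConsumer.committed(endOffsets.keySet());
        long lag = 0;
        for (Map.Entry<TopicPartition, Long> endOffset : endOffsets.entrySet()) {
            lag += endOffset.getValue();
            OffsetAndMetadata current = committedOffsets.get(endOffset.getKey());
            if (current != null) {
                lag -= current.offset();
            }
        }
        return lag;
    }
}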

View File

@@ -32,6 +32,7 @@ import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.BiFunction;
import java.util.function.Predicate;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
@@ -75,6 +76,7 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerPro
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.cloud.stream.binding.DefaultPartitioningInterceptor;
import org.springframework.cloud.stream.binding.MessageConverterConfigurer.PartitioningInterceptor;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.config.MessageSourceCustomizer;
@@ -102,12 +104,15 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;
import org.springframework.kafka.listener.ConsumerProperties;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.ExponentialBackOffWithMaxRetries;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.ProducerListener;
@@ -397,9 +402,6 @@ public class KafkaMessageChannelBinder extends
List<PartitionInfo> partitionsFor = producer
.partitionsFor(destination.getName());
producer.close();
if (transMan == null) {
((DisposableBean) producerFB).destroy();
}
return partitionsFor;
}, destination.getName());
this.topicsInUse.put(destination.getName(),
@@ -421,6 +423,10 @@ public class KafkaMessageChannelBinder extends
((PartitioningInterceptor) interceptor)
.setPartitionCount(partitions.size());
}
else if (interceptor instanceof DefaultPartitioningInterceptor) {
((DefaultPartitioningInterceptor) interceptor)
.setPartitionCount(partitions.size());
}
});
}
@@ -733,14 +739,23 @@ public class KafkaMessageChannelBinder extends
kafkaMessageDrivenChannelAdapter.setApplicationContext(applicationContext);
ErrorInfrastructure errorInfrastructure = registerErrorInfrastructure(destination,
consumerGroup, extendedConsumerProperties);
ListenerContainerCustomizer<?> customizer = getContainerCustomizer();
if (!extendedConsumerProperties.isBatchMode()
&& extendedConsumerProperties.getMaxAttempts() > 1
&& transMan == null) {
kafkaMessageDrivenChannelAdapter
.setRetryTemplate(buildRetryTemplate(extendedConsumerProperties));
kafkaMessageDrivenChannelAdapter
.setRecoveryCallback(errorInfrastructure.getRecoverer());
if (!(customizer instanceof ListenerContainerWithDlqAndRetryCustomizer)
|| ((ListenerContainerWithDlqAndRetryCustomizer) customizer)
.retryAndDlqInBinding(destination.getName(), group)) {
kafkaMessageDrivenChannelAdapter
.setRetryTemplate(buildRetryTemplate(extendedConsumerProperties));
kafkaMessageDrivenChannelAdapter
.setRecoveryCallback(errorInfrastructure.getRecoverer());
}
if (!extendedConsumerProperties.getExtension().isEnableDlq()) {
messageListenerContainer.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(0L, 0L)));
}
}
else if (!extendedConsumerProperties.isBatchMode() && transMan != null) {
messageListenerContainer.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(
@@ -773,17 +788,49 @@ public class KafkaMessageChannelBinder extends
else {
kafkaMessageDrivenChannelAdapter.setErrorChannel(errorInfrastructure.getErrorChannel());
}
this.getContainerCustomizer().configure(messageListenerContainer, destination.getName(), group);
final String commonErrorHandlerBeanName = extendedConsumerProperties.getExtension().getCommonErrorHandlerBeanName();
if (StringUtils.hasText(commonErrorHandlerBeanName)) {
final CommonErrorHandler commonErrorHandler = getApplicationContext().getBean(commonErrorHandlerBeanName,
CommonErrorHandler.class);
messageListenerContainer.setCommonErrorHandler(commonErrorHandler);
}
if (customizer instanceof ListenerContainerWithDlqAndRetryCustomizer) {
BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver = createDestResolver(
extendedConsumerProperties.getExtension());
BackOff createBackOff = extendedConsumerProperties.getMaxAttempts() > 1
? createBackOff(extendedConsumerProperties)
: null;
((ListenerContainerWithDlqAndRetryCustomizer) customizer)
.configure(messageListenerContainer, destination.getName(), consumerGroup, destinationResolver,
createBackOff);
}
else {
((ListenerContainerCustomizer<Object>) customizer)
.configure(messageListenerContainer, destination.getName(), consumerGroup);
}
this.ackModeInfo.put(destination, messageListenerContainer.getContainerProperties().getAckMode());
return kafkaMessageDrivenChannelAdapter;
}
private BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> createDestResolver(
KafkaConsumerProperties extension) {
Integer dlqPartitions = extension.getDlqPartitions();
if (extension.isEnableDlq()) {
return (rec, ex) -> dlqPartitions == null || dlqPartitions > 1
? new TopicPartition(extension.getDlqName(), rec.partition())
: new TopicPartition(extension.getDlqName(), 0);
}
else {
return null;
}
}
/**
* Configure a {@link BackOff} for the after rollback processor, based on the consumer
* retry properties. If retry is disabled, return a {@link BackOff} that disables
* retry. Otherwise calculate the {@link ExponentialBackOff#setMaxElapsedTime(long)}
* so that the {@link BackOff} stops after the configured
* {@link ExtendedConsumerProperties#getMaxAttempts()}.
* retry. Otherwise use an {@link ExponentialBackOffWithMaxRetries}.
* @param extendedConsumerProperties the properties.
* @return the backoff.
*/
@@ -795,20 +842,10 @@ public class KafkaMessageChannelBinder extends
return new FixedBackOff(0L, 0L);
}
int initialInterval = extendedConsumerProperties.getBackOffInitialInterval();
double multiplier = extendedConsumerProperties.getBackOffMultiplier();
int maxInterval = extendedConsumerProperties.getBackOffMaxInterval();
ExponentialBackOff backOff = new ExponentialBackOff(initialInterval, multiplier);
ExponentialBackOff backOff = new ExponentialBackOffWithMaxRetries(maxAttempts - 1);
backOff.setInitialInterval(initialInterval);
backOff.setMaxInterval(maxInterval);
long maxElapsed = extendedConsumerProperties.getBackOffInitialInterval();
double accum = maxElapsed;
for (int i = 1; i < maxAttempts - 1; i++) {
accum = accum * multiplier;
if (accum > maxInterval) {
accum = maxInterval;
}
maxElapsed += accum;
}
backOff.setMaxElapsedTime(maxElapsed);
return backOff;
}
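To make the simplified calculation concrete, here is a standalone sketch (interval values chosen only for illustration) showing how ExponentialBackOffWithMaxRetries stops after maxAttempts - 1 retries without the manual maxElapsedTime loop that was removed above.
import org.springframework.kafka.support.ExponentialBackOffWithMaxRetries;
import org.springframework.util.backoff.BackOffExecution;

public class BackOffSketch {
    public static void main(String[] args) {
        // Equivalent of maxAttempts = 3: at most two retries after the initial delivery.
        ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(2);
        backOff.setInitialInterval(100L);
        backOff.setMultiplier(2.0);
        backOff.setMaxInterval(1000L);
        BackOffExecution execution = backOff.start();
        System.out.println(execution.nextBackOff()); // 100
        System.out.println(execution.nextBackOff()); // 200
        System.out.println(execution.nextBackOff()); // -1 (BackOffExecution.STOP): retries exhausted
    }
}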
@@ -1621,9 +1658,9 @@ public class KafkaMessageChannelBinder extends
key, value, headers);
StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key), 50))
.append(keyOrValue(key))
.append("'").append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(value), 50))
.append(keyOrValue(value))
.append("'").append(" received from ")
.append(consumerRecord.partition());
ListenableFuture<SendResult<K, V>> sentDlq = null;
@@ -1663,9 +1700,16 @@ public class KafkaMessageChannelBinder extends
messageHeaders.get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class).acknowledge();
}
}
}
private String keyOrValue(Object keyOrValue) {
if (keyOrValue instanceof byte[]) {
return "byte[" + ((byte[]) keyOrValue).length + "]";
}
else {
return toDisplayString(ObjectUtils.nullSafeToString(keyOrValue), 50);
}
}
}
}

View File

@@ -0,0 +1,71 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.function.BiFunction;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.lang.Nullable;
import org.springframework.util.backoff.BackOff;
/**
* An extension of {@link ListenerContainerCustomizer} that provides access to dead letter
* metadata.
*
* @author Gary Russell
* @since 3.2
*
*/
public interface ListenerContainerWithDlqAndRetryCustomizer
extends ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> {
@Override
default void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group) {
}
/**
* Configure the container.
* @param container the container.
* @param destinationName the destination name.
* @param group the group.
* @param dlqDestinationResolver a destination resolver for the dead letter topic (if
* enableDlq).
* @param backOff the backOff using retry properties (if configured).
* @see #retryAndDlqInBinding(String, String)
*/
void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group,
@Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
@Nullable BackOff backOff);
/**
* Return false to move retries and DLQ from the binding to a customized error handler
* using the retry metadata and/or a {@code DeadLetterPublishingRecoverer} when
* configured via
* {@link #configure(AbstractMessageListenerContainer, String, String, BiFunction, BackOff)}.
* @param destinationName the destination name.
* @param group the group.
* @return true to retain retries and DLQ in the binding; false to move them to the container's error handler.
*/
default boolean retryAndDlqInBinding(String destinationName, String group) {
return true;
}
}
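For orientation only, a minimal sketch of how an application might implement this customizer, assuming it supplies its own KafkaOperations for dead-letter publishing; the class and field names below are illustrative and not part of this changeset.
import java.util.function.BiFunction;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;

import org.springframework.cloud.stream.binder.kafka.ListenerContainerWithDlqAndRetryCustomizer;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.lang.Nullable;
import org.springframework.util.backoff.BackOff;
import org.springframework.util.backoff.FixedBackOff;

public class DlqInContainerCustomizer implements ListenerContainerWithDlqAndRetryCustomizer {

    private final KafkaOperations<Object, Object> dlqTemplate; // supplied by the application (assumption)

    public DlqInContainerCustomizer(KafkaOperations<Object, Object> dlqTemplate) {
        this.dlqTemplate = dlqTemplate;
    }

    @Override
    public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group,
            @Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
            @Nullable BackOff backOff) {
        // Use the binder-calculated back-off when retry is configured; otherwise do not retry.
        BackOff retries = backOff != null ? backOff : new FixedBackOff(0L, 0L);
        if (dlqDestinationResolver != null) {
            // Publish failed records to the dead letter topic/partition resolved by the binder.
            container.setCommonErrorHandler(new DefaultErrorHandler(
                    new DeadLetterPublishingRecoverer(this.dlqTemplate, dlqDestinationResolver), retries));
        }
        else {
            container.setCommonErrorHandler(new DefaultErrorHandler(retries));
        }
    }

    @Override
    public boolean retryAndDlqInBinding(String destinationName, String group) {
        return false; // retries and DLQ are handled by the container's error handler configured above
    }
}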

View File

@@ -40,10 +40,6 @@ public class ExtendedBindingHandlerMappingsProviderConfiguration {
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.bindings"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.default"));
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams"),
ConfigurationPropertyName
.of("spring.cloud.stream.kafka.streams.default"));
return mappings;
};
}

View File

@@ -22,7 +22,6 @@ import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
@@ -81,22 +80,12 @@ import org.springframework.messaging.converter.MessageConverter;
* @author Artem Bilan
* @author Aldo Sinanaj
*/
@Configuration
@Configuration(proxyBeanMethods = false)
@ConditionalOnMissingBean(Binder.class)
@Import({ KafkaAutoConfiguration.class, KafkaBinderHealthIndicatorConfiguration.class })
@EnableConfigurationProperties({ KafkaExtendedBindingProperties.class })
public class KafkaBinderConfiguration {
@Autowired
private KafkaExtendedBindingProperties kafkaExtendedBindingProperties;
@SuppressWarnings("rawtypes")
@Autowired
private ProducerListener producerListener;
@Autowired
private KafkaProperties kafkaProperties;
@Bean
KafkaBinderConfigurationProperties configurationProperties(
KafkaProperties kafkaProperties) {
@@ -106,12 +95,12 @@ public class KafkaBinderConfiguration {
@Bean
KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties configurationProperties,
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer, KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(configurationProperties,
this.kafkaProperties, adminClientConfigCustomizer.getIfUnique());
kafkaProperties, adminClientConfigCustomizer.getIfUnique());
}
@SuppressWarnings("unchecked")
@SuppressWarnings({"rawtypes", "unchecked"})
@Bean
KafkaMessageChannelBinder kafkaMessageChannelBinder(
KafkaBinderConfigurationProperties configurationProperties,
@@ -125,16 +114,17 @@ public class KafkaBinderConfiguration {
ObjectProvider<DlqDestinationResolver> dlqDestinationResolver,
ObjectProvider<ClientFactoryCustomizer> clientFactoryCustomizer,
ObjectProvider<ConsumerConfigCustomizer> consumerConfigCustomizer,
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer,
ProducerListener producerListener, KafkaExtendedBindingProperties kafkaExtendedBindingProperties
) {
KafkaMessageChannelBinder kafkaMessageChannelBinder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider,
listenerContainerCustomizer, sourceCustomizer, rebalanceListener.getIfUnique(),
dlqPartitionFunction.getIfUnique(), dlqDestinationResolver.getIfUnique());
kafkaMessageChannelBinder.setProducerListener(this.producerListener);
kafkaMessageChannelBinder.setProducerListener(producerListener);
kafkaMessageChannelBinder
.setExtendedBindingProperties(this.kafkaExtendedBindingProperties);
.setExtendedBindingProperties(kafkaExtendedBindingProperties);
kafkaMessageChannelBinder.setProducerMessageHandlerCustomizer(messageHandlerCustomizer);
kafkaMessageChannelBinder.setConsumerEndpointCustomizer(consumerCustomizer);
kafkaMessageChannelBinder.setClientFactoryCustomizer(clientFactoryCustomizer.getIfUnique());

View File

@@ -1,55 +0,0 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
/**
*
* @author Oleg Zhurakousky
*
*/
public class EmbeddedKafkaRuleExtension extends EmbeddedKafkaRule implements BeforeEachCallback, AfterEachCallback {
public EmbeddedKafkaRuleExtension(int count, boolean controlledShutdown,
String... topics) {
super(count, controlledShutdown, topics);
}
public EmbeddedKafkaRuleExtension(int count, boolean controlledShutdown,
int partitions, String... topics) {
super(count, controlledShutdown, partitions, topics);
}
public EmbeddedKafkaRuleExtension(int count) {
super(count);
}
@Override
public void afterEach(ExtensionContext context) throws Exception {
this.after();
}
@Override
public void beforeEach(ExtensionContext context) throws Exception {
this.before();
}
}

View File

@@ -108,8 +108,8 @@ public class KafkaBinderHealthIndicatorTest {
.willReturn(partitions);
org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
mockContainer(listenerContainerA, true);
mockContainer(listenerContainerB, true);
mockContainer(listenerContainerA, true, true);
mockContainer(listenerContainerB, true, true);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.UP);
@@ -127,8 +127,27 @@ public class KafkaBinderHealthIndicatorTest {
.willReturn(partitions);
org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
mockContainer(listenerContainerA, false);
mockContainer(listenerContainerB, true);
mockContainer(listenerContainerA, false, true);
mockContainer(listenerContainerB, true, true);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.UP);
assertThat(health.getDetails()).containsEntry("topicsInUse", singleton(TEST_TOPIC));
assertThat(health.getDetails()).hasEntrySatisfying("listenerContainers", value ->
assertThat((ArrayList<?>) value).hasSize(2));
}
@Test
public void kafkaBinderIsDownWhenOneOfContainersWasStoppedAbnormally() {
final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation(
"group1-healthIndicator", partitions, false));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC))
.willReturn(partitions);
org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
mockContainer(listenerContainerA, false, false);
mockContainer(listenerContainerB, true, true);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.DOWN);
@@ -137,11 +156,14 @@ public class KafkaBinderHealthIndicatorTest {
assertThat((ArrayList<?>) value).hasSize(2));
}
private void mockContainer(AbstractMessageListenerContainer<?, ?> container, boolean isRunning) {
private void mockContainer(AbstractMessageListenerContainer<?, ?> container, boolean isRunning,
boolean normalState) {
org.mockito.BDDMockito.given(container.isRunning()).willReturn(isRunning);
org.mockito.BDDMockito.given(container.isContainerPaused()).willReturn(true);
org.mockito.BDDMockito.given(container.getListenerId()).willReturn("someListenerId");
org.mockito.BDDMockito.given(container.getGroupId()).willReturn("someGroupId");
org.mockito.BDDMockito.given(container.isInExpectedState()).willReturn(normalState);
}
@Test

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2016-2020 the original author or authors.
* Copyright 2016-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -89,9 +89,12 @@ public class KafkaBinderMetricsTest {
@Test
public void shouldIndicateLag() {
final Map<TopicPartition, OffsetAndMetadata> committed = new HashMap<>();
TopicPartition topicPartition = new TopicPartition(TEST_TOPIC, 0);
committed.put(topicPartition, new OffsetAndMetadata(500));
org.mockito.BDDMockito
.given(consumer.committed(ArgumentMatchers.any(TopicPartition.class)))
.willReturn(new OffsetAndMetadata(500));
.given(consumer.committed(ArgumentMatchers.anySet()))
.willReturn(committed);
List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC,
new TopicInformation("group1-metrics", partitions, false));
@@ -133,9 +136,14 @@ public class KafkaBinderMetricsTest {
org.mockito.BDDMockito
.given(consumer.endOffsets(ArgumentMatchers.anyCollection()))
.willReturn(endOffsets);
final Map<TopicPartition, OffsetAndMetadata> committed = new HashMap<>();
TopicPartition topicPartition1 = new TopicPartition(TEST_TOPIC, 0);
TopicPartition topicPartition2 = new TopicPartition(TEST_TOPIC, 1);
committed.put(topicPartition1, new OffsetAndMetadata(500));
committed.put(topicPartition2, new OffsetAndMetadata(500));
org.mockito.BDDMockito
.given(consumer.committed(ArgumentMatchers.any(TopicPartition.class)))
.willReturn(new OffsetAndMetadata(500));
.given(consumer.committed(ArgumentMatchers.anySet()))
.willReturn(committed);
List<PartitionInfo> partitions = partitions(new Node(0, null, 0),
new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC,

View File

@@ -66,11 +66,11 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.assertj.core.api.Assertions;
import org.assertj.core.api.Condition;
import org.awaitility.Awaitility;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;
import org.junit.jupiter.api.extension.RegisterExtension;
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.cloud.stream.binder.Binder;
@@ -97,6 +97,7 @@ import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.binding.MessageConverterConfigurer.PartitioningInterceptor;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.provisioning.ProvisioningException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.GenericApplicationContext;
@@ -117,8 +118,11 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
@@ -126,8 +130,10 @@ import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionOffset;
import org.springframework.kafka.support.converter.BatchMessagingMessageConverter;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
@@ -142,6 +148,7 @@ import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeTypeUtils;
import org.springframework.util.backoff.FixedBackOff;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;
@@ -156,29 +163,28 @@ import static org.mockito.Mockito.mock;
* @author Henryk Konsek
* @author Gary Russell
*/
@EmbeddedKafka(count = 1, controlledShutdown = true, topics = "error.pollableDlq.group-pcWithDlq", brokerProperties = {"transaction.state.log.replication.factor=1",
"transaction.state.log.min.isr=1"})
public class KafkaBinderTests extends
// @checkstyle:off
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {
// @checkstyle:on
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
// @RegisterExtension
// public ExpectedException expectedProvisioningException = ExpectedException.none();
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class
.getSimpleName();
@RegisterExtension
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRuleExtension(1, true, 10,
"error.pollableDlq.group-pcWithDlq")
.brokerProperty("transaction.state.log.replication.factor", "1")
.brokerProperty("transaction.state.log.min.isr", "1");
private KafkaTestBinder binder;
private AdminClient adminClient;
private static EmbeddedKafkaBroker embeddedKafka;
@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();
}
@Override
protected ExtendedConsumerProperties<KafkaConsumerProperties> createConsumerProperties() {
final ExtendedConsumerProperties<KafkaConsumerProperties> kafkaConsumerProperties = new ExtendedConsumerProperties<>(
@@ -248,8 +254,8 @@ public class KafkaBinderTests extends
private KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties(
new TestKafkaProperties());
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -283,8 +289,7 @@ public class KafkaBinderTests extends
timeoutMultiplier = Double.parseDouble(multiplier);
}
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -730,7 +735,6 @@ public class KafkaBinderTests extends
@Test
@SuppressWarnings("unchecked")
@Disabled
public void testDlqWithNativeSerializationEnabledOnDlqProducer() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
@@ -789,12 +793,10 @@ public class KafkaBinderTests extends
.withPayload("foo").build();
moduleOutputChannel.send(message);
Message<?> receivedMessage = receive(dlqChannel, 5);
assertThat(receivedMessage).isNotNull();
assertThat(receivedMessage.getPayload()).isEqualTo("foo".getBytes());
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
Awaitility.await().until(() -> handler.getInvocationCount() == consumerProperties.getMaxAttempts());
assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC))
.isEqualTo("foo.bar".getBytes(StandardCharsets.UTF_8));
@@ -1052,7 +1054,7 @@ public class KafkaBinderTests extends
AbstractMessageListenerContainer container = TestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer", AbstractMessageListenerContainer.class);
assertThat(container.getContainerProperties().getTopicPartitionsToAssign().length)
assertThat(container.getContainerProperties().getTopicPartitions().length)
.isEqualTo(4); // 2 topics 2 partitions each
if (transactional) {
assertThat(TestUtils.getPropertyValue(container.getAfterRollbackProcessor(), "kafkaTemplate")).isNotNull();
@@ -1063,7 +1065,7 @@ public class KafkaBinderTests extends
String dlqTopic = useDlqDestResolver ? "foo.dlq" : "error.dlqTest." + uniqueBindingId + ".0.testGroup";
try (AdminClient admin = AdminClient.create(Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
if (useDlqDestResolver) {
List<NewTopic> nonProvisionedDlqTopics = new ArrayList<>();
NewTopic nTopic = new NewTopic(dlqTopic, 3, (short) 1);
@@ -1300,6 +1302,113 @@ public class KafkaBinderTests extends
producerBinding.unbind();
}
@Test
@SuppressWarnings("unchecked")
public void testRetriesWithoutDlq() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);
DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);
DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));
FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);
String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);
Thread.sleep(3000);
// Since we don't have a DLQ, assert that the handler is invoked exactly as many times
// as set in consumerProperties.maxAttempts, not the default set by Spring Kafka (10 times).
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}
@Test
@SuppressWarnings("unchecked")
public void testCommonErrorHandlerBeanNameOnConsumerBinding() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);
DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);
CountDownLatch latch = new CountDownLatch(1);
CommonErrorHandler commonErrorHandler = new DefaultErrorHandler(new FixedBackOff(0L, 0L)) {
@Override
public void handleRemaining(Exception thrownException, List<ConsumerRecord<?, ?>> records,
Consumer<?, ?> consumer, MessageListenerContainer container) {
super.handleRemaining(thrownException, records, consumer, container);
latch.countDown();
}
};
ConfigurableApplicationContext context = TestUtils.getPropertyValue(binder,
"binder.applicationContext", ConfigurableApplicationContext.class);
context.getBeanFactory().registerSingleton("fooCommonErrorHandler", commonErrorHandler);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);
consumerProperties.getExtension().setCommonErrorHandlerBeanName("fooCommonErrorHandler");
DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));
FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);
String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);
Thread.sleep(3000);
// Assertions for the CommonErrorHandler configured on the consumer binding (commonErrorHandlerBeanName).
assertThat(KafkaTestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer.commonErrorHandler")).isSameAs(commonErrorHandler);
latch.await(10, TimeUnit.SECONDS);
binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}
// See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870 for the motivation behind this test.
@Test
@SuppressWarnings("unchecked")
@@ -2878,7 +2987,6 @@ public class KafkaBinderTests extends
@Test
@SuppressWarnings("unchecked")
@Disabled
public void testAutoAddPartitionsDisabledFailsIfTopicUnderPartitionedAndAutoRebalanceDisabled()
throws Throwable {
KafkaBinderConfigurationProperties configurationProperties = createConfigurationProperties();
@@ -2897,14 +3005,13 @@ public class KafkaBinderTests extends
consumerProperties.setInstanceCount(3);
consumerProperties.setInstanceIndex(2);
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
// expectedProvisioningException.expect(ProvisioningException.class);
// expectedProvisioningException.expectMessage(
// "The number of expected partitions was: 3, but 1 has been found instead");
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
Assertions.assertThatThrownBy(() -> {
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
}).isInstanceOf(ProvisioningException.class);
}
@Test
@@ -2936,7 +3043,7 @@ public class KafkaBinderTests extends
binding,
"lifecycle.messageListenerContainer.containerProperties",
ContainerProperties.class);
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitionsToAssign();
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitions();
assertThat(listenedPartitions).hasSize(2);
assertThat(listenedPartitions).contains(
new TopicPartitionOffset(testTopicName, 2),
@@ -3299,7 +3406,7 @@ public class KafkaBinderTests extends
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(
"testSendAndReceiveWithMixedMode", "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class);
@@ -3344,7 +3451,7 @@ public class KafkaBinderTests extends
"pollable,anotherOne", "group-polledConsumer", inboundBindTarget,
consumerProps);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollable", "testPollable");
@@ -3395,7 +3502,7 @@ public class KafkaBinderTests extends
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableRequeue", "group", inboundBindTarget, properties);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollableRequeue", "testPollable");
@@ -3431,7 +3538,7 @@ public class KafkaBinderTests extends
properties.setBackOffInitialInterval(0);
properties.getExtension().setEnableDlq(true);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableDlq", "group-pcWithDlq", inboundBindTarget, properties);
KafkaTemplate template = new KafkaTemplate(
@@ -3450,11 +3557,11 @@ public class KafkaBinderTests extends
assertThat(e.getCause().getMessage()).isEqualTo("test DLQ");
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("dlq", "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
ConsumerFactory cf = new DefaultKafkaConsumerFactory<>(consumerProps);
Consumer consumer = cf.createConsumer();
embeddedKafka.getEmbeddedKafka().consumeFromAnEmbeddedTopic(consumer,
embeddedKafka.consumeFromAnEmbeddedTopic(consumer,
"error.pollableDlq.group-pcWithDlq");
ConsumerRecord deadLetter = KafkaTestUtils.getSingleRecord(consumer,
"error.pollableDlq.group-pcWithDlq");
@@ -3469,7 +3576,7 @@ public class KafkaBinderTests extends
public void testTopicPatterns() throws Exception {
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("topicPatterns.1", 1, (short) 1))).all()
.get();
@@ -3488,7 +3595,7 @@ public class KafkaBinderTests extends
"topicPatterns\\..*", "testTopicPatterns", moduleInputChannel,
consumerProperties);
DefaultKafkaProducerFactory pf = new DefaultKafkaProducerFactory(
KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka()));
KafkaTestUtils.producerProps(embeddedKafka));
KafkaTemplate template = new KafkaTemplate(pf);
template.send("topicPatterns.1", "foo");
assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
@@ -3499,11 +3606,11 @@ public class KafkaBinderTests extends
}
@Test
@Disabled
public void testSameTopicCannotBeProvisionedAgain() throws Throwable {
CountDownLatch latch = new CountDownLatch(1);
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("fooUniqueTopic", 1, (short) 1))).all()
.get();
@@ -3515,8 +3622,9 @@ public class KafkaBinderTests extends
}
catch (Exception ex) {
assertThat(ex.getCause() instanceof TopicExistsException).isTrue();
throw ex.getCause();
latch.countDown();
}
latch.await(1, TimeUnit.SECONDS);
}
}
@@ -3718,7 +3826,7 @@ public class KafkaBinderTests extends
input.setBeanName(name + ".in");
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
Binding<MessageChannel> consumerBinding = binder.bindConsumer(name + ".0", name, input, consumerProperties);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(new DefaultKafkaProducerFactory<>(producerProps));
template.send(MessageBuilder.withPayload("internalHeaderPropagation")
.setHeader(KafkaHeaders.TOPIC, name + ".0")
@@ -3732,7 +3840,7 @@ public class KafkaBinderTests extends
output.send(consumed);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(name, "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,20 +16,28 @@
package org.springframework.cloud.stream.binder.kafka.bootstrap;
import java.util.Map;
import java.util.function.Function;
import io.micrometer.core.instrument.MeterRegistry;
import org.junit.jupiter.api.Disabled;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.stream.binder.kafka.EmbeddedKafkaRuleExtension;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatCode;
@@ -37,21 +45,53 @@ import static org.assertj.core.api.Assertions.assertThatCode;
/**
* @author Soby Chacko
*/
@Disabled
@EmbeddedKafka(count = 1, controlledShutdown = true, partitions = 10, topics = "outputTopic")
public class KafkaBinderMeterRegistryTest {
@RegisterExtension
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRuleExtension(1, true, 10);
private static EmbeddedKafkaBroker embeddedKafka;
private static Consumer<String, String> consumer;
@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "outputTopic");
}
@AfterAll
public static void tearDown() {
consumer.close();
}
@Test
public void testMetricsWithSingleBinder() {
public void testMetricsWithSingleBinder() throws Exception {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.uppercase-in-0.destination=inputTopic",
"--spring.cloud.stream.bindings.uppercase-in-0.group=inputGroup",
"--spring.cloud.stream.bindings.uppercase-out-0.destination=outputTopic",
"--spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
+ embeddedKafka.getBrokersAsString());
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");
// Force retrieval of the outbound data so that the producer factory has a chance to
// register the Micrometer listener properly: the binder's internal KafkaTemplate only
// adds the Micrometer listener (via the producer factory) on the first send.
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");
final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -71,10 +111,22 @@ public class KafkaBinderMeterRegistryTest {
"--spring.cloud.stream.binders.kafka2.type=kafka",
"--spring.cloud.stream.binders.kafka1.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka2.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
+ embeddedKafka.getBrokersAsString());
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");
// Force retrieval of the outbound data so that the producer factory has a chance to
// register the Micrometer listener properly: the binder's internal KafkaTemplate only
// adds the Micrometer listener (via the producer factory) on the first send.
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");
final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -90,10 +142,10 @@ public class KafkaBinderMeterRegistryTest {
.tag("topic", "inputTopic").gauge().value()).isNotNull();
// assert consumer metrics
assertThatCode(() -> meterRegistry.get("kafka.consumer.connection.count").meter()).doesNotThrowAnyException();
assertThatCode(() -> meterRegistry.get("kafka.consumer.fetch.manager.fetch.total").meter()).doesNotThrowAnyException();
// assert producer metrics
assertThatCode(() -> meterRegistry.get("kafka.producer.connection.count").meter()).doesNotThrowAnyException();
assertThatCode(() -> meterRegistry.get("kafka.producer.io.ratio").meter()).doesNotThrowAnyException();
}
@SpringBootApplication

View File

@@ -0,0 +1,117 @@
/*
 * Copyright 2021-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.integration;

import java.util.function.BiFunction;
import java.util.function.Consumer;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.junit.jupiter.api.Test;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.kafka.ListenerContainerWithDlqAndRetryCustomizer;
import org.springframework.cloud.stream.binding.BindingsLifecycleController;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.ExponentialBackOffWithMaxRetries;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.lang.Nullable;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.util.backoff.BackOff;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;

/**
 * @author Gary Russell
 */
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
        "spring.cloud.function.definition=retryInBinder;retryInContainer",
        "spring.cloud.stream.bindings.retryInBinder-in-0.group=foo",
        "spring.cloud.stream.bindings.retryInContainer-in-0.group=bar",
        "spring.cloud.stream.kafka.bindings.retryInBinder-in-0.consumer.enable-dlq=true",
        "spring.cloud.stream.kafka.bindings.retryInContainer-in-0.consumer.enable-dlq=true"})
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
@DirtiesContext
public class KafkaRetryDlqBinderOrContainerTests {

    @Test
    public void retryAndDlqInRightPlace(@Autowired BindingsLifecycleController controller) {
        Binding<?> retryInBinder = controller.queryState("retryInBinder-in-0");
        assertThat(KafkaTestUtils.getPropertyValue(retryInBinder, "lifecycle.retryTemplate")).isNotNull();
        assertThat(KafkaTestUtils.getPropertyValue(retryInBinder,
                "lifecycle.messageListenerContainer.commonErrorHandler")).isNull();
        Binding<?> retryInContainer = controller.queryState("retryInContainer-in-0");
        assertThat(KafkaTestUtils.getPropertyValue(retryInContainer, "lifecycle.retryTemplate")).isNull();
        assertThat(KafkaTestUtils.getPropertyValue(retryInContainer,
                "lifecycle.messageListenerContainer.commonErrorHandler")).isInstanceOf(CommonErrorHandler.class);
        assertThat(KafkaTestUtils.getPropertyValue(retryInContainer,
                "lifecycle.messageListenerContainer.commonErrorHandler.failureTracker.backOff"))
                        .isInstanceOf(ExponentialBackOffWithMaxRetries.class);
    }

    @SpringBootApplication
    public static class ConfigCustomizerTestConfig {

        @Bean
        public Consumer<String> retryInBinder() {
            return str -> { };
        }

        @Bean
        public Consumer<String> retryInContainer() {
            return str -> { };
        }

        @Bean
        ListenerContainerWithDlqAndRetryCustomizer cust() {
            return new ListenerContainerWithDlqAndRetryCustomizer() {

                @Override
                public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
                        String group,
                        BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
                        @Nullable BackOff backOff) {

                    if (destinationName.contains("Container")) {
                        ConsumerRecordRecoverer dlpr = new DeadLetterPublishingRecoverer(mock(KafkaOperations.class),
                                dlqDestinationResolver);
                        container.setCommonErrorHandler(new DefaultErrorHandler(dlpr, backOff));
                    }
                }

                @Override
                public boolean retryAndDlqInBinding(String destinationName, String group) {
                    return !destinationName.contains("Container");
                }

            };
        }

    }

}
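For context on the ListenerContainerWithDlqAndRetryCustomizer exercised by this new test: the assertions show that a binding for which retryAndDlqInBinding() returns false gets no binder-side RetryTemplate and instead carries a container CommonErrorHandler, while configure() is handed the DLQ destination resolver and a back-off (an ExponentialBackOffWithMaxRetries in the assertions above) to wire into the container. Below is a sketch of how an application, rather than a test, might register such a customizer; the configuration class, bean names, the injected KafkaOperations bean (replacing the Mockito mock used in the test) and the FixedBackOff fallback for the @Nullable back-off are assumptions for illustration only.

import java.util.function.BiFunction;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;

import org.springframework.cloud.stream.binder.kafka.ListenerContainerWithDlqAndRetryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.lang.Nullable;
import org.springframework.util.backoff.BackOff;
import org.springframework.util.backoff.FixedBackOff;

// Hypothetical application configuration, for illustration only.
@Configuration
class ContainerRetryDlqConfig {

    @Bean
    ListenerContainerWithDlqAndRetryCustomizer containerRetryAndDlq(KafkaOperations<byte[], byte[]> kafkaOperations) {
        return new ListenerContainerWithDlqAndRetryCustomizer() {

            @Override
            public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
                    String group,
                    BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
                    @Nullable BackOff backOff) {

                // Dead-letter failed records to the destination the binder resolved for this
                // binding, retrying with the back-off supplied by the binder (guarded here
                // because the parameter is @Nullable).
                DeadLetterPublishingRecoverer recoverer =
                        new DeadLetterPublishingRecoverer(kafkaOperations, dlqDestinationResolver);
                container.setCommonErrorHandler(new DefaultErrorHandler(recoverer,
                        backOff != null ? backOff : new FixedBackOff(0L, 2L)));
            }

            @Override
            public boolean retryAndDlqInBinding(String destinationName, String group) {
                // false => the binder leaves retries and dead-lettering to the container's
                // error handler configured above, for every binding.
                return false;
            }

        };
    }

}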