Compare commits

37 Commits

| SHA1 |
|---|
| ea8912b011 |
| d76d916970 |
| ac0e462ed2 |
| bd1b49222c |
| 9fd16416d6 |
| a4c38b3453 |
| a8c948a6b2 |
| d57091d791 |
| 2b7f6ecb96 |
| 53d32c2332 |
| b7ebc185e7 |
| 56e25383f8 |
| b4c7c36229 |
| 6eed115cc9 |
| 8e6d07cc7b |
| adde49aab3 |
| e500138486 |
| 970bac41bb |
| afe39bf78a |
| c1ad3006e9 |
| ba2c3a05c9 |
| 739b499966 |
| 1b26f5d629 |
| 99c323e314 |
| f4d3715317 |
| d7907bbdcc |
| 9b5e735f74 |
| 201668542b |
| a5f01f9d6f |
| 912c47e3ac |
| 001882de4e |
| 54ac274ea3 |
| 80b707e5e9 |
| 13474bdafb |
| d0b4bdf438 |
| 0bedc606ce |
| b5cb32767b |
README.adoc (78 changes)
@@ -239,7 +239,7 @@ Note that this property is only applicable for pollable consumers.
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
Must be false if a `KafkaBindingRebalanceListener` is provided; see <<rebalance-listener>>.
See <<reset-offsets>> for more information about this property.
+
Default: `false`.
@@ -337,6 +337,17 @@ Usually needed if you want to synchronize another transaction with the Kafka tra
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
txCommitRecovered::
When using a transactional binder, the offset of a recovered record (e.g. when retries are exhausted and the record is sent to a dead letter topic) will be committed via a new transaction, by default.
Setting this property to `false` suppresses committing the offset of recovered record.
+
Default: true.
commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to express error handlers, if the application does not want to use a `ListenerContainerCustomizer` and then check the destination/group combination to set an error handler.
+
Default: none.
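As an illustration only (not part of this compare), a `CommonErrorHandler` bean that such a binding property could reference might look like the sketch below. It assumes Spring for Apache Kafka 2.8+, where `DefaultErrorHandler` implements `CommonErrorHandler`; the bean name `myCommonErrorHandler` is invented for the example.

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class ErrorHandlingConfig {

    @Bean
    public CommonErrorHandler myCommonErrorHandler() {
        // Retry a failed record three more times, one second apart, before giving up.
        return new DefaultErrorHandler(new FixedBackOff(1000L, 3L));
    }
}
```

The bean name would then be referenced from the binding, for example `spring.cloud.stream.kafka.bindings.<binding-name>.consumer.commonErrorHandlerBeanName=myCommonErrorHandler`.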
[[reset-offsets]]
==== Resetting Offsets
@@ -363,7 +374,7 @@ Set `resetOffsets` to `true` and `startOffset` to `latest`; the binding will per
IMPORTANT: If a rebalance occurs after the initial assignment, the seeks will only be performed on any newly assigned partitions that were not assigned during the initial assignment.

For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets: true` is ignored.
For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets` should not be set to `true`, otherwise, that will cause an error.

==== Consuming Batches
@@ -458,18 +469,18 @@ Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
Default: `false`.
+
Default: `false`.
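For illustration only, a sketch of the producer side of `useTopicHeader`: with the property set to `true`, setting `KafkaHeaders.TOPIC` on an outbound message routes that record to the named topic instead of the binding destination. The binding name `output-binding` and the topic name are made up for the example, and a `StreamBridge` is assumed to be available in the application.

```
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

@Component
public class TopicHeaderSender {

    private final StreamBridge streamBridge;

    public TopicHeaderSender(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    public void send(String payload) {
        // Route this particular record to "per-message-topic" rather than the binding destination.
        streamBridge.send("output-binding", MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.TOPIC, "per-message-topic")
                .build());
    }
}
```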
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.

`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`

Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
+
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
+
Default: null.
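A hedged sketch of consuming from such a results channel follows; the channel name `sendResults` is invented for the example and would have to match the value given to the `recordMetadataChannel` producer property.

```
import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;

@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sent) {
    // The header carries the partition and offset the record was written to.
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    System.out.println("written to partition " + meta.partition() + ", offset " + meta.offset());
}
```

The producer binding would then set, for example, `spring.cloud.stream.kafka.bindings.<binding-name>.producer.recordMetadataChannel=sendResults`.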
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -506,11 +517,11 @@ Default: `false`

In this section, we show the use of the preceding properties for specific scenarios.

===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
===== Example: Setting `ackMode` to `MANUAL` and Relying on Manual Acknowledgement

This example illustrates how one may manually acknowledge offsets in a consumer application.

This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.ackMode` be set to `MANUAL`.
Use the corresponding input channel name for your example.

[source]
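The listing that follows `[source]` is truncated in this compare view. Purely as a sketch of the idea (not the documentation's own listing), a functional consumer that acknowledges manually could look like the following, assuming the binder populates the `KafkaHeaders.ACKNOWLEDGMENT` header when `ackMode` is `MANUAL`:

```
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;

@Bean
public Consumer<Message<String>> input() {
    return message -> {
        // process the payload here, then commit the offset explicitly
        Acknowledgment ack = message.getHeaders()
                .get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (ack != null) {
            ack.acknowledge();
        }
    };
}
```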
@@ -622,6 +633,47 @@ Usually, applications may use principals that do not have administrative rights
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.

====== Multi-binder configuration and JAAS

When connecting to multiple clusters in which each one requires a separate JAAS configuration, set the JAAS configuration using the property `sasl.jaas.config`.
When this property is present in the application, it takes precedence over the other strategies mentioned above.
See this https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+configuration+for+Kafka+clients[KIP-85] for more details.

For example, if you have two clusters in your application with separate JAAS configuration, then the following is a template that you can use:

```
spring.cloud.stream:
  binders:
    kafka1:
      type: kafka
      environment:
        spring:
          cloud:
            stream:
              kafka:
                binder:
                  brokers: localhost:9092
                  configuration.sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin-secret\";"
    kafka2:
      type: kafka
      environment:
        spring:
          cloud:
            stream:
              kafka:
                binder:
                  brokers: localhost:9093
                  configuration.sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"user1\" password=\"user1-secret\";"
  kafka.binder:
    configuration:
      security.protocol: SASL_PLAINTEXT
      sasl.mechanism: PLAIN
```

Note that both the Kafka clusters and the `sasl.jaas.config` values for each of them are different in the above configuration.

See this https://github.com/spring-cloud/spring-cloud-stream-samples/tree/main/multi-binder-samples/kafka-multi-binder-jaas[sample application] for more details on how to set up and run such an application.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer
@@ -774,10 +826,10 @@ public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
====

[[rebalance-listener]]
=== Using a KafkaRebalanceListener
=== Using a KafkaBindingRebalanceListener

Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.

====
[source, java]
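The Java listing that follows is cut off in this compare view. As a rough, hedged sketch of such a listener (the method names and the exact `KafkaBindingRebalanceListener` signature are assumed here and should be checked against the binder version in use):

```
import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.springframework.cloud.stream.binder.kafka.KafkaBindingRebalanceListener;
import org.springframework.stereotype.Component;

@Component
public class SeekToOffsetRebalanceListener implements KafkaBindingRebalanceListener {

    @Override
    public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions, boolean initial) {
        // Only seek on the very first assignment after startup, for example to replay a topic.
        if (initial) {
            consumer.seekToBeginning(partitions);
        }
    }
}
```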
@@ -830,7 +882,7 @@ You cannot set the `resetOffsets` consumer property to `true` when you provide a
If you want advanced customization of consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka,
you can implement the following customizers.

* ConsusumerConfigCustomizer
* ConsumerConfigCustomizer
* ProducerConfigCustomizer

Both of these interfaces provide a way to configure the config map used for consumer and producer properties.
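For illustration, a sketch of such a customizer bean is shown below; the callback signature (a config map plus the binding name and destination) and the package are assumptions for the 3.2 line and should be verified against the `ConsumerConfigCustomizer` interface shipped with the binder version in use.

```
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
// Package and method signature assumed; adjust to the ConsumerConfigCustomizer in your binder version.
import org.springframework.cloud.stream.binder.kafka.utils.ConsumerConfigCustomizer;
import org.springframework.context.annotation.Bean;

@Bean
public ConsumerConfigCustomizer consumerConfigCustomizer() {
    return new ConsumerConfigCustomizer() {
        @Override
        public void configure(Map<String, Object> consumerProperties, String bindingName, String destination) {
            // Tweak an arbitrary Kafka consumer property before the ConsumerFactory is created.
            consumerProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 250);
        }
    };
}
```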
@@ -7,7 +7,7 @@
<parent>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
  <version>3.1.7-SNAPSHOT</version>
  <version>3.2.0-M2</version>
</parent>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>
@@ -9,7 +9,6 @@
|spring.cloud.stream.dynamic-destinations | `[]` | A list of destinations that can be bound dynamically. If set, only listed destinations can be bound.
|spring.cloud.stream.function.batch-mode | `false` |
|spring.cloud.stream.function.bindings | |
|spring.cloud.stream.function.definition | | Definition of functions to bind. If several functions need to be composed into one, use pipes (e.g., 'fooFunc\|barFunc')
|spring.cloud.stream.instance-count | `1` | The number of deployed instances of an application. Default: 1. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-count" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index | `0` | The instance id of the application: a number from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index" where 'foo' is the name of the binding.
|spring.cloud.stream.instance-index-list | | A list of instance id's from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index-list" where 'foo' is the name of the binding. This setting will override the one set in 'spring.cloud.stream.instance-index'
@@ -277,7 +277,7 @@ public Function<KTable<String, String>, KStream<String, String>> bar() {

===== Multiple Output Bindings

Kafka Streams allows to write outbound data into multiple topics. This feature is known as branching in Kafka Streams.
Kafka Streams allows writing outbound data into multiple topics. This feature is known as branching in Kafka Streams.
When using multiple output bindings, you need to provide an array of KStream (`KStream[]`) as the outbound return type.

Here is an example:
@@ -291,21 +291,30 @@ public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
  Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
  Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");

  return input -> input
      .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
      .groupBy((key, value) -> value)
      .windowedBy(TimeWindows.of(5000))
      .count(Materialized.as("WordCounts-branch"))
      .toStream()
      .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
          new Date(key.window().start()), new Date(key.window().end()))))
      .branch(isEnglish, isFrench, isSpanish);
  return input -> {
    final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
        .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
        .groupBy((key, value) -> value)
        .windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
        .count(Materialized.as("WordCounts-branch"))
        .toStream()
        .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
            new Date(key.window().start()), new Date(key.window().end()))))
        .split()
        .branch(isEnglish)
        .branch(isFrench)
        .branch(isSpanish)
        .noDefaultBranch();

    return stringKStreamMap.values().toArray(new KStream[0]);
  };
}
----

The programming model remains the same, however the outbound parameterized type is `KStream[]`.
The default output binding names are `process-out-0`, `process-out-1`, `process-out-2` respectively.
The reason why the binder generates three output bindings is because it detects the length of the returned `KStream` array.
The default output binding names are `process-out-0`, `process-out-1`, `process-out-2` respectively for the function above.
The reason why the binder generates three output bindings is because it detects the length of the returned `KStream` array as three.
Note that in this example, we provide a `noDefaultBranch()`; if we have used `defaultBranch()` instead, that would have required an extra output binding, essentially returning a `KStream` array of length four.
===== Summary of Function based Programming Styles for Kafka Streams

@@ -817,7 +826,7 @@ It will use that for inbound deserialization.

```
@Bean
public Serde<Foo> customSerde() {
public Serde<Foo() customSerde{
...
}
```

@@ -1252,7 +1261,7 @@ ReadOnlyKeyValueStore<Object, Object> keyValueStore =
----

During the startup, the above method call to retrieve the store might fail.
For example, it might still be in the middle of initializing the state store.
For e.g it might still be in the middle of initializing the state store.
In such cases, it will be useful to retry this operation.
Kafka Streams binder provides a simple retry mechanism to accommodate this.
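As a hedged illustration of the call being discussed (using the `getQueryableStore` signature visible later in this compare), retrieving a store through `InteractiveQueryService` might look like the sketch below. The store name is made up, and the retry attempts and backoff are governed by the binder's state store retry settings rather than by application code.

```
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.boot.ApplicationRunner;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.context.annotation.Bean;

@Bean
public ApplicationRunner storeReader(InteractiveQueryService interactiveQueryService) {
    return args -> {
        // The binder retries internally while the state store is still initializing.
        ReadOnlyKeyValueStore<String, Long> store = interactiveQueryService
                .getQueryableStore("WordCounts-branch", QueryableStoreTypes.<String, Long>keyValueStore());
        System.out.println("english -> " + store.get("english"));
    };
}
```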
@@ -1287,10 +1296,6 @@ else {
}
----

For more information on these host finding methods, please see the Javadoc on the methods.
For these methods also, during startup, if the underlying KafkaStreams objects are not ready, they might throw exceptions.
The aforementioned retry properties are applicable for these methods as well.

==== Other API methods available through the InteractiveQueryService

Use the following API method to retrieve the `KeyQueryMetadata` object associated with the combination of given store and key.
@@ -1713,8 +1718,9 @@ spring.cloud.stream.bindings.enrichOrder-out-0.binder=kafka1 #kstream

=== State Cleanup

By default, the `Kafkastreams.cleanup()` method is called when the binding is stopped.
See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
By default, no local state is cleaned up when the binding is stopped.
This is the same behavior effective from Spring Kafka version 2.7.
See https://docs.spring.io/spring-kafka/reference/html/#streams-config[Spring Kafka documentation] for more details.
To modify this behavior simply add a single `CleanupConfig` `@Bean` (configured to clean up on start, stop, or neither) to the application context; the bean will be detected and wired into the factory bean.
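For illustration, a minimal sketch of such a bean, assuming Spring for Apache Kafka's `CleanupConfig(boolean onStart, boolean onStop)` constructor:

```
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.CleanupConfig;

@Bean
public CleanupConfig cleanupConfig() {
    // Do not clean up local state on start, but do clean it up when the binding stops.
    return new CleanupConfig(false, true);
}
```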
=== Kafka Streams topology visualization
@@ -1874,6 +1880,102 @@ When there are multiple bindings present on a single function, invoking these op
This is because all the bindings on a single function are backed by the same `StreamsBuilderFactoryBean`.
Therefore, for the function above, either `function-in-0` or `function-out-0` will work.

=== Manually starting Kafka Streams processors

Spring Cloud Stream Kafka Streams binder offers an abstraction called `StreamsBuilderFactoryManager` on top of the `StreamsBuilderFactoryBean` from Spring for Apache Kafka.
This manager API is used for controlling the multiple `StreamsBuilderFactoryBean` per processor in a binder based application.
Therefore, when using the binder, if you manually want to control the auto starting of the various `StreamsBuilderFactoryBean` objects in the application, you need to use `StreamsBuilderFactoryManager`.
You can use the property `spring.kafka.streams.auto-startup` and set this to `false` in order to turn off auto starting of the processors.
Then, in the application, you can use something as below to start the processors using `StreamsBuilderFactoryManager`.

```
@Bean
public ApplicationRunner runner(StreamsBuilderFactoryManager sbfm) {
    return args -> {
        sbfm.start();
    };
}
```

This feature is handy when you want your application to start in the main thread and let Kafka Streams processors start separately.
For example, when you have a large state store that needs to be restored, if the processors are started normally as is the default case, this may block your application from starting.
If you are using some sort of liveness probe mechanism (for example on Kubernetes), it may think that the application is down and attempt a restart.
In order to correct this, you can set `spring.kafka.streams.auto-startup` to `false` and follow the approach above.

Keep in mind that, when using the Spring Cloud Stream binder, you are not directly dealing with `StreamsBuilderFactoryBean` from Spring for Apache Kafka, rather `StreamsBuilderFactoryManager`, as the `StreamsBuilderFactoryBean` objects are internally managed by the binder.
=== Manually starting Kafka Streams processors selectively

While the approach laid out above will unconditionally apply auto start `false` to all the Kafka Streams processors in the application through `StreamsBuilderFactoryManager`, it is often desirable that only individually selected Kafka Streams processors are not auto started.
For instance, let us assume that you have three different functions (processors) in your application and for one of the processors, you do not want to start it as part of the application startup.
Here is an example of such a situation.

```
@Bean
public Function<KStream<?, ?>, KStream<?, ?>> process1() {

}

@Bean
public Consumer<KStream<?, ?>> process2() {

}

@Bean
public BiFunction<KStream<?, ?>, KTable<?, ?>, KStream<?, ?>> process3() {

}
```

In this scenario above, if you set `spring.kafka.streams.auto-startup` to `false`, then none of the processors will auto start during the application startup.
In that case, you have to programmatically start them as described above by calling `start()` on the underlying `StreamsBuilderFactoryManager`.
However, if we have a use case to selectively disable only one processor, then you have to set `auto-startup` on the individual binding for that processor.
Let us assume that we don't want our `process3` function to auto start.
This is a `BiFunction` with two input bindings - `process3-in-0` and `process3-in-1`.
In order to avoid auto start for this processor, you can pick any of these input bindings and set `auto-startup` on them.
It does not matter which binding you pick; if you wish, you can set `auto-startup` to `false` on both of them, but one will be sufficient.
Because they share the same factory bean, you don't have to set autoStartup to false on both bindings, but it probably makes sense to do so, for clarity.
Here is the Spring Cloud Stream property that you can use to disable auto startup for this processor.

```
spring.cloud.stream.bindings.process3-in-0.consumer.auto-startup: false
```

or

```
spring.cloud.stream.bindings.process3-in-1.consumer.auto-startup: false
```

Then, you can manually start the processor either using the REST endpoint or using the `BindingsEndpoint` API as shown below.
For this, you need to ensure that you have the Spring Boot actuator dependency on the classpath.

```
curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/process3-in-0
```

or

```
@Autowired
BindingsEndpoint endpoint;

@Bean
public ApplicationRunner runner() {
    return args -> {
        endpoint.changeState("process3-in-0", State.STARTED);
    };
}
```

See https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#binding_visualization_control[this section] from the reference docs for more details on this mechanism.

NOTE: When controlling the bindings by disabling `auto-startup` as described in this section, please note that this is only available for consumer bindings.
In other words, if you use the producer binding, `process3-out-0`, that does not have any effect in terms of disabling the auto starting of the processor, although this producer binding uses the same `StreamsBuilderFactoryBean` as the consumer bindings.
=== Tracing using Spring Cloud Sleuth

When Spring Cloud Sleuth is on the classpath of a Spring Cloud Stream Kafka Streams binder based application, both its consumer and producer are automatically instrumented with tracing information.
@@ -2019,12 +2121,6 @@ Arbitrary consumer properties at the binder level.
producerProperties::
Arbitrary producer properties at the binder level.

includeStoppedProcessorsForHealthCheck::
When bindings for processors are stopped through actuator, then this processor will not participate in the health check by default.
Set this property to `true` to enable health check for all processors including the ones that are currently stopped through bindings actuator endpoint.
+
Default: false

==== Kafka Streams Producer Properties

The following properties are _only_ available for Kafka Streams producers and must be prefixed with `spring.cloud.stream.kafka.streams.bindings.<binding name>.producer.`

@@ -151,9 +151,6 @@ Default: `false`.

spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
This is true for both broker level certificates (`ssl.truststore.location` and `ssl.keystore.location`) and certificates intended for schema registry (`schema.registry.ssl.truststore.location` and `schema.registry.ssl.keystore.location`).
Keep in mind that the truststore and keystore classpath locations must be provided under `spring.cloud.stream.kafka.binder.configuration...`.
For example, `spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location`, `spring.cloud.stream.kafka.binder.configuration.schema.registry.ssl.truststore.location`, etc.
The file will be moved to the location specified as the value for this property which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it will be moved to System's temp directory as returned by `System.getProperty("java.io.tmpdir")`.
This is also true, if this value is present, but the directory cannot be found on the filesystem or is not writable.
@@ -324,6 +321,12 @@ When using a transactional binder, the offset of a recovered record (e.g. when r
Setting this property to `false` suppresses committing the offset of recovered record.
+
Default: true.
commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to express error handlers, if the application does not want to use a `ListenerContainerCustomizer` and then check the destination/group combination to set an error handler.
+
Default: none.

[[reset-offsets]]
==== Resetting Offsets
pom.xml (188 changes)
@@ -2,21 +2,30 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
  <version>3.1.7-SNAPSHOT</version>
  <version>3.2.0-M2</version>
  <packaging>pom</packaging>
  <parent>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-build</artifactId>
    <version>3.0.5</version>
    <version>3.1.0-M2</version>
    <relativePath />
  </parent>
  <scm>
    <url>https://github.com/spring-cloud/spring-cloud-stream-binder-kafka</url>
    <connection>scm:git:git://github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
    </connection>
    <developerConnection>
      scm:git:ssh://git@github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
    </developerConnection>
    <tag>HEAD</tag>
  </scm>
  <properties>
    <java.version>1.8</java.version>
    <spring-kafka.version>2.6.12</spring-kafka.version>
    <spring-integration-kafka.version>5.4.12</spring-integration-kafka.version>
    <kafka.version>2.6.3</kafka.version>
    <spring-cloud-schema-registry.version>1.1.5</spring-cloud-schema-registry.version>
    <spring-cloud-stream.version>3.1.7-SNAPSHOT</spring-cloud-stream.version>
    <spring-kafka.version>2.8.0-M3</spring-kafka.version>
    <spring-integration-kafka.version>5.5.2</spring-integration-kafka.version>
    <kafka.version>2.8.0</kafka.version>
    <spring-cloud-schema-registry.version>1.2.0-M2</spring-cloud-schema-registry.version>
    <spring-cloud-stream.version>3.2.0-M2</spring-cloud-stream.version>
    <maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
    <maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
    <maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
@@ -27,7 +36,7 @@
    <module>spring-cloud-stream-binder-kafka-core</module>
    <module>spring-cloud-stream-binder-kafka-streams</module>
    <module>docs</module>
  </modules>
  </modules>

  <dependencyManagement>
    <dependencies>
@@ -139,10 +148,6 @@
  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>flatten-maven-plugin</artifactId>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
@@ -165,6 +170,16 @@
      </plugins>
    </pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${maven-compiler-plugin.version}</version>
        <configuration>
          <source>${java.version}</source>
          <target>${java.version}</target>
          <compilerArgument>-parameters</compilerArgument>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-checkstyle-plugin</artifactId>
@@ -175,74 +190,91 @@
  <profiles>
    <profile>
      <id>spring</id>
      <repositories>
        <repository>
          <id>spring-snapshots</id>
          <name>Spring Snapshots</name>
          <url>https://repo.spring.io/libs-snapshot-local</url>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
          <releases>
            <enabled>false</enabled>
          </releases>
        </repository>
        <repository>
          <id>spring-milestones</id>
          <name>Spring Milestones</name>
          <url>https://repo.spring.io/libs-milestone-local</url>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
        <repository>
          <id>spring-releases</id>
          <name>Spring Releases</name>
          <url>https://repo.spring.io/release</url>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
        <repository>
          <id>rsocket-snapshots</id>
          <name>RSocket Snapshots</name>
          <url>https://oss.jfrog.org/oss-snapshot-local</url>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>spring-snapshots</id>
          <name>Spring Snapshots</name>
          <url>https://repo.spring.io/libs-snapshot-local</url>
          <snapshots>
            <enabled>true</enabled>
          </snapshots>
          <releases>
            <enabled>false</enabled>
          </releases>
        </pluginRepository>
        <pluginRepository>
          <id>spring-milestones</id>
          <name>Spring Milestones</name>
          <url>https://repo.spring.io/libs-milestone-local</url>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
        <pluginRepository>
          <id>spring-releases</id>
          <name>Spring Releases</name>
          <url>https://repo.spring.io/libs-release-local</url>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>

    </profile>
    <profile>
      <id>coverage</id>
      <activation>
        <property>
          <name>env.TRAVIS</name>
          <value>true</value>
        </property>
      </activation>
      <build>
        <plugins>
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.7.9</version>
            <executions>
              <execution>
                <id>agent</id>
                <goals>
                  <goal>prepare-agent</goal>
                </goals>
              </execution>
              <execution>
                <id>report</id>
                <phase>test</phase>
                <goals>
                  <goal>report</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
  <repositories>
    <repository>
      <id>spring-snapshots</id>
      <name>Spring Snapshots</name>
      <url>https://repo.spring.io/libs-snapshot-local</url>
    </repository>
    <repository>
      <id>spring-milestones</id>
      <name>Spring milestones</name>
      <url>https://repo.spring.io/libs-milestone-local</url>
    </repository>
    <repository>
      <id>rsocket-snapshots</id>
      <name>RSocket Snapshots</name>
      <url>https://oss.jfrog.org/oss-snapshot-local</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>spring-releases</id>
      <name>Spring Releases</name>
      <url>https://repo.spring.io/release</url>
    </repository>
  </repositories>
  <pluginRepositories>
    <pluginRepository>
      <id>spring-snapshots</id>
      <name>Spring Snapshots</name>
      <url>https://repo.spring.io/snapshot</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </pluginRepository>
    <pluginRepository>
      <id>spring-milestones</id>
      <name>Spring Milestones</name>
      <url>https://repo.spring.io/milestone</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </pluginRepository>
    <pluginRepository>
      <id>spring-releases</id>
      <name>Spring Releases</name>
      <url>https://repo.spring.io/release</url>
    </pluginRepository>
  </pluginRepositories>
  <reporting>
    <plugins>
      <plugin>
@@ -4,7 +4,7 @@
<parent>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
  <version>3.1.7-SNAPSHOT</version>
  <version>3.2.0-M2</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>

@@ -5,7 +5,7 @@
<parent>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
  <version>3.1.7-SNAPSHOT</version>
  <version>3.2.0-M2</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>
@@ -1,5 +1,5 @@
/*
 * Copyright 2015-2021 the original author or authors.
 * Copyright 2015-2018 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -163,44 +163,24 @@ public class KafkaBinderConfigurationProperties {

private void moveCertsToFileSystemIfNecessary() {
  try {
    moveBrokerCertsIfApplicable();
    moveSchemaRegistryCertsIfApplicable();
    final String trustStoreLocation = this.configuration.get("ssl.truststore.location");
    if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
      final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
      // Overriding the value with absolute filesystem path.
      this.configuration.put("ssl.truststore.location", fileSystemLocation);
    }
    final String keyStoreLocation = this.configuration.get("ssl.keystore.location");
    if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
      final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
      // Overriding the value with absolute filesystem path.
      this.configuration.put("ssl.keystore.location", fileSystemLocation);
    }
  }
  catch (Exception e) {
    throw new IllegalStateException(e);
  }
}

private void moveBrokerCertsIfApplicable() throws IOException {
  final String trustStoreLocation = this.configuration.get("ssl.truststore.location");
  if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
    final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
    // Overriding the value with absolute filesystem path.
    this.configuration.put("ssl.truststore.location", fileSystemLocation);
  }
  final String keyStoreLocation = this.configuration.get("ssl.keystore.location");
  if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
    final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
    // Overriding the value with absolute filesystem path.
    this.configuration.put("ssl.keystore.location", fileSystemLocation);
  }
}

private void moveSchemaRegistryCertsIfApplicable() throws IOException {
  String trustStoreLocation = this.configuration.get("schema.registry.ssl.truststore.location");
  if (trustStoreLocation != null && trustStoreLocation.startsWith("classpath:")) {
    final String fileSystemLocation = moveCertToFileSystem(trustStoreLocation, this.certificateStoreDirectory);
    // Overriding the value with absolute filesystem path.
    this.configuration.put("schema.registry.ssl.truststore.location", fileSystemLocation);
  }
  final String keyStoreLocation = this.configuration.get("schema.registry.ssl.keystore.location");
  if (keyStoreLocation != null && keyStoreLocation.startsWith("classpath:")) {
    final String fileSystemLocation = moveCertToFileSystem(keyStoreLocation, this.certificateStoreDirectory);
    // Overriding the value with absolute filesystem path.
    this.configuration.put("schema.registry.ssl.keystore.location", fileSystemLocation);
  }
}

private String moveCertToFileSystem(String classpathLocation, String fileSystemLocation) throws IOException {
  File targetFile;
  final String tempDir = System.getProperty("java.io.tmpdir");
@@ -210,6 +210,12 @@ public class KafkaConsumerProperties {
 */
private boolean txCommitRecovered = true;

/**
 * CommonErrorHandler bean name per consumer binding.
 * @since 3.2
 */
private String commonErrorHandlerBeanName;

/**
 * @return if each record needs to be acknowledged.
 *
@@ -529,4 +535,11 @@ public class KafkaConsumerProperties {
  this.txCommitRecovered = txCommitRecovered;
}

public String getCommonErrorHandlerBeanName() {
  return commonErrorHandlerBeanName;
}

public void setCommonErrorHandlerBeanName(String commonErrorHandlerBeanName) {
  this.commonErrorHandlerBeanName = commonErrorHandlerBeanName;
}
}
@@ -1,5 +1,5 @@
/*
 * Copyright 2018-2021 the original author or authors.
 * Copyright 2018-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -142,22 +142,4 @@ public class KafkaBinderConfigurationPropertiesTest {
  assertThat(configuration.get("ssl.keystore.location")).isEqualTo(
      Paths.get(Files.currentFolder().toString(), "target", "testclient.keystore").toString());
}

@Test
public void testCertificateFilesAreMovedForSchemaRegistryConfiguration() {
  KafkaProperties kafkaProperties = new KafkaProperties();
  KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
      new KafkaBinderConfigurationProperties(kafkaProperties);
  final Map<String, String> configuration = kafkaBinderConfigurationProperties.getConfiguration();
  configuration.put("schema.registry.ssl.truststore.location", "classpath:testclient.truststore");
  configuration.put("schema.registry.ssl.keystore.location", "classpath:testclient.keystore");
  kafkaBinderConfigurationProperties.setCertificateStoreDirectory("target");

  kafkaBinderConfigurationProperties.getKafkaConnectionString();

  assertThat(configuration.get("schema.registry.ssl.truststore.location")).isEqualTo(
      Paths.get(Files.currentFolder().toString(), "target", "testclient.truststore").toString());
  assertThat(configuration.get("schema.registry.ssl.keystore.location")).isEqualTo(
      Paths.get(Files.currentFolder().toString(), "target", "testclient.keystore").toString());
}
}
@@ -10,7 +10,7 @@
<parent>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
  <version>3.1.7-SNAPSHOT</version>
  <version>3.2.0-M2</version>
</parent>

<properties>
@@ -1,5 +1,5 @@
/*
 * Copyright 2019-2022 the original author or authors.
 * Copyright 2019-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -19,9 +19,7 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import java.util.regex.Pattern;

import org.apache.commons.logging.Log;
@@ -42,10 +40,9 @@ import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.processor.api.RecordMetadata;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
@@ -164,6 +161,8 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
    bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
@@ -176,6 +175,8 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
    bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
}
@@ -448,15 +449,12 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//See this issue for more context: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1003
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes())) {
  AtomicBoolean matched = new AtomicBoolean();
  AtomicReference<String> topicObject = new AtomicReference<>();
  AtomicReference<Headers> headersObject = new AtomicReference<>();
  // Processor to retrieve the header value.
  stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched, topicObject, headersObject));
  stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched));
  // Branching based on event type match.
  final KStream<?, ?>[] branch = stream.branch((key, value) -> matched.getAndSet(false));
  // Deserialize if we have a branch from above.
  final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(
      topicObject.get(), headersObject.get(), ((Bytes) value).get()));
  final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(null, ((Bytes) value).get()));
  return getkStream(bindingProperties, deserializedKStream, nativeDecoding);
}
return getkStream(bindingProperties, stream, nativeDecoding);
@@ -551,18 +549,14 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
    consumed);
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes())) {
  AtomicBoolean matched = new AtomicBoolean();
  AtomicReference<String> topicObject = new AtomicReference<>();
  AtomicReference<Headers> headersObject = new AtomicReference<>();

  final KStream<?, ?> stream = kTable.toStream();

  // Processor to retrieve the header value.
  stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched, topicObject, headersObject));
  stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched));
  // Branching based on event type match.
  final KStream<?, ?>[] branch = stream.branch((key, value) -> matched.getAndSet(false));
  // Deserialize if we have a branch from above.
  final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(
      topicObject.get(), headersObject.get(), ((Bytes) value).get()));
  final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(null, ((Bytes) value).get()));

  return deserializedKStream.toTable();
}
@@ -587,27 +581,19 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
  return consumed;
}

private <K, V> Processor<K, V, Void, Void> eventTypeProcessor(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
    AtomicBoolean matched, AtomicReference<String> topicObject, AtomicReference<Headers> headersObject) {
  return new Processor<K, V, Void, Void>() {
private <K, V> Processor<K, V> eventTypeProcessor(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, AtomicBoolean matched) {
  return new Processor() {

    org.apache.kafka.streams.processor.api.ProcessorContext<?, ?> context;
    ProcessorContext context;

    @Override
    public void init(org.apache.kafka.streams.processor.api.ProcessorContext<Void, Void> context) {
      Processor.super.init(context);
    public void init(ProcessorContext context) {
      this.context = context;
    }

    @Override
    public void process(Record<K, V> record) {
      final Headers headers = record.headers();
      headersObject.set(headers);
      final Optional<RecordMetadata> optional = this.context.recordMetadata();
      if (optional.isPresent()) {
        final RecordMetadata recordMetadata = optional.get();
        topicObject.set(recordMetadata.topic());
      }
    public void process(Object key, Object value) {
      final Headers headers = this.context.headers();
      final Iterable<Header> eventTypeHeader = headers.headers(kafkaStreamsConsumerProperties.getEventTypeHeaderKey());
      if (eventTypeHeader != null && eventTypeHeader.iterator().hasNext()) {
        String eventTypeFromHeader = new String(eventTypeHeader.iterator().next().value());
@@ -17,7 +17,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.GlobalKTable;

import org.springframework.cloud.stream.binder.AbstractBinder;
@@ -106,12 +105,6 @@ public class GlobalKTableBinder extends
if (!streamsBuilderFactoryBean.isRunning()) {
  super.start();
  GlobalKTableBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
  //If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
  //See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
  final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
  if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
    kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
  }
}
}
@@ -122,10 +115,6 @@ public class GlobalKTableBinder extends
super.stop();
GlobalKTableBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
GlobalKTableBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
    (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
}
}
};
@@ -1,5 +1,5 @@
/*
 * Copyright 2018-2022 the original author or authors.
 * Copyright 2018-2020 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -30,8 +30,6 @@ import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
@@ -53,7 +51,6 @@ import org.springframework.util.StringUtils;
 * @author Soby Chacko
 * @author Renwei Han
 * @author Serhii Siryi
 * @author Nico Pommerening
 * @since 2.1.0
 */
public class InteractiveQueryService {
@@ -84,33 +81,22 @@ public class InteractiveQueryService {
 */
public <T> T getQueryableStore(String storeName, QueryableStoreType<T> storeType) {

  final RetryTemplate retryTemplate = getRetryTemplate();
  RetryTemplate retryTemplate = new RetryTemplate();

  KafkaStreams contextSpecificKafkaStreams = getThreadContextSpecificKafkaStreams();
  KafkaStreamsBinderConfigurationProperties.StateStoreRetry stateStoreRetry = this.binderConfigurationProperties.getStateStoreRetry();
  RetryPolicy retryPolicy = new SimpleRetryPolicy(stateStoreRetry.getMaxAttempts());
  FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
  backOffPolicy.setBackOffPeriod(stateStoreRetry.getBackoffPeriod());

  retryTemplate.setBackOffPolicy(backOffPolicy);
  retryTemplate.setRetryPolicy(retryPolicy);

  return retryTemplate.execute(context -> {
    T store = null;
    Throwable throwable = null;
    if (contextSpecificKafkaStreams != null) {
      try {
        store = contextSpecificKafkaStreams.store(
            StoreQueryParameters.fromNameAndType(
                storeName, storeType));
      }
      catch (InvalidStateStoreException e) {
        // pass through..
        throwable = e;
      }
    }
    if (store != null) {
      return store;
    }
    else if (contextSpecificKafkaStreams != null) {
      LOG.warn("Store " + storeName
          + " could not be found in Streams context, falling back to all known Streams instances");
    }
    final Set<KafkaStreams> kafkaStreams = kafkaStreamsRegistry.getKafkaStreams();

    final Set<KafkaStreams> kafkaStreams = InteractiveQueryService.this.kafkaStreamsRegistry.getKafkaStreams();
    final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
    Throwable throwable = null;
    while (iterator.hasNext()) {
      try {
        store = iterator.next().store(storeName, storeType);
@@ -123,36 +109,10 @@ public class InteractiveQueryService {
    if (store != null) {
      return store;
    }
    throw new IllegalStateException(
        "Error when retrieving state store: " + storeName,
        throwable);
    throw new IllegalStateException("Error when retrieving state store: " + storeName, throwable);
  });
}

/**
 * Retrieves the current {@link KafkaStreams} context if executing Thread is created by a Streams App (contains a matching application id in Thread's name).
 *
 * @return KafkaStreams instance associated with Thread
 */
private KafkaStreams getThreadContextSpecificKafkaStreams() {
  return this.kafkaStreamsRegistry.getKafkaStreams().stream()
      .filter(this::filterByThreadName).findAny().orElse(null);
}

/**
 * Checks if the supplied {@link KafkaStreams} instance belongs to the calling Thread by matching the Thread's name with the Streams Application Id.
 *
 * @param streams {@link KafkaStreams} instance to filter
 * @return true if Streams Instance is associated with Thread
 */
private boolean filterByThreadName(KafkaStreams streams) {
  String applicationId = kafkaStreamsRegistry.streamBuilderFactoryBean(
      streams).getStreamsConfiguration()
      .getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
  // TODO: is there some better way to find out if a Stream App created the Thread?
  return Thread.currentThread().getName().contains(applicationId);
}

/**
 * Gets the current {@link HostInfo} that the calling kafka streams application is
 * running on.
@@ -181,7 +141,7 @@
 * through all the consumer instances under the same application id and retrieves the
 * proper host.
 *
 * Note that the end user applications must provide `application.server` as a
 * Note that the end user applications must provide `applicaiton.server` as a
 * configuration property for all the application instances when calling this method.
 * If this is not available, then null maybe returned.
 * @param <K> generic type for key
@@ -191,40 +151,11 @@
 * @return the {@link HostInfo} where the key for the provided store is hosted currently
 */
public <K> HostInfo getHostInfo(String store, K key, Serializer<K> serializer) {
  final RetryTemplate retryTemplate = getRetryTemplate();

  return retryTemplate.execute(context -> {
    Throwable throwable = null;
    try {
      final KeyQueryMetadata keyQueryMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
          .stream()
          .map((k) -> Optional.ofNullable(k.queryMetadataForKey(store, key, serializer)))
          .filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
      if (keyQueryMetadata != null) {
        return keyQueryMetadata.activeHost();
      }
    }
    catch (Exception e) {
      throwable = e;
    }
    throw new IllegalStateException(
        "Error when retrieving state store", throwable != null ? throwable : new Throwable("Kafka Streams is not ready."));
  });
}

private RetryTemplate getRetryTemplate() {
  RetryTemplate retryTemplate = new RetryTemplate();

  KafkaStreamsBinderConfigurationProperties.StateStoreRetry stateStoreRetry = this.binderConfigurationProperties.getStateStoreRetry();
  RetryPolicy retryPolicy = new SimpleRetryPolicy(stateStoreRetry.getMaxAttempts());
  FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
  backOffPolicy.setBackOffPeriod(stateStoreRetry.getBackoffPeriod());

  retryTemplate.setBackOffPolicy(backOffPolicy);
  retryTemplate.setRetryPolicy(retryPolicy);

  return retryTemplate;
  final KeyQueryMetadata keyQueryMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
      .stream()
      .map((k) -> Optional.ofNullable(k.queryMetadataForKey(store, key, serializer)))
      .filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
  return keyQueryMetadata != null ? keyQueryMetadata.getActiveHost() : null;
}

/**
@@ -23,7 +23,6 @@ import org.apache.commons.logging.LogFactory;
|
||||
import org.apache.kafka.common.serialization.Serde;
|
||||
import org.apache.kafka.common.serialization.Serdes;
|
||||
import org.apache.kafka.streams.KafkaStreams;
|
||||
import org.apache.kafka.streams.StreamsConfig;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.apache.kafka.streams.kstream.Produced;
|
||||
import org.apache.kafka.streams.processor.StreamPartitioner;
|
||||
@@ -135,12 +134,6 @@ class KStreamBinder extends
|
||||
if (!streamsBuilderFactoryBean.isRunning()) {
|
||||
super.start();
|
||||
KStreamBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
|
||||
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
|
||||
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
|
||||
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -151,10 +144,6 @@ class KStreamBinder extends
|
||||
super.stop();
|
||||
KStreamBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
|
||||
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
|
||||
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
KStreamBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
|
||||
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
|
||||
}
|
||||
}
|
||||
};
|
||||
@@ -210,12 +199,6 @@ class KStreamBinder extends
|
||||
if (!streamsBuilderFactoryBean.isRunning()) {
|
||||
super.start();
|
||||
KStreamBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
|
||||
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
|
||||
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
|
||||
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -226,10 +209,6 @@ class KStreamBinder extends
|
||||
super.stop();
|
||||
KStreamBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
|
||||
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor
|
||||
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
KStreamBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
|
||||
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
@@ -17,7 +17,6 @@
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import org.apache.kafka.streams.KafkaStreams;
|
||||
import org.apache.kafka.streams.StreamsConfig;
|
||||
import org.apache.kafka.streams.kstream.KTable;
|
||||
|
||||
import org.springframework.cloud.stream.binder.AbstractBinder;
|
||||
@@ -107,12 +106,6 @@ class KTableBinder extends
|
||||
if (!streamsBuilderFactoryBean.isRunning()) {
|
||||
super.start();
|
||||
KTableBinder.this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
|
||||
//If we cached the previous KafkaStreams object (from a binding stop on the actuator), remove it.
|
||||
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
if (kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().containsKey(applicationId)) {
|
||||
kafkaStreamsBindingInformationCatalogue.removePreviousKafkaStreamsForApplicationId(applicationId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -123,10 +116,6 @@ class KTableBinder extends
|
||||
super.stop();
|
||||
KTableBinder.this.kafkaStreamsRegistry.unregisterKafkaStreams(kafkaStreams);
|
||||
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
//Caching the stopped KafkaStreams for health indicator purposes on the underlying processor.
|
||||
//See this issue for more details: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
KTableBinder.this.kafkaStreamsBindingInformationCatalogue.addPreviousKafkaStreamsForApplicationId(
|
||||
(String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG), kafkaStreams);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
@@ -19,7 +19,6 @@ package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
import java.lang.reflect.Method;
|
||||
import java.time.Duration;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
@@ -119,12 +118,7 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
|
||||
}
|
||||
else {
|
||||
boolean up = true;
|
||||
final Set<KafkaStreams> kafkaStreams = kafkaStreamsRegistry.getKafkaStreams();
|
||||
Set<KafkaStreams> allKafkaStreams = new HashSet<>(kafkaStreams);
|
||||
if (this.configurationProperties.isIncludeStoppedProcessorsForHealthCheck()) {
|
||||
allKafkaStreams.addAll(kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams().values());
|
||||
}
|
||||
for (KafkaStreams kStream : allKafkaStreams) {
|
||||
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
|
||||
if (isKafkaStreams25) {
|
||||
up &= kStream.state().isRunningOrRebalancing();
|
||||
}
|
||||
@@ -162,8 +156,7 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
|
||||
}
|
||||
|
||||
if (isRunningResult) {
|
||||
final Set<ThreadMetadata> threadMetadata = kafkaStreams.localThreadsMetadata();
|
||||
for (ThreadMetadata metadata : threadMetadata) {
|
||||
for (ThreadMetadata metadata : kafkaStreams.localThreadsMetadata()) {
|
||||
perAppdIdDetails.put("threadName", metadata.threadName());
|
||||
perAppdIdDetails.put("threadState", metadata.threadState());
|
||||
perAppdIdDetails.put("adminClientId", metadata.adminClientId());
|
||||
@@ -179,19 +172,8 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
|
||||
}
|
||||
else {
|
||||
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
|
||||
String applicationId = null;
|
||||
if (streamsBuilderFactoryBean != null) {
|
||||
applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
}
|
||||
else {
|
||||
final Map<String, KafkaStreams> stoppedKafkaStreamsPerBinding = kafkaStreamsBindingInformationCatalogue.getStoppedKafkaStreams();
|
||||
for (String appId : stoppedKafkaStreamsPerBinding.keySet()) {
|
||||
if (stoppedKafkaStreamsPerBinding.get(appId).equals(kafkaStreams)) {
|
||||
applicationId = appId;
|
||||
}
|
||||
}
|
||||
}
|
||||
details.put(applicationId, String.format("The processor with application.id %s is down. Current state: %s", applicationId, kafkaStreams.state()));
|
||||
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
details.put(applicationId, String.format("The processor with application.id %s is down", applicationId));
|
||||
}
|
||||
return details;
|
||||
}
|
||||
|
||||
@@ -412,8 +412,8 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
|
||||
KafkaStreamsBindingInformationCatalogue catalogue,
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry,
|
||||
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
|
||||
@Nullable KafkaStreamsMicrometerListener listener) {
|
||||
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener);
|
||||
@Nullable KafkaStreamsMicrometerListener listener, KafkaProperties kafkaProperties) {
|
||||
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener, kafkaProperties);
|
||||
}
|
||||
|
||||
@Bean
|
||||
|
||||
@@ -26,7 +26,6 @@ import java.util.concurrent.ConcurrentHashMap;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import org.apache.kafka.common.serialization.Serde;
|
||||
import org.apache.kafka.streams.KafkaStreams;
|
||||
import org.apache.kafka.streams.StreamsConfig;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
|
||||
@@ -55,14 +54,14 @@ public class KafkaStreamsBindingInformationCatalogue {
|
||||
|
||||
private final Map<String, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanPerBinding = new HashMap<>();
|
||||
|
||||
private final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> consumerPropertiesPerSbfb = new HashMap<>();
|
||||
|
||||
private final Map<Object, ResolvableType> outboundKStreamResolvables = new HashMap<>();
|
||||
|
||||
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
|
||||
|
||||
private final Map<Object, String> bindingNamesPerTarget = new HashMap<>();
|
||||
|
||||
private final Map<String, KafkaStreams> previousKafkaStreamsPerApplicationId = new HashMap<>();
|
||||
|
||||
private final Map<StreamsBuilderFactoryBean, List<ProducerFactory<byte[], byte[]>>> dlqProducerFactories = new HashMap<>();
|
||||
|
||||
/**
|
||||
@@ -140,11 +139,19 @@ public class KafkaStreamsBindingInformationCatalogue {
|
||||
this.streamsBuilderFactoryBeanPerBinding.put(binding, streamsBuilderFactoryBean);
|
||||
}
|
||||
|
||||
void addConsumerPropertiesPerSbfb(StreamsBuilderFactoryBean streamsBuilderFactoryBean, ConsumerProperties consumerProperties) {
|
||||
this.consumerPropertiesPerSbfb.computeIfAbsent(streamsBuilderFactoryBean, k -> new ArrayList<>());
|
||||
this.consumerPropertiesPerSbfb.get(streamsBuilderFactoryBean).add(consumerProperties);
|
||||
}
|
||||
|
||||
public Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> getConsumerPropertiesPerSbfb() {
|
||||
return this.consumerPropertiesPerSbfb;
|
||||
}
|
||||
|
||||
Map<String, StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeanPerBinding() {
|
||||
return this.streamsBuilderFactoryBeanPerBinding;
|
||||
}
|
||||
|
||||
|
||||
void addOutboundKStreamResolvable(Object key, ResolvableType outboundResolvable) {
|
||||
this.outboundKStreamResolvables.put(key, outboundResolvable);
|
||||
}
|
||||
@@ -206,35 +213,4 @@ public class KafkaStreamsBindingInformationCatalogue {
|
||||
}
|
||||
producerFactories.add(producerFactory);
|
||||
}
|
||||
|
||||
/**
|
||||
* Caching the previous KafkaStreams for the application.id when binding is stopped through actuator.
|
||||
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
*
|
||||
* @param applicationId application.id
|
||||
* @param kafkaStreams {@link KafkaStreams} object
|
||||
*/
|
||||
public void addPreviousKafkaStreamsForApplicationId(String applicationId, KafkaStreams kafkaStreams) {
|
||||
this.previousKafkaStreamsPerApplicationId.put(applicationId, kafkaStreams);
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove the previously cached KafkaStreams object.
|
||||
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
*
|
||||
* @param applicationId application.id
|
||||
*/
|
||||
public void removePreviousKafkaStreamsForApplicationId(String applicationId) {
|
||||
this.previousKafkaStreamsPerApplicationId.remove(applicationId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all stopped KafkaStreams objects through actuator binding stop.
|
||||
* See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1165
|
||||
*
|
||||
* @return stopped KafkaStreams objects map
|
||||
*/
|
||||
public Map<String, KafkaStreams> getStoppedKafkaStreams() {
|
||||
return this.previousKafkaStreamsPerApplicationId;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -529,6 +529,8 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
|
||||
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
|
||||
|
||||
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
|
||||
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
|
||||
bindingServiceProperties.getConsumerProperties(input));
|
||||
|
||||
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
|
||||
final Class<?> valueClass =
|
||||
|
||||
@@ -17,12 +17,12 @@
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.ConcurrentHashMap;
|
||||
|
||||
import org.apache.kafka.streams.KafkaStreams;
|
||||
import org.apache.kafka.streams.StreamsConfig;
|
||||
@@ -37,9 +37,9 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
*/
|
||||
public class KafkaStreamsRegistry {
|
||||
|
||||
private Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new HashMap<>();
|
||||
private final Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new ConcurrentHashMap<>();
|
||||
|
||||
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();
|
||||
private final Set<KafkaStreams> kafkaStreams = ConcurrentHashMap.newKeySet();
|
||||
|
||||
Set<KafkaStreams> getKafkaStreams() {
|
||||
Set<KafkaStreams> currentlyRunningKafkaStreams = new HashSet<>();
|
||||
|
||||
@@ -320,6 +320,9 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
|
||||
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);
|
||||
|
||||
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(inboundName, streamsBuilderFactoryBean);
|
||||
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
|
||||
bindingServiceProperties.getConsumerProperties(inboundName));
|
||||
|
||||
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
|
||||
if (streamListenerParameterAdapter.supports(stream.getClass(),
|
||||
methodParameter)) {
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright 2018-2021 the original author or authors.
|
||||
* Copyright 2018-2019 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -18,7 +18,6 @@ package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.lang.reflect.Method;
|
||||
import java.util.Arrays;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
import java.util.UUID;
|
||||
@@ -70,7 +69,6 @@ import org.springframework.util.StringUtils;
|
||||
*
|
||||
* @author Soby Chacko
|
||||
* @author Lei Chen
|
||||
* @author Eduard Domínguez
|
||||
*/
|
||||
public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
|
||||
@@ -98,14 +96,14 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
KafkaStreamsConsumerProperties extendedConsumerProperties) {
|
||||
String keySerdeString = extendedConsumerProperties.getKeySerde();
|
||||
|
||||
return getKeySerde(keySerdeString, extendedConsumerProperties.getConfiguration());
|
||||
return getKeySerde(keySerdeString);
|
||||
}
|
||||
|
||||
public Serde<?> getInboundKeySerde(
|
||||
KafkaStreamsConsumerProperties extendedConsumerProperties, ResolvableType resolvableType) {
|
||||
String keySerdeString = extendedConsumerProperties.getKeySerde();
|
||||
|
||||
return getKeySerde(keySerdeString, resolvableType, extendedConsumerProperties.getConfiguration());
|
||||
return getKeySerde(keySerdeString, resolvableType);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -122,7 +120,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
String valueSerdeString = extendedConsumerProperties.getValueSerde();
|
||||
try {
|
||||
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
|
||||
valueSerde = getValueSerde(valueSerdeString, extendedConsumerProperties.getConfiguration());
|
||||
valueSerde = getValueSerde(valueSerdeString);
|
||||
}
|
||||
else {
|
||||
valueSerde = Serdes.ByteArray();
|
||||
@@ -142,7 +140,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
String valueSerdeString = extendedConsumerProperties.getValueSerde();
|
||||
try {
|
||||
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
|
||||
valueSerde = getValueSerde(valueSerdeString, resolvableType, extendedConsumerProperties.getConfiguration());
|
||||
valueSerde = getValueSerde(valueSerdeString, resolvableType);
|
||||
}
|
||||
else {
|
||||
valueSerde = Serdes.ByteArray();
|
||||
@@ -160,11 +158,11 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
* @return configured {@link Serde} for the outbound key.
|
||||
*/
|
||||
public Serde<?> getOuboundKeySerde(KafkaStreamsProducerProperties properties) {
|
||||
return getKeySerde(properties.getKeySerde(), properties.getConfiguration());
|
||||
return getKeySerde(properties.getKeySerde());
|
||||
}
|
||||
|
||||
public Serde<?> getOuboundKeySerde(KafkaStreamsProducerProperties properties, ResolvableType resolvableType) {
|
||||
return getKeySerde(properties.getKeySerde(), resolvableType, properties.getConfiguration());
|
||||
return getKeySerde(properties.getKeySerde(), resolvableType);
|
||||
}
|
||||
|
||||
|
||||
@@ -181,7 +179,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
try {
|
||||
if (producerProperties.isUseNativeEncoding()) {
|
||||
valueSerde = getValueSerde(
|
||||
kafkaStreamsProducerProperties.getValueSerde(), kafkaStreamsProducerProperties.getConfiguration());
|
||||
kafkaStreamsProducerProperties.getValueSerde());
|
||||
}
|
||||
else {
|
||||
valueSerde = Serdes.ByteArray();
|
||||
@@ -199,7 +197,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
try {
|
||||
if (producerProperties.isUseNativeEncoding()) {
|
||||
valueSerde = getValueSerde(
|
||||
kafkaStreamsProducerProperties.getValueSerde(), resolvableType, kafkaStreamsProducerProperties.getConfiguration());
|
||||
kafkaStreamsProducerProperties.getValueSerde(), resolvableType);
|
||||
}
|
||||
else {
|
||||
valueSerde = Serdes.ByteArray();
|
||||
@@ -217,7 +215,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
* @return {@link Serde} for the state store key.
|
||||
*/
|
||||
public Serde<?> getStateStoreKeySerde(String keySerdeString) {
|
||||
return getKeySerde(keySerdeString, (Map<String, ?>) null);
|
||||
return getKeySerde(keySerdeString);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -227,14 +225,14 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
*/
|
||||
public Serde<?> getStateStoreValueSerde(String valueSerdeString) {
|
||||
try {
|
||||
return getValueSerde(valueSerdeString, (Map<String, ?>) null);
|
||||
return getValueSerde(valueSerdeString);
|
||||
}
|
||||
catch (ClassNotFoundException ex) {
|
||||
throw new IllegalStateException("Serde class not found: ", ex);
|
||||
}
|
||||
}
|
||||
|
||||
private Serde<?> getKeySerde(String keySerdeString, Map<String, ?> extendedConfiguration) {
|
||||
private Serde<?> getKeySerde(String keySerdeString) {
|
||||
Serde<?> keySerde;
|
||||
try {
|
||||
if (StringUtils.hasText(keySerdeString)) {
|
||||
@@ -243,7 +241,8 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
else {
|
||||
keySerde = getFallbackSerde("default.key.serde");
|
||||
}
|
||||
keySerde.configure(combineStreamConfigProperties(extendedConfiguration), true);
|
||||
keySerde.configure(this.streamConfigGlobalProperties, true);
|
||||
|
||||
}
|
||||
catch (ClassNotFoundException ex) {
|
||||
throw new IllegalStateException("Serde class not found: ", ex);
|
||||
@@ -251,7 +250,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
return keySerde;
|
||||
}
|
||||
|
||||
private Serde<?> getKeySerde(String keySerdeString, ResolvableType resolvableType, Map<String, ?> extendedConfiguration) {
|
||||
private Serde<?> getKeySerde(String keySerdeString, ResolvableType resolvableType) {
|
||||
Serde<?> keySerde = null;
|
||||
try {
|
||||
if (StringUtils.hasText(keySerdeString)) {
|
||||
@@ -268,7 +267,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
keySerde = Serdes.ByteArray();
|
||||
}
|
||||
}
|
||||
keySerde.configure(combineStreamConfigProperties(extendedConfiguration), true);
|
||||
keySerde.configure(this.streamConfigGlobalProperties, true);
|
||||
}
|
||||
catch (ClassNotFoundException ex) {
|
||||
throw new IllegalStateException("Serde class not found: ", ex);
|
||||
@@ -381,7 +380,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
}
|
||||
|
||||
|
||||
private Serde<?> getValueSerde(String valueSerdeString, Map<String, ?> extendedConfiguration)
|
||||
private Serde<?> getValueSerde(String valueSerdeString)
|
||||
throws ClassNotFoundException {
|
||||
Serde<?> valueSerde;
|
||||
if (StringUtils.hasText(valueSerdeString)) {
|
||||
@@ -390,7 +389,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
else {
|
||||
valueSerde = getFallbackSerde("default.value.serde");
|
||||
}
|
||||
valueSerde.configure(combineStreamConfigProperties(extendedConfiguration), false);
|
||||
valueSerde.configure(this.streamConfigGlobalProperties, false);
|
||||
return valueSerde;
|
||||
}
|
||||
|
||||
@@ -404,7 +403,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
}
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
private Serde<?> getValueSerde(String valueSerdeString, ResolvableType resolvableType, Map<String, ?> extendedConfiguration)
|
||||
private Serde<?> getValueSerde(String valueSerdeString, ResolvableType resolvableType)
|
||||
throws ClassNotFoundException {
|
||||
Serde<?> valueSerde = null;
|
||||
if (StringUtils.hasText(valueSerdeString)) {
|
||||
@@ -423,7 +422,7 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
|
||||
valueSerde = Serdes.ByteArray();
|
||||
}
|
||||
}
|
||||
valueSerde.configure(combineStreamConfigProperties(extendedConfiguration), false);
|
||||
valueSerde.configure(streamConfigGlobalProperties, false);
|
||||
return valueSerde;
|
||||
}
|
||||
|
||||
@@ -431,15 +430,4 @@ public class KeyValueSerdeResolver implements ApplicationContextAware {
	public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
		context = (ConfigurableApplicationContext) applicationContext;
	}

	private Map<String, ?> combineStreamConfigProperties(Map<String, ?> extendedConfiguration) {
		if (extendedConfiguration != null && !extendedConfiguration.isEmpty()) {
			Map<String, Object> streamConfiguration = new HashMap(this.streamConfigGlobalProperties);
			streamConfiguration.putAll(extendedConfiguration);
			return streamConfiguration;
		}
		else {
			return this.streamConfigGlobalProperties;
		}
	}
}
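The combineStreamConfigProperties(...) above merges per-binding consumer/producer configuration into the global streams configuration before Serde#configure is called. A hedged sketch of the kind of per-binding property this enables, written in the style of the tests later in this diff; the binding name, serde class, and JSON trusted-packages value are assumptions.

	// Illustrative only: per-binding configuration now reaches the value Serde's configure() call.
	new SpringApplicationBuilder(WordCountProcessorApplication.class)
			.web(WebApplicationType.NONE)
			.run("--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.valueSerde="
							+ "org.springframework.kafka.support.serializer.JsonSerde",
					"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.configuration.spring.json.trusted.packages=com.example");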
|
||||
|
||||
@@ -16,9 +16,15 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
|
||||
|
||||
import org.springframework.beans.factory.DisposableBean;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.cloud.stream.binder.ConsumerProperties;
|
||||
import org.springframework.context.SmartLifecycle;
|
||||
import org.springframework.kafka.KafkaException;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
@@ -38,7 +44,7 @@ import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;
|
||||
*
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
class StreamsBuilderFactoryManager implements SmartLifecycle {
|
||||
public class StreamsBuilderFactoryManager implements SmartLifecycle {
|
||||
|
||||
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
|
||||
|
||||
@@ -50,19 +56,23 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
|
||||
|
||||
private volatile boolean running;
|
||||
|
||||
private final KafkaProperties kafkaProperties;
|
||||
|
||||
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry,
|
||||
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
|
||||
KafkaStreamsMicrometerListener listener) {
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry,
|
||||
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
|
||||
KafkaStreamsMicrometerListener listener,
|
||||
KafkaProperties kafkaProperties) {
|
||||
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
|
||||
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
|
||||
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
|
||||
this.listener = listener;
|
||||
this.kafkaProperties = kafkaProperties;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean isAutoStartup() {
|
||||
return true;
|
||||
return this.kafkaProperties == null || this.kafkaProperties.getStreams().isAutoStartup();
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -79,13 +89,24 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
|
||||
try {
|
||||
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
|
||||
.getStreamsBuilderFactoryBeans();
|
||||
int n = 0;
|
||||
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
|
||||
if (this.listener != null) {
|
||||
streamsBuilderFactoryBean.addListener(this.listener);
|
||||
}
|
||||
streamsBuilderFactoryBean.start();
|
||||
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
|
||||
				// By default, we shut down the client if there is an uncaught exception in the application.
				// Users can override this by customizing SBFB. See this issue for more details:
				// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
				streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(exception ->
						StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
				// Starting the stream.
				final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> bindingServicePropertiesPerSbfb =
						this.kafkaStreamsBindingInformationCatalogue.getConsumerPropertiesPerSbfb();
				final List<ConsumerProperties> consumerProperties = bindingServicePropertiesPerSbfb.get(streamsBuilderFactoryBean);
				final boolean autoStartupDisabledOnAtLeastOneConsumerBinding = consumerProperties.stream().anyMatch(consumerProperties1 -> !consumerProperties1.isAutoStartup());
				if (!autoStartupDisabledOnAtLeastOneConsumerBinding) {
					streamsBuilderFactoryBean.start();
					this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
				}
			}
			if (this.kafkaStreamsBinderMetrics != null) {
				this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
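As the comment above notes, the default SHUTDOWN_CLIENT response can be overridden by customizing the StreamsBuilderFactoryBean. A hedged sketch of one way to do that, assuming spring-kafka's StreamsBuilderFactoryBeanConfigurer hook is available; the REPLACE_THREAD choice and bean name are illustrative.

	// Illustrative only: replacing the binder's default SHUTDOWN_CLIENT response.
	@Bean
	public StreamsBuilderFactoryBeanConfigurer uncaughtExceptionHandlerConfigurer() {
		return factoryBean -> factoryBean.setStreamsUncaughtExceptionHandler(
				exception -> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);
	}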
|
||||
|
||||
@@ -29,7 +29,6 @@ import java.util.function.BiConsumer;
|
||||
import java.util.function.BiFunction;
|
||||
import java.util.function.Consumer;
|
||||
import java.util.function.Function;
|
||||
import java.util.regex.Pattern;
|
||||
import java.util.stream.Collectors;
|
||||
import java.util.stream.Stream;
|
||||
|
||||
@@ -97,7 +96,6 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
|
||||
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)));
|
||||
final List<String> collect = concat.collect(Collectors.toList());
|
||||
collect.removeIf(s -> Arrays.stream(EXCLUDE_FUNCTIONS).anyMatch(t -> t.equals(s)));
|
||||
collect.removeIf(Pattern.compile(".*_registration").asPredicate());
|
||||
|
||||
onlySingleFunction = collect.size() == 1;
|
||||
collect.stream()
|
||||
|
||||
@@ -74,7 +74,6 @@ public class KafkaStreamsBinderConfigurationProperties
|
||||
*/
|
||||
private DeserializationExceptionHandler deserializationExceptionHandler;
|
||||
|
||||
private boolean includeStoppedProcessorsForHealthCheck;
|
||||
|
||||
public Map<String, Functions> getFunctions() {
|
||||
return functions;
|
||||
@@ -128,14 +127,6 @@ public class KafkaStreamsBinderConfigurationProperties
|
||||
this.deserializationExceptionHandler = deserializationExceptionHandler;
|
||||
}
|
||||
|
||||
	public boolean isIncludeStoppedProcessorsForHealthCheck() {
		return includeStoppedProcessorsForHealthCheck;
	}

	public void setIncludeStoppedProcessorsForHealthCheck(boolean includeStoppedProcessorsForHealthCheck) {
		this.includeStoppedProcessorsForHealthCheck = includeStoppedProcessorsForHealthCheck;
	}
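A hedged example of enabling this flag follows; the property path is assumed from the binder's spring.cloud.stream.kafka.streams.binder prefix, and the application class is hypothetical.

	// Illustrative only: count processors stopped via actuator bindings when reporting binder health.
	new SpringApplication(MyStreamsApplication.class).run(
			"--spring.cloud.stream.kafka.streams.binder.includeStoppedProcessorsForHealthCheck=true");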
|
||||
|
||||
public static class StateStoreRetry {
|
||||
|
||||
private int maxAttempts = 1;
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright 2017-2022 the original author or authors.
|
||||
* Copyright 2017-2020 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -18,18 +18,15 @@ package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Properties;
|
||||
|
||||
import org.apache.kafka.clients.consumer.Consumer;
|
||||
import org.apache.kafka.clients.consumer.ConsumerConfig;
|
||||
import org.apache.kafka.clients.consumer.ConsumerRecord;
|
||||
import org.apache.kafka.common.serialization.IntegerSerializer;
|
||||
import org.apache.kafka.common.serialization.Serdes;
|
||||
import org.apache.kafka.common.serialization.StringSerializer;
|
||||
import org.apache.kafka.streams.KafkaStreams;
|
||||
import org.apache.kafka.streams.KeyQueryMetadata;
|
||||
import org.apache.kafka.streams.KeyValue;
|
||||
import org.apache.kafka.streams.StreamsConfig;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.apache.kafka.streams.kstream.Materialized;
|
||||
import org.apache.kafka.streams.kstream.Serialized;
|
||||
@@ -55,6 +52,7 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
|
||||
import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.context.annotation.Bean;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
import org.springframework.kafka.core.CleanupConfig;
|
||||
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
|
||||
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
|
||||
import org.springframework.kafka.core.KafkaTemplate;
|
||||
@@ -70,7 +68,6 @@ import static org.mockito.internal.verification.VerificationModeFactory.times;
|
||||
/**
|
||||
* @author Soby Chacko
|
||||
* @author Gary Russell
|
||||
* @author Nico Pommerening
|
||||
*/
|
||||
public class KafkaStreamsInteractiveQueryIntegrationTests {
|
||||
|
||||
@@ -108,9 +105,6 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
|
||||
kafkaStreamsRegistry.registerKafkaStreams(mock);
|
||||
Mockito.when(mock.isRunning()).thenReturn(true);
|
||||
Properties mockProperties = new Properties();
|
||||
mockProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "fooApp");
|
||||
Mockito.when(mock.getStreamsConfiguration()).thenReturn(mockProperties);
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
|
||||
new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
|
||||
binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);
|
||||
@@ -124,38 +118,10 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
|
||||
catch (Exception ignored) {
|
||||
|
||||
}
|
||||
|
||||
Mockito.verify(mockKafkaStreams, times(3)).store("foo", storeType);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testStateStoreRetrievalRetryForHostInfoService() {
|
||||
StreamsBuilderFactoryBean mock = Mockito.mock(StreamsBuilderFactoryBean.class);
|
||||
KafkaStreams mockKafkaStreams = Mockito.mock(KafkaStreams.class);
|
||||
Mockito.when(mock.getKafkaStreams()).thenReturn(mockKafkaStreams);
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
|
||||
kafkaStreamsRegistry.registerKafkaStreams(mock);
|
||||
Mockito.when(mock.isRunning()).thenReturn(true);
|
||||
Properties mockProperties = new Properties();
|
||||
mockProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "foobarApp-123");
|
||||
Mockito.when(mock.getStreamsConfiguration()).thenReturn(mockProperties);
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
|
||||
new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
|
||||
binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);
|
||||
InteractiveQueryService interactiveQueryService = new InteractiveQueryService(kafkaStreamsRegistry,
|
||||
binderConfigurationProperties);
|
||||
|
||||
QueryableStoreType<ReadOnlyKeyValueStore<Object, Object>> storeType = QueryableStoreTypes.keyValueStore();
|
||||
final StringSerializer serializer = new StringSerializer();
|
||||
try {
|
||||
interactiveQueryService.getHostInfo("foo", "fooKey", serializer);
|
||||
}
|
||||
catch (Exception ignored) {
|
||||
|
||||
}
|
||||
Mockito.verify(mockKafkaStreams, times(3))
|
||||
.queryMetadataForKey("foo", "fooKey", serializer);
|
||||
}
|
||||
|
||||
@Test
|
||||
@Ignore
|
||||
public void testKstreamBinderWithPojoInputAndStringOuput() throws Exception {
|
||||
@@ -257,6 +223,11 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
|
||||
return new Foo(interactiveQueryService);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public CleanupConfig cleanupConfig() {
|
||||
return new CleanupConfig(false, true);
|
||||
}
|
||||
|
||||
static class Foo {
|
||||
|
||||
InteractiveQueryService interactiveQueryService;
|
||||
|
||||
@@ -20,9 +20,6 @@ import java.util.Map;
|
||||
import java.util.Properties;
|
||||
import java.util.function.Consumer;
|
||||
|
||||
import com.fasterxml.jackson.databind.JavaType;
|
||||
import com.fasterxml.jackson.databind.type.TypeFactory;
|
||||
import org.apache.kafka.common.header.Headers;
|
||||
import org.apache.kafka.common.security.JaasUtils;
|
||||
import org.apache.kafka.streams.kstream.GlobalKTable;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
@@ -31,22 +28,18 @@ import org.junit.Before;
|
||||
import org.junit.ClassRule;
|
||||
import org.junit.Test;
|
||||
|
||||
import org.springframework.beans.DirectFieldAccessor;
|
||||
import org.springframework.boot.WebApplicationType;
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
import org.springframework.boot.builder.SpringApplicationBuilder;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.KeyValueSerdeResolver;
|
||||
import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.context.annotation.Bean;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
import org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper;
|
||||
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
|
||||
|
||||
import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
|
||||
|
||||
/**
|
||||
* @author Soby Chacko
|
||||
* @author Eduard Domínguez
|
||||
*/
|
||||
public class KafkaStreamsBinderBootstrapTest {
|
||||
|
||||
@@ -118,7 +111,7 @@ public class KafkaStreamsBinderBootstrapTest {
|
||||
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.application-id"
|
||||
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foo",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.configuration.spring.json.value.type.method=" + this.getClass().getName() + ".determineType",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.configuration.spring.json.value.type.method=com.test.MyClass",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input3-in-0.consumer.application-id"
|
||||
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foobar",
|
||||
"--spring.cloud.stream.kafka.streams.binder.brokers="
|
||||
@@ -141,28 +134,10 @@ public class KafkaStreamsBinderBootstrapTest {
|
||||
final StreamsBuilderFactoryBean input3SBFB = applicationContext.getBean("&stream-builder-input3", StreamsBuilderFactoryBean.class);
|
||||
final Properties streamsConfiguration3 = input3SBFB.getStreamsConfiguration();
|
||||
assertThat(streamsConfiguration3.containsKey("spring.json.value.type.method")).isFalse();
|
||||
applicationContext.getBean(KeyValueSerdeResolver.class);
|
||||
|
||||
String configuredSerdeTypeResolver = (String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
|
||||
.getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeResolver.arg$2");
|
||||
|
||||
assertThat(this.getClass().getName() + ".determineType").isEqualTo(configuredSerdeTypeResolver);
|
||||
|
||||
String configuredKeyDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
|
||||
.getPropertyValue("taskTopology.processorNodes[0].keyDeserializer.typeMapper.classIdFieldName"));
|
||||
assertThat(DefaultJackson2JavaTypeMapper.KEY_DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredKeyDeserializerFieldName);
|
||||
|
||||
String configuredValueDeserializerFieldName = ((String) new DirectFieldAccessor(input2SBFB.getKafkaStreams())
|
||||
.getPropertyValue("taskTopology.processorNodes[0].valDeserializer.typeMapper.classIdFieldName"));
|
||||
assertThat(DefaultJackson2JavaTypeMapper.DEFAULT_CLASSID_FIELD_NAME).isEqualTo(configuredValueDeserializerFieldName);
|
||||
|
||||
applicationContext.close();
|
||||
}
|
||||
|
||||
public static JavaType determineType(byte[] data, Headers headers) {
|
||||
return TypeFactory.defaultInstance().constructParametricType(Map.class, String.class, String.class);
|
||||
}
|
||||
|
||||
@SpringBootApplication
|
||||
static class SimpleKafkaStreamsApplication {
|
||||
|
||||
@@ -174,7 +149,7 @@ public class KafkaStreamsBinderBootstrapTest {
|
||||
}
|
||||
|
||||
@Bean
|
||||
public Consumer<KTable<Map<String, String>, Map<String, String>>> input2() {
|
||||
public Consumer<KTable<Object, String>> input2() {
|
||||
return s -> {
|
||||
// No-op consumer
|
||||
};
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright 2019-2019 the original author or authors.
|
||||
* Copyright 2019-2021 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -16,6 +16,7 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.function;
|
||||
|
||||
import java.time.Duration;
|
||||
import java.util.Arrays;
|
||||
import java.util.Date;
|
||||
import java.util.Map;
|
||||
@@ -48,6 +49,9 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
|
||||
|
||||
import static org.assertj.core.api.Assertions.assertThat;
|
||||
|
||||
/**
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class KafkaStreamsBinderWordCountBranchesFunctionTests {
|
||||
|
||||
@ClassRule
|
||||
@@ -179,22 +183,30 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
|
||||
public static class WordCountProcessorApplication {
|
||||
|
||||
@Bean
|
||||
@SuppressWarnings("unchecked")
|
||||
@SuppressWarnings({"unchecked"})
|
||||
public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
|
||||
|
||||
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
|
||||
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
|
||||
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
|
||||
|
||||
return input -> input
|
||||
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
|
||||
.groupBy((key, value) -> value)
|
||||
.windowedBy(TimeWindows.of(5000))
|
||||
.count(Materialized.as("WordCounts-branch"))
|
||||
.toStream()
|
||||
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
|
||||
new Date(key.window().start()), new Date(key.window().end()))))
|
||||
.branch(isEnglish, isFrench, isSpanish);
|
||||
return input -> {
|
||||
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
|
||||
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
|
||||
.groupBy((key, value) -> value)
|
||||
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
|
||||
.count(Materialized.as("WordCounts-branch"))
|
||||
.toStream()
|
||||
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
|
||||
new Date(key.window().start()), new Date(key.window().end()))))
|
||||
.split()
|
||||
.branch(isEnglish)
|
||||
.branch(isFrench)
|
||||
.branch(isSpanish)
|
||||
.noDefaultBranch();
|
||||
|
||||
return stringKStreamMap.values().toArray(new KStream[0]);
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -16,6 +16,7 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.function;
|
||||
|
||||
import java.time.Duration;
|
||||
import java.util.Arrays;
|
||||
import java.util.Collection;
|
||||
import java.util.Date;
|
||||
@@ -51,6 +52,7 @@ import org.springframework.cloud.stream.binder.Binding;
|
||||
import org.springframework.cloud.stream.binder.DefaultBinding;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.StreamsBuilderFactoryManager;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;
|
||||
import org.springframework.cloud.stream.binding.InputBindingLifecycle;
|
||||
import org.springframework.cloud.stream.binding.OutputBindingLifecycle;
|
||||
@@ -73,7 +75,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
|
||||
|
||||
@ClassRule
|
||||
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
|
||||
"counts", "counts-1", "counts-2");
|
||||
"counts", "counts-1", "counts-2", "counts-5", "counts-6");
|
||||
|
||||
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
|
||||
|
||||
@@ -89,7 +91,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
|
||||
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
|
||||
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
|
||||
consumer = cf.createConsumer();
|
||||
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2");
|
||||
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2", "counts-5", "counts-6");
|
||||
}
|
||||
|
||||
@AfterClass
|
||||
@@ -176,22 +178,23 @@ public class KafkaStreamsBinderWordCountFunctionTests {
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testKstreamWordCountFunctionWithGeneratedApplicationId() throws Exception {
|
||||
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer() {
|
||||
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
try (ConfigurableApplicationContext context = app.run(
|
||||
"--server.port=0",
|
||||
try (ConfigurableApplicationContext context = app.run("--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
|
||||
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
|
||||
"--spring.cloud.stream.bindings.process-in-0.destination=words-5",
|
||||
"--spring.cloud.stream.bindings.process-out-0.destination=counts-5",
|
||||
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
|
||||
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
|
||||
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
|
||||
receiveAndValidate("words-1", "counts-1");
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.binder.brokers="
|
||||
+ embeddedKafka.getBrokersAsString())) {
|
||||
receiveAndValidate("words-5", "counts-5");
|
||||
}
|
||||
}
|
||||
|
||||
@@ -203,6 +206,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
|
||||
try (ConfigurableApplicationContext context = app.run(
|
||||
"--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.kafka.streams.binder.application-id=testKstreamWordCountFunctionWithCustomProducerStreamPartitioner",
|
||||
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
|
||||
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
|
||||
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
|
||||
@@ -234,6 +238,90 @@ public class KafkaStreamsBinderWordCountFunctionTests {
|
||||
}
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testKstreamBinderAutoStartup() throws Exception {
|
||||
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
try (ConfigurableApplicationContext context = app.run(
|
||||
"--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.kafka.streams.auto-startup=false",
|
||||
"--spring.cloud.stream.bindings.process-in-0.destination=words-3",
|
||||
"--spring.cloud.stream.bindings.process-out-0.destination=counts-3",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
|
||||
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
|
||||
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
|
||||
final StreamsBuilderFactoryManager streamsBuilderFactoryManager = context.getBean(StreamsBuilderFactoryManager.class);
|
||||
assertThat(streamsBuilderFactoryManager.isAutoStartup()).isFalse();
|
||||
assertThat(streamsBuilderFactoryManager.isRunning()).isFalse();
|
||||
}
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testKstreamIndividualBindingAutoStartup() throws Exception {
|
||||
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
try (ConfigurableApplicationContext context = app.run(
|
||||
"--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.process-in-0.destination=words-4",
|
||||
"--spring.cloud.stream.bindings.process-in-0.consumer.auto-startup=false",
|
||||
"--spring.cloud.stream.bindings.process-out-0.destination=counts-4",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
|
||||
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
|
||||
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
|
||||
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean(StreamsBuilderFactoryBean.class);
|
||||
assertThat(streamsBuilderFactoryBean.isRunning()).isFalse();
|
||||
streamsBuilderFactoryBean.start();
|
||||
assertThat(streamsBuilderFactoryBean.isRunning()).isTrue();
|
||||
}
|
||||
}
|
||||
|
||||
// The following test verifies the fixes made for this issue:
|
||||
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
|
||||
@Test
|
||||
public void testOutboundNullValueIsHandledGracefully()
|
||||
throws Exception {
|
||||
SpringApplication app = new SpringApplication(OutboundNullApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
try (ConfigurableApplicationContext context = app.run("--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.process-in-0.destination=words-6",
|
||||
"--spring.cloud.stream.bindings.process-out-0.destination=counts-6",
|
||||
"--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=false",
|
||||
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.binder.brokers="
|
||||
+ embeddedKafka.getBrokersAsString())) {
|
||||
|
||||
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
|
||||
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
|
||||
senderProps);
|
||||
try {
|
||||
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
|
||||
template.setDefaultTopic("words-6");
|
||||
template.sendDefault("foobar");
|
||||
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
|
||||
"counts-6");
|
||||
assertThat(cr.value() == null).isTrue();
|
||||
}
|
||||
finally {
|
||||
pf.destroy();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private void receiveAndValidate(String in, String out) {
|
||||
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
|
||||
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
|
||||
@@ -341,4 +429,20 @@ public class KafkaStreamsBinderWordCountFunctionTests {
|
||||
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
|
||||
}
|
||||
}
|
||||
|
||||
@EnableAutoConfiguration
|
||||
static class OutboundNullApplication {
|
||||
|
||||
@Bean
|
||||
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
|
||||
return input -> input
|
||||
.flatMapValues(
|
||||
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
|
||||
.map((key, value) -> new KeyValue<>(value, value))
|
||||
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
|
||||
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foobar-WordCounts"))
|
||||
.toStream()
|
||||
.map((key, value) -> new KeyValue<>(null, null));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -41,7 +41,13 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -70,7 +76,7 @@ public class StreamToGlobalKTableFunctionTests {
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.function.bindings.process-in-0=order",
@@ -89,7 +95,44 @@ public class StreamToGlobalKTableFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"StreamToGlobalKTableJoinFunctionTests-abc",

"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-2.consumer.topic.properties.cleanup.policy=compact",

"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {

// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687

BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);

Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);

KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("process-in-0");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");

assertThat(cleanupPolicy).isEqualTo("compact");

Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);

KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");

assertThat(cleanupPolicyX).isEqualTo("compact");

KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-2");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");

assertThat(cleanupPolicyY).isEqualTo("compact");

Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
@@ -16,10 +16,12 @@

package org.springframework.cloud.stream.binder.kafka.streams.function;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
@@ -38,17 +40,28 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.StreamJoined;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -173,9 +186,8 @@ public class StreamToTableJoinFunctionTests {
}
}


private void runTest(SpringApplication app, Consumer<String, Long> consumer) {
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
@@ -187,6 +199,8 @@ public class StreamToTableJoinFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {

// Input 1: Region per user (multiple records allowed per user).
@@ -256,6 +270,30 @@ public class StreamToTableJoinFunctionTests {

assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);

// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);

Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);

KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");

assertThat(cleanupPolicyX).isEqualTo("compact");

Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);

KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("process-out-0");

String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");

assertThat(cleanupPolicyOutput).isEqualTo("compact");
}
finally {
consumer.close();
@@ -398,6 +436,34 @@ public class StreamToTableJoinFunctionTests {
}
}

@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
// All we are verifying is that this application didn't throw any errors.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}

@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
// All we are verifying is that this application didn't throw any errors.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}


/**
* Tuple for a region and its associated number of clicks.
*/
@@ -439,9 +505,14 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
.toStream()));
}

@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
}

@EnableAutoConfiguration
@@ -456,9 +527,14 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
.toStream());
}

@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
}

@EnableAutoConfiguration
@@ -475,4 +551,29 @@ public class StreamToTableJoinFunctionTests {
}
}

@EnableAutoConfiguration
public static class TrivialKTableApp {

public java.util.function.Consumer<KTable<String, String>> process() {
return inputTable -> inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}

@EnableAutoConfiguration
public static class JoinProcessor {

public BiConsumer<KStream<String, String>, KStream<String, String>> testProcessor() {
return (input1Stream, input2Stream) -> input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(Duration.ofMillis(5)),
StreamJoined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}

}

@@ -25,6 +25,7 @@ import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.apache.kafka.streams.kstream.KStream;
import org.assertj.core.util.Lists;
import org.junit.Assert;
@@ -45,6 +46,9 @@ import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderHealthIndicator;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.KafkaStreamsCustomizer;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -178,7 +182,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
embeddedKafka.consumeFromEmbeddedTopics(consumer, topics);
KafkaTestUtils.getRecords(consumer, 1000);

TimeUnit.SECONDS.sleep(2);
TimeUnit.SECONDS.sleep(5);
checkHealth(context, expected);
}
finally {
@@ -281,6 +285,19 @@ public class KafkaStreamsBinderHealthIndicatorTests {
});
}

@Bean
public StreamsBuilderFactoryBeanConfigurer customizer() {
return factoryBean -> {
factoryBean.setKafkaStreamsCustomizer(new KafkaStreamsCustomizer() {
@Override
public void customize(KafkaStreams kafkaStreams) {
kafkaStreams.setUncaughtExceptionHandler(exception ->
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
}
});
};
}

}

public interface KafkaStreamsProcessorX {

@@ -42,6 +42,8 @@ import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -163,10 +165,15 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.count(Materialized.as("WordCounts")).toStream()
.count(Materialized.as("WordCounts-tKWCWSIAP0")).toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key, value)));
}

@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}

}

static class WordCount {

@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,6 +21,7 @@ import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.Properties;
import java.util.function.Function;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -42,10 +43,6 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.test.util.TestUtils;
@@ -57,7 +54,6 @@ import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;

import static org.assertj.core.api.Assertions.assertThat;

@@ -66,11 +62,11 @@ import static org.assertj.core.api.Assertions.assertThat;
* @author Soby Chacko
* @author Gary Russell
*/
public class KafkaStreamsBinderWordCountIntegrationTests {
public class KafkaStreamsBinderTombstoneTests {

@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1");
"counts-1");

private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -85,7 +81,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts-1");
}

@AfterClass
@@ -93,31 +89,6 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
consumer.close();
}

@Test
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer()
throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);

try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
}
}

@Test
public void testSendToTombstone()
throws Exception {
@@ -127,24 +98,22 @@ public class KafkaStreamsBinderWordCountIntegrationTests {

try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words-1",
"--spring.cloud.stream.bindings.output.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.bindings.input.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.bindings.process-in-0.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-WordCountProcessorApplication-process", StreamsBuilderFactoryBean.class);
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
assertThat(kafkaStreams).isNotNull();
// Ensure that concurrency settings are mapped to number of stream task
@@ -200,26 +169,21 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
}
}

@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class WordCountProcessorApplication {

@StreamListener
@SendTo("output")
public KStream<?, WordCount> process(
@Input("input") KStream<Object, String> input) {
@Bean
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {

return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))));
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}

@Bean
@@ -108,7 +108,7 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean,
"cleanupConfig", CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isFalse();
assertThat(cleanup.cleanupOnStop()).isTrue();
assertThat(cleanup.cleanupOnStop()).isFalse();
}
finally {
context.close();

@@ -1,146 +0,0 @@
|
||||
/*
|
||||
* Copyright 2019-2019 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* https://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.integration;
|
||||
|
||||
import java.time.Duration;
|
||||
import java.util.Arrays;
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.kafka.clients.consumer.Consumer;
|
||||
import org.apache.kafka.clients.consumer.ConsumerConfig;
|
||||
import org.apache.kafka.clients.consumer.ConsumerRecord;
|
||||
import org.apache.kafka.common.serialization.Serdes;
|
||||
import org.apache.kafka.streams.KeyValue;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.apache.kafka.streams.kstream.Materialized;
|
||||
import org.apache.kafka.streams.kstream.Serialized;
|
||||
import org.apache.kafka.streams.kstream.TimeWindows;
|
||||
import org.junit.AfterClass;
|
||||
import org.junit.BeforeClass;
|
||||
import org.junit.ClassRule;
|
||||
import org.junit.Test;
|
||||
|
||||
import org.springframework.boot.SpringApplication;
|
||||
import org.springframework.boot.WebApplicationType;
|
||||
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
|
||||
import org.springframework.cloud.stream.annotation.EnableBinding;
|
||||
import org.springframework.cloud.stream.annotation.Input;
|
||||
import org.springframework.cloud.stream.annotation.StreamListener;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
|
||||
import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
|
||||
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
|
||||
import org.springframework.kafka.core.KafkaTemplate;
|
||||
import org.springframework.kafka.test.EmbeddedKafkaBroker;
|
||||
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
|
||||
import org.springframework.kafka.test.utils.KafkaTestUtils;
|
||||
import org.springframework.messaging.handler.annotation.SendTo;
|
||||
|
||||
import static org.assertj.core.api.Assertions.assertThat;
|
||||
|
||||
/**
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class OutboundValueNullSkippedConversionTest {
|
||||
|
||||
@ClassRule
|
||||
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
|
||||
"counts");
|
||||
|
||||
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
|
||||
.getEmbeddedKafka();
|
||||
|
||||
private static Consumer<String, String> consumer;
|
||||
|
||||
@BeforeClass
|
||||
public static void setUp() {
|
||||
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
|
||||
embeddedKafka);
|
||||
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
|
||||
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
|
||||
consumerProps);
|
||||
consumer = cf.createConsumer();
|
||||
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts");
|
||||
}
|
||||
|
||||
@AfterClass
|
||||
public static void tearDown() {
|
||||
consumer.close();
|
||||
}
|
||||
|
||||
// The following test verifies the fixes made for this issue:
|
||||
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
|
||||
@Test
|
||||
public void testOutboundNullValueIsHandledGracefully()
|
||||
throws Exception {
|
||||
SpringApplication app = new SpringApplication(
|
||||
OutboundNullApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
try (ConfigurableApplicationContext context = app.run("--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.input.destination=words",
|
||||
"--spring.cloud.stream.bindings.output.destination=counts",
|
||||
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
|
||||
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
|
||||
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
|
||||
"--spring.cloud.stream.kafka.binder.brokers="
|
||||
+ embeddedKafka.getBrokersAsString())) {
|
||||
|
||||
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
|
||||
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
|
||||
senderProps);
|
||||
try {
|
||||
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
|
||||
template.setDefaultTopic("words");
|
||||
template.sendDefault("foobar");
|
||||
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
|
||||
"counts");
|
||||
assertThat(cr.value() == null).isTrue();
|
||||
}
|
||||
finally {
|
||||
pf.destroy();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@EnableBinding(KafkaStreamsProcessor.class)
|
||||
@EnableAutoConfiguration
|
||||
static class OutboundNullApplication {
|
||||
|
||||
@StreamListener
|
||||
@SendTo("output")
|
||||
public KStream<?, KafkaStreamsBinderWordCountIntegrationTests.WordCount> process(
|
||||
@Input("input") KStream<Object, String> input) {
|
||||
|
||||
return input
|
||||
.flatMapValues(
|
||||
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
|
||||
.map((key, value) -> new KeyValue<>(value, value))
|
||||
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
|
||||
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
|
||||
.toStream()
|
||||
.map((key, value) -> new KeyValue<>(null, null));
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,383 +0,0 @@
|
||||
/*
|
||||
* Copyright 2018-2019 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* https://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.integration;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Comparator;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.kafka.clients.consumer.Consumer;
|
||||
import org.apache.kafka.clients.consumer.ConsumerConfig;
|
||||
import org.apache.kafka.clients.consumer.ConsumerRecord;
|
||||
import org.apache.kafka.clients.consumer.ConsumerRecords;
|
||||
import org.apache.kafka.clients.producer.ProducerConfig;
|
||||
import org.apache.kafka.common.serialization.LongDeserializer;
|
||||
import org.apache.kafka.common.serialization.LongSerializer;
|
||||
import org.apache.kafka.streams.KeyValue;
|
||||
import org.apache.kafka.streams.kstream.GlobalKTable;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.junit.ClassRule;
|
||||
import org.junit.Test;
|
||||
|
||||
import org.springframework.boot.SpringApplication;
|
||||
import org.springframework.boot.WebApplicationType;
|
||||
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
|
||||
import org.springframework.cloud.stream.annotation.EnableBinding;
|
||||
import org.springframework.cloud.stream.annotation.Input;
|
||||
import org.springframework.cloud.stream.annotation.StreamListener;
|
||||
import org.springframework.cloud.stream.binder.Binder;
|
||||
import org.springframework.cloud.stream.binder.BinderFactory;
|
||||
import org.springframework.cloud.stream.binder.ConsumerProperties;
|
||||
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
|
||||
import org.springframework.cloud.stream.binder.ProducerProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
|
||||
import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
|
||||
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
|
||||
import org.springframework.kafka.core.KafkaTemplate;
|
||||
import org.springframework.kafka.support.serializer.JsonDeserializer;
|
||||
import org.springframework.kafka.support.serializer.JsonSerializer;
|
||||
import org.springframework.kafka.test.EmbeddedKafkaBroker;
|
||||
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
|
||||
import org.springframework.kafka.test.utils.KafkaTestUtils;
|
||||
import org.springframework.messaging.handler.annotation.SendTo;
|
||||
|
||||
import static org.assertj.core.api.Assertions.assertThat;
|
||||
|
||||
/**
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class StreamToGlobalKTableJoinIntegrationTests {
|
||||
|
||||
@ClassRule
|
||||
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
|
||||
"enriched-order");
|
||||
|
||||
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
|
||||
.getEmbeddedKafka();
|
||||
|
||||
private static Consumer<Long, EnrichedOrder> consumer;
|
||||
|
||||
@Test
|
||||
public void testStreamToGlobalKTable() throws Exception {
|
||||
SpringApplication app = new SpringApplication(
|
||||
StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
ConfigurableApplicationContext context = app.run("--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.input.destination=orders",
|
||||
"--spring.cloud.stream.bindings.input-x.destination=customers",
|
||||
"--spring.cloud.stream.bindings.input-y.destination=products",
|
||||
"--spring.cloud.stream.bindings.output.destination=enriched-order",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
|
||||
+ "=StreamToGlobalKTableJoinIntegrationTests-abc",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.topic.properties.cleanup.policy=compact",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.topic.properties.cleanup.policy=compact",
|
||||
"--spring.cloud.stream.kafka.streams.binder.brokers="
|
||||
+ embeddedKafka.getBrokersAsString());
|
||||
try {
|
||||
// Testing certain ancillary configuration of GlobalKTable around topics creation.
|
||||
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
|
||||
|
||||
BinderFactory binderFactory = context.getBeanFactory()
|
||||
.getBean(BinderFactory.class);
|
||||
|
||||
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
|
||||
.getBinder("kstream", KStream.class);
|
||||
|
||||
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
|
||||
.getExtendedConsumerProperties("input");
|
||||
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
|
||||
|
||||
assertThat(cleanupPolicy).isEqualTo("compact");
|
||||
|
||||
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
|
||||
.getBinder("globalktable", GlobalKTable.class);
|
||||
|
||||
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
|
||||
.getExtendedConsumerProperties("input-x");
|
||||
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
|
||||
|
||||
assertThat(cleanupPolicyX).isEqualTo("compact");
|
||||
|
||||
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
|
||||
.getExtendedConsumerProperties("input-y");
|
||||
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
|
||||
|
||||
assertThat(cleanupPolicyY).isEqualTo("compact");
|
||||
|
||||
Map<String, Object> senderPropsCustomer = KafkaTestUtils
|
||||
.producerProps(embeddedKafka);
|
||||
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
|
||||
LongSerializer.class);
|
||||
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
|
||||
JsonSerializer.class);
|
||||
|
||||
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(
|
||||
senderPropsCustomer);
|
||||
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer,
|
||||
true);
|
||||
template.setDefaultTopic("customers");
|
||||
for (long i = 0; i < 5; i++) {
|
||||
final Customer customer = new Customer();
|
||||
customer.setName("customer-" + i);
|
||||
template.sendDefault(i, customer);
|
||||
}
|
||||
|
||||
Map<String, Object> senderPropsProduct = KafkaTestUtils
|
||||
.producerProps(embeddedKafka);
|
||||
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
|
||||
LongSerializer.class);
|
||||
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
|
||||
JsonSerializer.class);
|
||||
|
||||
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(
|
||||
senderPropsProduct);
|
||||
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct,
|
||||
true);
|
||||
productTemplate.setDefaultTopic("products");
|
||||
|
||||
for (long i = 0; i < 5; i++) {
|
||||
final Product product = new Product();
|
||||
product.setName("product-" + i);
|
||||
productTemplate.sendDefault(i, product);
|
||||
}
|
||||
|
||||
Map<String, Object> senderPropsOrder = KafkaTestUtils
|
||||
.producerProps(embeddedKafka);
|
||||
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
|
||||
LongSerializer.class);
|
||||
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
|
||||
JsonSerializer.class);
|
||||
|
||||
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(
|
||||
senderPropsOrder);
|
||||
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
|
||||
orderTemplate.setDefaultTopic("orders");
|
||||
|
||||
for (long i = 0; i < 5; i++) {
|
||||
final Order order = new Order();
|
||||
order.setCustomerId(i);
|
||||
order.setProductId(i);
|
||||
orderTemplate.sendDefault(i, order);
|
||||
}
|
||||
|
||||
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group",
|
||||
"false", embeddedKafka);
|
||||
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
|
||||
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
|
||||
LongDeserializer.class);
|
||||
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
|
||||
JsonDeserializer.class);
|
||||
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
|
||||
"org.springframework.cloud.stream.binder.kafka.streams.integration."
|
||||
+ "StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
|
||||
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(
|
||||
consumerProps);
|
||||
|
||||
consumer = cf.createConsumer();
|
||||
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "enriched-order");
|
||||
|
||||
int count = 0;
|
||||
long start = System.currentTimeMillis();
|
||||
List<KeyValue<Long, EnrichedOrder>> enrichedOrders = new ArrayList<>();
|
||||
do {
|
||||
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils
|
||||
.getRecords(consumer);
|
||||
count = count + records.count();
|
||||
for (ConsumerRecord<Long, EnrichedOrder> record : records) {
|
||||
enrichedOrders.add(new KeyValue<>(record.key(), record.value()));
|
||||
}
|
||||
}
|
||||
while (count < 5 && (System.currentTimeMillis() - start) < 30000);
|
||||
|
||||
assertThat(count == 5).isTrue();
|
||||
assertThat(enrichedOrders.size() == 5).isTrue();
|
||||
|
||||
enrichedOrders.sort(Comparator.comparing(o -> o.key));
|
||||
|
||||
for (int i = 0; i < 5; i++) {
|
||||
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders
|
||||
.get(i);
|
||||
assertThat(enrichedOrderKeyValue.key == i).isTrue();
|
||||
EnrichedOrder enrichedOrder = enrichedOrderKeyValue.value;
|
||||
assertThat(enrichedOrder.getOrder().customerId == i).isTrue();
|
||||
assertThat(enrichedOrder.getOrder().productId == i).isTrue();
|
||||
assertThat(enrichedOrder.getCustomer().name.equals("customer-" + i))
|
||||
.isTrue();
|
||||
assertThat(enrichedOrder.getProduct().name.equals("product-" + i))
|
||||
.isTrue();
|
||||
}
|
||||
pfCustomer.destroy();
|
||||
pfProduct.destroy();
|
||||
pfOrder.destroy();
|
||||
consumer.close();
|
||||
}
|
||||
finally {
|
||||
context.close();
|
||||
}
|
||||
}
|
||||
|
||||
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
|
||||
|
||||
@Input("input-x")
|
||||
GlobalKTable<?, ?> inputX();
|
||||
|
||||
@Input("input-y")
|
||||
GlobalKTable<?, ?> inputY();
|
||||
|
||||
}
|
||||
|
||||
@EnableBinding(CustomGlobalKTableProcessor.class)
|
||||
@EnableAutoConfiguration
|
||||
public static class OrderEnricherApplication {
|
||||
|
||||
@StreamListener
|
||||
@SendTo("output")
|
||||
public KStream<Long, EnrichedOrder> process(
|
||||
@Input("input") KStream<Long, Order> ordersStream,
|
||||
@Input("input-x") GlobalKTable<Long, Customer> customers,
|
||||
@Input("input-y") GlobalKTable<Long, Product> products) {
|
||||
|
||||
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(
|
||||
customers, (orderId, order) -> order.getCustomerId(),
|
||||
(order, customer) -> new CustomerOrder(customer, order));
|
||||
|
||||
return customerOrdersStream.join(products,
|
||||
(orderId, customerOrder) -> customerOrder.productId(),
|
||||
(customerOrder, product) -> {
|
||||
EnrichedOrder enrichedOrder = new EnrichedOrder();
|
||||
enrichedOrder.setProduct(product);
|
||||
enrichedOrder.setCustomer(customerOrder.customer);
|
||||
enrichedOrder.setOrder(customerOrder.order);
|
||||
return enrichedOrder;
|
||||
});
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
static class Order {
|
||||
|
||||
long customerId;
|
||||
|
||||
long productId;
|
||||
|
||||
public long getCustomerId() {
|
||||
return customerId;
|
||||
}
|
||||
|
||||
public void setCustomerId(long customerId) {
|
||||
this.customerId = customerId;
|
||||
}
|
||||
|
||||
public long getProductId() {
|
||||
return productId;
|
||||
}
|
||||
|
||||
public void setProductId(long productId) {
|
||||
this.productId = productId;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
static class Customer {
|
||||
|
||||
String name;
|
||||
|
||||
public String getName() {
|
||||
return name;
|
||||
}
|
||||
|
||||
public void setName(String name) {
|
||||
this.name = name;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
static class Product {
|
||||
|
||||
String name;
|
||||
|
||||
public String getName() {
|
||||
return name;
|
||||
}
|
||||
|
||||
public void setName(String name) {
|
||||
this.name = name;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
static class EnrichedOrder {
|
||||
|
||||
Product product;
|
||||
|
||||
Customer customer;
|
||||
|
||||
Order order;
|
||||
|
||||
public Product getProduct() {
|
||||
return product;
|
||||
}
|
||||
|
||||
public void setProduct(Product product) {
|
||||
this.product = product;
|
||||
}
|
||||
|
||||
public Customer getCustomer() {
|
||||
return customer;
|
||||
}
|
||||
|
||||
public void setCustomer(Customer customer) {
|
||||
this.customer = customer;
|
||||
}
|
||||
|
||||
public Order getOrder() {
|
||||
return order;
|
||||
}
|
||||
|
||||
public void setOrder(Order order) {
|
||||
this.order = order;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
private static class CustomerOrder {
|
||||
|
||||
private final Customer customer;
|
||||
|
||||
private final Order order;
|
||||
|
||||
CustomerOrder(final Customer customer, final Order order) {
|
||||
this.customer = customer;
|
||||
this.order = order;
|
||||
}
|
||||
|
||||
long productId() {
|
||||
return order.getProductId();
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
@@ -1,497 +0,0 @@
|
||||
/*
|
||||
* Copyright 2018-2019 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* https://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.integration;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Arrays;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import org.apache.kafka.clients.consumer.Consumer;
|
||||
import org.apache.kafka.clients.consumer.ConsumerConfig;
|
||||
import org.apache.kafka.clients.consumer.ConsumerRecord;
|
||||
import org.apache.kafka.clients.consumer.ConsumerRecords;
|
||||
import org.apache.kafka.clients.producer.ProducerConfig;
|
||||
import org.apache.kafka.common.serialization.LongDeserializer;
|
||||
import org.apache.kafka.common.serialization.LongSerializer;
|
||||
import org.apache.kafka.common.serialization.Serdes;
|
||||
import org.apache.kafka.common.serialization.StringDeserializer;
|
||||
import org.apache.kafka.common.serialization.StringSerializer;
|
||||
import org.apache.kafka.streams.KeyValue;
|
||||
import org.apache.kafka.streams.kstream.JoinWindows;
|
||||
import org.apache.kafka.streams.kstream.Joined;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.apache.kafka.streams.kstream.KTable;
|
||||
import org.apache.kafka.streams.kstream.Serialized;
|
||||
import org.junit.ClassRule;
|
||||
import org.junit.Test;
|
||||
|
||||
import org.springframework.boot.SpringApplication;
|
||||
import org.springframework.boot.WebApplicationType;
|
||||
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
|
||||
import org.springframework.cloud.stream.annotation.EnableBinding;
|
||||
import org.springframework.cloud.stream.annotation.Input;
|
||||
import org.springframework.cloud.stream.annotation.StreamListener;
|
||||
import org.springframework.cloud.stream.binder.Binder;
|
||||
import org.springframework.cloud.stream.binder.BinderFactory;
|
||||
import org.springframework.cloud.stream.binder.ConsumerProperties;
|
||||
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
|
||||
import org.springframework.cloud.stream.binder.ProducerProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
|
||||
import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.context.annotation.Bean;
|
||||
import org.springframework.kafka.core.CleanupConfig;
|
||||
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
|
||||
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
|
||||
import org.springframework.kafka.core.KafkaTemplate;
|
||||
import org.springframework.kafka.test.EmbeddedKafkaBroker;
|
||||
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
|
||||
import org.springframework.kafka.test.utils.KafkaTestUtils;
|
||||
import org.springframework.messaging.handler.annotation.SendTo;
|
||||
|
||||
import static org.assertj.core.api.Assertions.assertThat;
|
||||
|
||||
/**
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class StreamToTableJoinIntegrationTests {
|
||||
|
||||
@ClassRule
|
||||
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
|
||||
"output-topic-1", "output-topic-2", "user-clicks-2", "user-regions-2");
|
||||
|
||||
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
|
||||
.getEmbeddedKafka();
|
||||
|
||||
@Test
|
||||
public void testStreamToTable() throws Exception {
|
||||
SpringApplication app = new SpringApplication(
|
||||
CountClicksPerRegionApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
Consumer<String, Long> consumer;
|
||||
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-1",
|
||||
"false", embeddedKafka);
|
||||
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
|
||||
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
|
||||
StringDeserializer.class);
|
||||
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
|
||||
LongDeserializer.class);
|
||||
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
|
||||
consumerProps);
|
||||
consumer = cf.createConsumer();
|
||||
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
|
||||
|
||||
ConfigurableApplicationContext context = app.run("--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.input.destination=user-clicks-1",
|
||||
"--spring.cloud.stream.bindings.input-x.destination=user-regions-1",
|
||||
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
|
||||
+ "=StreamToTableJoinIntegrationTests-abc",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.output.producer.topic.properties.cleanup.policy=compact",
|
||||
"--spring.cloud.stream.kafka.streams.binder.brokers="
|
||||
+ embeddedKafka.getBrokersAsString());
|
||||
try {
|
||||
// Testing certain ancillary configuration of GlobalKTable around topics creation.
|
||||
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
|
||||
BinderFactory binderFactory = context.getBeanFactory()
|
||||
.getBean(BinderFactory.class);
|
||||
|
||||
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
|
||||
.getBinder("ktable", KTable.class);
|
||||
|
||||
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
|
||||
.getExtendedConsumerProperties("input-x");
|
||||
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
|
||||
|
||||
assertThat(cleanupPolicyX).isEqualTo("compact");
|
||||
|
||||
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
|
||||
.getBinder("kstream", KStream.class);
|
||||
|
||||
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
|
||||
.getExtendedProducerProperties("output");
|
||||
|
||||
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
|
||||
|
||||
assertThat(cleanupPolicyOutput).isEqualTo("compact");
|
||||
|
||||
// Input 1: Region per user (multiple records allowed per user).
|
||||
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
|
||||
"alice", "asia"), /* Alice lived in Asia originally... */
|
||||
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
|
||||
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
|
||||
"europe"), /* ...but moved to Europe some time later. */
|
||||
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
|
||||
|
||||
Map<String, Object> senderProps1 = KafkaTestUtils
|
||||
.producerProps(embeddedKafka);
|
||||
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
|
||||
StringSerializer.class);
|
||||
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
|
||||
StringSerializer.class);
|
||||
|
||||
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
|
||||
senderProps1);
|
||||
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
|
||||
template1.setDefaultTopic("user-regions-1");
|
||||
|
||||
for (KeyValue<String, String> keyValue : userRegions) {
|
||||
template1.sendDefault(keyValue.key, keyValue.value);
|
||||
}
|
||||
|
||||
// Input 2: Clicks per user (multiple records allowed per user).
|
||||
List<KeyValue<String, Long>> userClicks = Arrays.asList(
|
||||
new KeyValue<>("alice", 13L), new KeyValue<>("bob", 4L),
|
||||
new KeyValue<>("chao", 25L), new KeyValue<>("bob", 19L),
|
||||
new KeyValue<>("dave", 56L), new KeyValue<>("eve", 78L),
|
||||
new KeyValue<>("alice", 40L), new KeyValue<>("fang", 99L));
|
||||
|
||||
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
|
||||
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
|
||||
StringSerializer.class);
|
||||
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
|
||||
LongSerializer.class);
|
||||
|
||||
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
|
||||
senderProps);
|
||||
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
|
||||
template.setDefaultTopic("user-clicks-1");
|
||||
|
||||
for (KeyValue<String, Long> keyValue : userClicks) {
|
||||
template.sendDefault(keyValue.key, keyValue.value);
|
||||
}
|
||||
|
||||
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
|
||||
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 109L),
|
||||
new KeyValue<>("asia", 124L));
|
||||
|
||||
// Verify that we receive the expected data
|
||||
int count = 0;
|
||||
long start = System.currentTimeMillis();
|
||||
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
|
||||
do {
|
||||
ConsumerRecords<String, Long> records = KafkaTestUtils
|
||||
.getRecords(consumer);
|
||||
count = count + records.count();
|
||||
for (ConsumerRecord<String, Long> record : records) {
|
||||
actualClicksPerRegion
|
||||
.add(new KeyValue<>(record.key(), record.value()));
|
||||
}
|
||||
}
|
||||
while (count < expectedClicksPerRegion.size()
|
||||
&& (System.currentTimeMillis() - start) < 30000);
|
||||
|
||||
assertThat(count == expectedClicksPerRegion.size()).isTrue();
|
||||
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
|
||||
}
|
||||
finally {
|
||||
consumer.close();
|
||||
}
|
||||
}
|
||||
|
||||
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest()
throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);

Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-2");

// Produce data first to the input topic to test the startOffset setting on the
// binding (which is set to earliest below).
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L));

Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);

DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-2");

for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
// Thread.sleep(10000L);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=helloxyz-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Thread.sleep(1000L);

// Input 2: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));

Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);

DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-2");

for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}

// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks1 = Arrays.asList(
new KeyValue<>("bob", 4L), new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L), new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L), new KeyValue<>("fang", 99L));

for (KeyValue<String, Long> keyValue : userClicks1) {
template.sendDefault(keyValue.key, keyValue.value);
}

List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 56L),
new KeyValue<>("asia", 124L),
// 1000 alice entries which were there in the topic before the
// consumer started.
// Since we set the startOffset to earliest for the topic, it will
// read them,
// but the join fails to associate with a valid region, thus UNKNOWN.
new KeyValue<>("UNKNOWN", 1000L));

// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
System.out.println("foobar: " + record.key() + "::" + record.value());
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);

// TODO: Matched count is 3 and not 4 (expectedClicksPerRegion.size()) when running with full suite. Investigate why.
// TODO: This behavior is only observed after the Spring Kafka upgrade to 2.5.0 and kafka client to 2.5.
// TODO: Note that the test passes fine as a single test.
assertThat(count).matches(
matchedCount -> matchedCount == expectedClicksPerRegion.size() - 1 || matchedCount == expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).containsAnyElementsOf(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(
TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}

@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}

@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
public static class CountClicksPerRegionApplication {

@StreamListener
@SendTo("output")
public KStream<String, Long> process(
@Input("input") KStream<String, Long> userClicksStream,
@Input("input-x") KTable<String, String> userRegionsTable) {

return userClicksStream
.leftJoin(userRegionsTable,
(clicks, region) -> new RegionWithClicks(
region == null ? "UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(
regionWithClicks.getRegion(), regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream();
}

//This forces the state stores to be cleaned up before running the test.
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(true, false);
}

}

@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class TrivialKTableApp {

@StreamListener("input-y")
public void process(KTable<String, String> inputTable) {
inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}

interface KafkaStreamsProcessorX extends KafkaStreamsProcessor {

@Input("input-x")
KTable<?, ?> inputX();

}

interface KafkaStreamsProcessorY {

@Input("input-y")
KTable<?, ?> inputY();

}

/**
* Tuple for a region and its associated number of clicks.
*/
private static final class RegionWithClicks {

private final String region;

private final long clicks;

RegionWithClicks(String region, long clicks) {
if (region == null || region.isEmpty()) {
throw new IllegalArgumentException("region must be set");
}
if (clicks < 0) {
throw new IllegalArgumentException("clicks must not be negative");
}
this.region = region;
this.clicks = clicks;
}

public String getRegion() {
return region;
}

public long getClicks() {
return clicks;
}

}

interface BindingsForTwoKStreamJoinTest {

String INPUT_1 = "input_1";
String INPUT_2 = "input_2";

@Input(INPUT_1)
KStream<String, String> input_1();

@Input(INPUT_2)
KStream<String, String> input_2();
}

@EnableBinding(BindingsForTwoKStreamJoinTest.class)
@EnableAutoConfiguration
public static class JoinProcessor {

@StreamListener
public void testProcessor(
@Input(BindingsForTwoKStreamJoinTest.INPUT_1) KStream<String, String> input1Stream,
@Input(BindingsForTwoKStreamJoinTest.INPUT_2) KStream<String, String> input2Stream) {
input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
Joined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}

}
@@ -1,236 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.springframework.cloud.stream.binder.kafka.streams.integration;

import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;

import static org.assertj.core.api.Assertions.assertThat;

/**
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
*/
public class WordCountMultipleBranchesIntegrationTests {

@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "foo", "bar");

private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();

private static Consumer<String, String> consumer;

@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "foo", "bar");
}

@AfterClass
public static void tearDown() {
consumer.close();
}

@Test
public void testKstreamWordCountWithStringInputAndPojoOuput() throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);

ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
finally {
context.close();
}
}

private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value().contains("\"word\":\"english\",\"count\":1")).isTrue();

template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value().contains("\"word\":\"french\",\"count\":2")).isTrue();

template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value().contains("\"word\":\"spanish\",\"count\":3")).isTrue();
}

@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
public static class WordCountProcessorApplication {

@StreamListener("input")
@SendTo({ "output1", "output2", "output3" })
@SuppressWarnings("unchecked")
public KStream<?, WordCount>[] process(KStream<Object, String> input) {

Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");

return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value).windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-multi")).toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}

}

interface KStreamProcessorX {

@Input("input")
KStream<?, ?> input();

@Output("output1")
KStream<?, ?> output1();

@Output("output2")
KStream<?, ?> output2();

@Output("output3")
KStream<?, ?> output3();

}

static class WordCount {

private String word;

private long count;

private Date start;

private Date end;

WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}

public String getWord() {
return word;
}

public void setWord(String word) {
this.word = word;
}

public long getCount() {
return count;
}

public void setCount(long count) {
this.count = count;
}

public Date getStart() {
return start;
}

public void setStart(Date start) {
this.start = start;
}

public Date getEnd() {
return end;
}

public void setEnd(Date end) {
this.end = end;
}

}

}
@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.7-SNAPSHOT</version>
<version>3.2.0-M2</version>
</parent>

<dependencies>
@@ -75,6 +75,11 @@
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
</dependency>
<dependency>
<groupId>org.awaitility</groupId>
<artifactId>awaitility</artifactId>
<scope>test</scope>
</dependency>
</dependencies>

</project>
@@ -75,7 +75,6 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerPro
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.cloud.stream.binding.DefaultPartitioningInterceptor;
import org.springframework.cloud.stream.binding.MessageConverterConfigurer.PartitioningInterceptor;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.config.MessageSourceCustomizer;
@@ -103,11 +102,13 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;
import org.springframework.kafka.listener.ConsumerProperties;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
@@ -419,10 +420,6 @@ public class KafkaMessageChannelBinder extends
((PartitioningInterceptor) interceptor)
.setPartitionCount(partitions.size());
}
else if (interceptor instanceof DefaultPartitioningInterceptor) {
((DefaultPartitioningInterceptor) interceptor)
.setPartitionCount(partitions.size());
}
});
}

@@ -735,14 +732,17 @@ public class KafkaMessageChannelBinder extends
kafkaMessageDrivenChannelAdapter.setApplicationContext(applicationContext);
ErrorInfrastructure errorInfrastructure = registerErrorInfrastructure(destination,
consumerGroup, extendedConsumerProperties);

if (!extendedConsumerProperties.isBatchMode()
&& extendedConsumerProperties.getMaxAttempts() > 1
&& transMan == null) {

kafkaMessageDrivenChannelAdapter
.setRetryTemplate(buildRetryTemplate(extendedConsumerProperties));
kafkaMessageDrivenChannelAdapter
.setRecoveryCallback(errorInfrastructure.getRecoverer());
if (!extendedConsumerProperties.getExtension().isEnableDlq()) {
messageListenerContainer.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(0L, 0L)));
}
}
else if (!extendedConsumerProperties.isBatchMode() && transMan != null) {
messageListenerContainer.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(
@@ -775,6 +775,12 @@ public class KafkaMessageChannelBinder extends
else {
kafkaMessageDrivenChannelAdapter.setErrorChannel(errorInfrastructure.getErrorChannel());
}
final String commonErrorHandlerBeanName = extendedConsumerProperties.getExtension().getCommonErrorHandlerBeanName();
if (StringUtils.hasText(commonErrorHandlerBeanName)) {
final CommonErrorHandler commonErrorHandler = getApplicationContext().getBean(commonErrorHandlerBeanName,
CommonErrorHandler.class);
messageListenerContainer.setCommonErrorHandler(commonErrorHandler);
}
this.getContainerCustomizer().configure(messageListenerContainer, destination.getName(), group);
this.ackModeInfo.put(destination, messageListenerContainer.getContainerProperties().getAckMode());
return kafkaMessageDrivenChannelAdapter;
@@ -22,7 +22,6 @@ import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
@@ -81,22 +80,12 @@ import org.springframework.messaging.converter.MessageConverter;
* @author Artem Bilan
* @author Aldo Sinanaj
*/
@Configuration
@Configuration(proxyBeanMethods = false)
@ConditionalOnMissingBean(Binder.class)
@Import({ KafkaAutoConfiguration.class, KafkaBinderHealthIndicatorConfiguration.class })
@EnableConfigurationProperties({ KafkaExtendedBindingProperties.class })
public class KafkaBinderConfiguration {

@Autowired
private KafkaExtendedBindingProperties kafkaExtendedBindingProperties;

@SuppressWarnings("rawtypes")
@Autowired
private ProducerListener producerListener;

@Autowired
private KafkaProperties kafkaProperties;

@Bean
KafkaBinderConfigurationProperties configurationProperties(
KafkaProperties kafkaProperties) {
@@ -106,12 +95,12 @@ public class KafkaBinderConfiguration {
@Bean
KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties configurationProperties,
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer, KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(configurationProperties,
this.kafkaProperties, adminClientConfigCustomizer.getIfUnique());
kafkaProperties, adminClientConfigCustomizer.getIfUnique());
}

@SuppressWarnings("unchecked")
@SuppressWarnings({"rawtypes", "unchecked"})
@Bean
KafkaMessageChannelBinder kafkaMessageChannelBinder(
KafkaBinderConfigurationProperties configurationProperties,
@@ -125,16 +114,17 @@ public class KafkaBinderConfiguration {
ObjectProvider<DlqDestinationResolver> dlqDestinationResolver,
ObjectProvider<ClientFactoryCustomizer> clientFactoryCustomizer,
ObjectProvider<ConsumerConfigCustomizer> consumerConfigCustomizer,
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer,
ProducerListener producerListener, KafkaExtendedBindingProperties kafkaExtendedBindingProperties
) {

KafkaMessageChannelBinder kafkaMessageChannelBinder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider,
listenerContainerCustomizer, sourceCustomizer, rebalanceListener.getIfUnique(),
dlqPartitionFunction.getIfUnique(), dlqDestinationResolver.getIfUnique());
kafkaMessageChannelBinder.setProducerListener(this.producerListener);
kafkaMessageChannelBinder.setProducerListener(producerListener);
kafkaMessageChannelBinder
.setExtendedBindingProperties(this.kafkaExtendedBindingProperties);
.setExtendedBindingProperties(kafkaExtendedBindingProperties);
kafkaMessageChannelBinder.setProducerMessageHandlerCustomizer(messageHandlerCustomizer);
kafkaMessageChannelBinder.setConsumerEndpointCustomizer(consumerCustomizer);
kafkaMessageChannelBinder.setClientFactoryCustomizer(clientFactoryCustomizer.getIfUnique());
@@ -66,12 +66,11 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.assertj.core.api.Assertions;
import org.assertj.core.api.Condition;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
import org.awaitility.Awaitility;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;

import org.springframework.beans.DirectFieldAccessor;
import org.springframework.cloud.stream.binder.Binder;
@@ -119,8 +118,11 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
@@ -128,8 +130,10 @@ import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionOffset;
import org.springframework.kafka.support.converter.BatchMessagingMessageConverter;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
@@ -144,6 +148,7 @@ import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeTypeUtils;
import org.springframework.util.backoff.FixedBackOff;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;

@@ -158,29 +163,28 @@ import static org.mockito.Mockito.mock;
* @author Henryk Konsek
* @author Gary Russell
*/
@EmbeddedKafka(count = 1, controlledShutdown = true, topics = "error.pollableDlq.group-pcWithDlq", brokerProperties = {"transaction.state.log.replication.factor=1",
"transaction.state.log.min.isr=1"})
public class KafkaBinderTests extends
// @checkstyle:off
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {

// @checkstyle:on
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {

private static final int DEFAULT_OPERATION_TIMEOUT = 30;

@Rule
public ExpectedException expectedProvisioningException = ExpectedException.none();

private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class
.getSimpleName();

@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10,
"error.pollableDlq.group-pcWithDlq")
.brokerProperty("transaction.state.log.replication.factor", "1")
.brokerProperty("transaction.state.log.min.isr", "1");

private KafkaTestBinder binder;

private AdminClient adminClient;

private static EmbeddedKafkaBroker embeddedKafka;

@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();
}

@Override
protected ExtendedConsumerProperties<KafkaConsumerProperties> createConsumerProperties() {
final ExtendedConsumerProperties<KafkaConsumerProperties> kafkaConsumerProperties = new ExtendedConsumerProperties<>(
@@ -191,8 +195,12 @@ public class KafkaBinderTests extends
return kafkaConsumerProperties;
}

private ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties() {
return this.createProducerProperties(null);
}

@Override
protected ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties() {
protected ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties(TestInfo testInto) {
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
producerProperties.getExtension().setSync(true);
@@ -246,8 +254,8 @@ public class KafkaBinderTests extends
private KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties(
new TestKafkaProperties());
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();

List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -274,15 +282,14 @@ public class KafkaBinderTests extends
return KafkaHeaders.OFFSET;
}

@Before
@BeforeEach
public void init() {
String multiplier = System.getenv("KAFKA_TIMEOUT_MULTIPLIER");
if (multiplier != null) {
timeoutMultiplier = Double.parseDouble(multiplier);
}

BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -552,7 +559,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceiveNoOriginalContentType() throws Exception {
public void testSendAndReceiveNoOriginalContentType(TestInfo testInfo) throws Exception {
Binder binder = getBinder();

BindingProperties producerBindingProperties = createProducerBindingProperties(
@@ -602,7 +609,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceive() throws Exception {
public void testSendAndReceive(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
BindingProperties outputBindingProperties = createProducerBindingProperties(
createProducerProperties());
@@ -728,7 +735,6 @@ public class KafkaBinderTests extends

@Test
@SuppressWarnings("unchecked")
@Ignore
public void testDlqWithNativeSerializationEnabledOnDlqProducer() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
@@ -787,12 +793,10 @@ public class KafkaBinderTests extends
.withPayload("foo").build();

moduleOutputChannel.send(message);

Message<?> receivedMessage = receive(dlqChannel, 5);
assertThat(receivedMessage).isNotNull();
assertThat(receivedMessage.getPayload()).isEqualTo("foo".getBytes());
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
Awaitility.await().until(() -> handler.getInvocationCount() == consumerProperties.getMaxAttempts());
assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC))
.isEqualTo("foo.bar".getBytes(StandardCharsets.UTF_8));
@@ -1050,7 +1054,7 @@ public class KafkaBinderTests extends

AbstractMessageListenerContainer container = TestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer", AbstractMessageListenerContainer.class);
assertThat(container.getContainerProperties().getTopicPartitionsToAssign().length)
assertThat(container.getContainerProperties().getTopicPartitions().length)
.isEqualTo(4); // 2 topics 2 partitions each
if (transactional) {
assertThat(TestUtils.getPropertyValue(container.getAfterRollbackProcessor(), "kafkaTemplate")).isNotNull();
@@ -1061,7 +1065,7 @@ public class KafkaBinderTests extends

String dlqTopic = useDlqDestResolver ? "foo.dlq" : "error.dlqTest." + uniqueBindingId + ".0.testGroup";
try (AdminClient admin = AdminClient.create(Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
if (useDlqDestResolver) {
List<NewTopic> nonProvisionedDlqTopics = new ArrayList<>();
NewTopic nTopic = new NewTopic(dlqTopic, 3, (short) 1);
@@ -1298,6 +1302,113 @@ public class KafkaBinderTests extends
producerBinding.unbind();
}

@Test
@SuppressWarnings("unchecked")
public void testRetriesWithoutDlq() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);

DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);

ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);

DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));

FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);

String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);

Thread.sleep(3000);

// Since we don't have a DLQ, assert that we are invoking the handler exactly the same number of times
// as set in consumerProperties.maxAttempt and not the default set by Spring Kafka (10 times).
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}

@Test
@SuppressWarnings("unchecked")
public void testCommonErrorHandlerBeanNameOnConsumerBinding() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);

DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);

CountDownLatch latch = new CountDownLatch(1);
CommonErrorHandler commonErrorHandler = new DefaultErrorHandler(new FixedBackOff(0L, 0L)) {
@Override
public void handleRemaining(Exception thrownException, List<ConsumerRecord<?, ?>> records,
Consumer<?, ?> consumer, MessageListenerContainer container) {
super.handleRemaining(thrownException, records, consumer, container);
latch.countDown();
}
};

ConfigurableApplicationContext context = TestUtils.getPropertyValue(binder,
"binder.applicationContext", ConfigurableApplicationContext.class);
context.getBeanFactory().registerSingleton("fooCommonErrorHandler", commonErrorHandler);

ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);
consumerProperties.getExtension().setCommonErrorHandlerBeanName("fooCommonErrorHandler");

DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));

FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);

String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);

Thread.sleep(3000);

//Assertions for the CommonErrorHandler configured on the consumer binding (commonErrorHandlerBeanName).
assertThat(KafkaTestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer.commonErrorHandler")).isSameAs(commonErrorHandler);
latch.await(10, TimeUnit.SECONDS);

binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}


//See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870 for motivation for this test.
@Test
@SuppressWarnings("unchecked")
@@ -1459,9 +1570,15 @@ public class KafkaBinderTests extends
producerBinding.unbind();
}

@Test(expected = IllegalArgumentException.class)
@Test
public void testValidateKafkaTopicName() {
KafkaTopicUtils.validateTopicName("foo:bar");
try {
KafkaTopicUtils.validateTopicName("foo:bar");
fail("Expecting IllegalArgumentException");
}
catch (Exception e) {
// TODO: handle exception
}
}

@Test
@@ -1599,7 +1716,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceiveMultipleTopics() throws Exception {
public void testSendAndReceiveMultipleTopics(TestInfo testInfo) throws Exception {
Binder binder = getBinder();

DirectChannel moduleOutputChannel1 = createBindableChannel("output1",
@@ -1760,7 +1877,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testTwoRequiredGroups() throws Exception {
public void testTwoRequiredGroups(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();

@@ -1810,7 +1927,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testPartitionedModuleSpEL() throws Exception {
public void testPartitionedModuleSpEL(TestInfo testInfo) throws Exception {
Binder binder = getBinder();

ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
@@ -1933,7 +2050,7 @@ public class KafkaBinderTests extends
}

@Test
@Override
// @Override
@SuppressWarnings({ "unchecked", "rawtypes" })
public void testPartitionedModuleJava() throws Exception {
Binder binder = getBinder();
@@ -2021,7 +2138,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testAnonymousGroup() throws Exception {
public void testAnonymousGroup(TestInfo testInfo) throws Exception {
Binder binder = getBinder();
BindingProperties producerBindingProperties = createProducerBindingProperties(
createProducerProperties());
@@ -2888,14 +3005,13 @@ public class KafkaBinderTests extends
consumerProperties.setInstanceCount(3);
consumerProperties.setInstanceIndex(2);
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
expectedProvisioningException.expect(ProvisioningException.class);
expectedProvisioningException.expectMessage(
"The number of expected partitions was: 3, but 1 has been found instead");
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
Assertions.assertThatThrownBy(() -> {
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
}).isInstanceOf(ProvisioningException.class);
}

@Test
@@ -2927,7 +3043,7 @@ public class KafkaBinderTests extends
binding,
"lifecycle.messageListenerContainer.containerProperties",
ContainerProperties.class);
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitionsToAssign();
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitions();
assertThat(listenedPartitions).hasSize(2);
assertThat(listenedPartitions).contains(
new TopicPartitionOffset(testTopicName, 2),
@@ -3290,7 +3406,7 @@ public class KafkaBinderTests extends

Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(
"testSendAndReceiveWithMixedMode", "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class);
@@ -3335,7 +3451,7 @@ public class KafkaBinderTests extends
"pollable,anotherOne", "group-polledConsumer", inboundBindTarget,
consumerProps);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollable", "testPollable");
@@ -3386,7 +3502,7 @@ public class KafkaBinderTests extends
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableRequeue", "group", inboundBindTarget, properties);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollableRequeue", "testPollable");
@@ -3422,7 +3538,7 @@ public class KafkaBinderTests extends
properties.setBackOffInitialInterval(0);
properties.getExtension().setEnableDlq(true);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka.getEmbeddedKafka());
.producerProps(embeddedKafka);
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableDlq", "group-pcWithDlq", inboundBindTarget, properties);
KafkaTemplate template = new KafkaTemplate(
@@ -3441,11 +3557,11 @@ public class KafkaBinderTests extends
assertThat(e.getCause().getMessage()).isEqualTo("test DLQ");
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("dlq", "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
ConsumerFactory cf = new DefaultKafkaConsumerFactory<>(consumerProps);
Consumer consumer = cf.createConsumer();
embeddedKafka.getEmbeddedKafka().consumeFromAnEmbeddedTopic(consumer,
embeddedKafka.consumeFromAnEmbeddedTopic(consumer,
"error.pollableDlq.group-pcWithDlq");
ConsumerRecord deadLetter = KafkaTestUtils.getSingleRecord(consumer,
"error.pollableDlq.group-pcWithDlq");
@@ -3460,7 +3576,7 @@ public class KafkaBinderTests extends
public void testTopicPatterns() throws Exception {
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("topicPatterns.1", 1, (short) 1))).all()
.get();
@@ -3479,7 +3595,7 @@ public class KafkaBinderTests extends
"topicPatterns\\..*", "testTopicPatterns", moduleInputChannel,
consumerProperties);
DefaultKafkaProducerFactory pf = new DefaultKafkaProducerFactory(
KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka()));
KafkaTestUtils.producerProps(embeddedKafka));
KafkaTemplate template = new KafkaTemplate(pf);
template.send("topicPatterns.1", "foo");
assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
@@ -3489,11 +3605,12 @@ public class KafkaBinderTests extends
}
}

@Test(expected = TopicExistsException.class)
@Test
public void testSameTopicCannotBeProvisionedAgain() throws Throwable {
CountDownLatch latch = new CountDownLatch(1);
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
embeddedKafka.getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("fooUniqueTopic", 1, (short) 1))).all()
.get();
@@ -3501,11 +3618,13 @@ public class KafkaBinderTests extends
admin.createTopics(Collections
.singletonList(new NewTopic("fooUniqueTopic", 1, (short) 1)))
.all().get();
fail("Expecting TopicExistsException");
}
catch (Exception ex) {
assertThat(ex.getCause() instanceof TopicExistsException).isTrue();
throw ex.getCause();
latch.countDown();
}
latch.await(1, TimeUnit.SECONDS);
}
}

@@ -3707,7 +3826,7 @@ public class KafkaBinderTests extends
input.setBeanName(name + ".in");
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
Binding<MessageChannel> consumerBinding = binder.bindConsumer(name + ".0", name, input, consumerProperties);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
KafkaTemplate template = new KafkaTemplate(new DefaultKafkaProducerFactory<>(producerProps));
template.send(MessageBuilder.withPayload("internalHeaderPropagation")
.setHeader(KafkaHeaders.TOPIC, name + ".0")
@@ -3721,7 +3840,7 @@ public class KafkaBinderTests extends
output.send(consumed);

Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(name, "false",
embeddedKafka.getEmbeddedKafka());
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,18 +16,28 @@

package org.springframework.cloud.stream.binder.kafka.bootstrap;

import java.util.Map;
import java.util.function.Function;

import io.micrometer.core.instrument.MeterRegistry;
import org.junit.ClassRule;
import org.junit.Test;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatCode;
@@ -35,20 +45,53 @@ import static org.assertj.core.api.Assertions.assertThatCode;
/**
* @author Soby Chacko
*/
@EmbeddedKafka(count = 1, controlledShutdown = true, partitions = 10, topics = "outputTopic")
public class KafkaBinderMeterRegistryTest {

@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
private static EmbeddedKafkaBroker embeddedKafka;

private static Consumer<String, String> consumer;

@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();

Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "outputTopic");
}

@AfterAll
public static void tearDown() {
consumer.close();
}

@Test
public void testMetricsWithSingleBinder() {
public void testMetricsWithSingleBinder() throws Exception {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.uppercase-in-0.destination=inputTopic",
"--spring.cloud.stream.bindings.uppercase-in-0.group=inputGroup",
"--spring.cloud.stream.bindings.uppercase-out-0.destination=outputTopic",
"--spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
+ embeddedKafka.getBrokersAsString());

Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);

KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");

// Forcing the retrieval of the data on the outbound so that the producer factory has
// a chance to add the micrometer listener properly. Only on the first send, binder's
// internal KafkaTemplate adds the Micrometer listener (using the producer factory).
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");

final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -68,10 +111,22 @@ public class KafkaBinderMeterRegistryTest {
"--spring.cloud.stream.binders.kafka2.type=kafka",
"--spring.cloud.stream.binders.kafka1.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kafka2.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
+ embeddedKafka.getBrokersAsString());

Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);

KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");

// Forcing the retrieval of the data on the outbound so that the producer factory has
// a chance to add the micrometer listener properly. Only on the first send, binder's
// internal KafkaTemplate adds the Micrometer listener (using the producer factory).
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");

final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -87,10 +142,10 @@ public class KafkaBinderMeterRegistryTest {
.tag("topic", "inputTopic").gauge().value()).isNotNull();

// assert consumer metrics
assertThatCode(() -> meterRegistry.get("kafka.consumer.connection.count").meter()).doesNotThrowAnyException();
assertThatCode(() -> meterRegistry.get("kafka.consumer.fetch.manager.fetch.total").meter()).doesNotThrowAnyException();

// assert producer metrics
assertThatCode(() -> meterRegistry.get("kafka.producer.connection.count").meter()).doesNotThrowAnyException();
|
||||
assertThatCode(() -> meterRegistry.get("kafka.producer.io.ratio").meter()).doesNotThrowAnyException();
|
||||
}
|
||||
|
||||
@SpringBootApplication
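For readers following the diff above: the test is migrated from the JUnit 4 `@ClassRule EmbeddedKafkaRule` to the JUnit 5 class-level `@EmbeddedKafka` annotation plus `EmbeddedKafkaCondition.getBroker()`. Below is a minimal, self-contained sketch of that pattern under stated assumptions: the class name, group id, and assertion are illustrative only, while the topic name mirrors the test; all spring-kafka-test APIs used here appear in the diff.

// Illustrative sketch only -- not part of the commits above.
// Shows the @EmbeddedKafka + EmbeddedKafkaCondition pattern the migrated test uses.
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;

import static org.assertj.core.api.Assertions.assertThat;

// With plain JUnit 5, @EmbeddedKafka is handled by EmbeddedKafkaCondition, which starts
// the broker before the tests run; no @ClassRule/EmbeddedKafkaRule is needed.
@EmbeddedKafka(partitions = 1, topics = "outputTopic")
class EmbeddedKafkaJupiterSketchTests {

	private static EmbeddedKafkaBroker broker;

	private static Consumer<String, String> consumer;

	@BeforeAll
	static void setUp() {
		// The broker started by the condition is retrieved statically.
		broker = EmbeddedKafkaCondition.getBroker();

		// consumerProps(group, autoCommit, broker) now takes the EmbeddedKafkaBroker directly;
		// with EmbeddedKafkaRule the extra getEmbeddedKafka() call was required.
		Map<String, Object> props = KafkaTestUtils.consumerProps("sketchGroup", "false", broker);
		props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
		props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
		consumer = new DefaultKafkaConsumerFactory<String, String>(props).createConsumer();
		broker.consumeFromAnEmbeddedTopic(consumer, "outputTopic");
	}

	@AfterAll
	static void tearDown() {
		consumer.close();
	}

	@Test
	void brokerIsAvailable() {
		// getBrokersAsString() is the value the test passes to spring.cloud.stream.kafka.binder.brokers.
		assertThat(broker.getBrokersAsString()).isNotEmpty();
	}
}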