Compare commits

17 Commits

Author SHA1 Message Date
buildmaster
593cf44477 Update SNAPSHOT to 3.1.4 2021-09-21 06:23:25 +00:00
Soby Chacko
265b881d35 Update SI Kafka to 5.4.10 Release 2021-09-20 18:38:25 -04:00
Soby Chacko
ac2ecf1a54 GH-1145: Remove destroying producer factory
Remove the unnecessary call to destroy the producer when checking for partitions.
This way, the producer is cached and reused the first time data is produced.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1145
2021-09-20 18:38:25 -04:00
Gary Russell
0dd86b60c1 Fix Doc Typos for KafkaBindingRebalanceListener 2021-09-20 18:38:25 -04:00
Soby Chacko
2fc335c597 Fix wrong property in docs
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1137
2021-09-20 18:38:25 -04:00
Soby Chacko
bc62f76756 Consumer/Producer prefix in Kafka Streams binder (#1131)
* Consumer/Producer prefix in Kafka Streams binder

Kafka Streams allows applications to separate consumer and producer
properties using consumer/producer prefixes. Add these prefixes automatically
when they are missing from the application.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1065

* Addressing PR review comments
2021-09-20 18:38:25 -04:00
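For illustration, the consumer/producer prefix convention the commit above relies on looks like this at the binder configuration level (the property names and values here are hypothetical, not part of this changeset):
```
# applied only to Kafka Streams consumer clients
spring.cloud.stream.kafka.streams.binder.configuration.consumer.session.timeout.ms: 15000
# applied only to Kafka Streams producer clients
spring.cloud.stream.kafka.streams.binder.configuration.producer.linger.ms: 50
```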
Soby Chacko
e537020f37 Kafka Streams binding lifecycle changes
Start Kafka Streams bindings only when they are not running.
Similarly, stop them only if they are running.

Without these guards in the bindings for KStream, KTable, and GlobalKTable,
NPEs may occur because the backing concurrent collections in KafkaStreamsRegistry
cannot find the proper KafkaStreams object, especially when the StreamsBuilderFactoryBean
is already stopped through the binder-provided manager.
2021-09-20 18:38:25 -04:00
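A minimal sketch of the guards described above, assuming a `StreamsBuilderFactoryBean` reference in scope (illustrative, not the binder's exact code):
```java
// StreamsBuilderFactoryBean implements SmartLifecycle, so it exposes isRunning()/start()/stop()
if (!streamsBuilderFactoryBean.isRunning()) {
    streamsBuilderFactoryBean.start();  // start only when not already running
}
// ...and, symmetrically, when shutting down:
if (streamsBuilderFactoryBean.isRunning()) {
    streamsBuilderFactoryBean.stop();   // stop only when running
}
```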
Soby Chacko
20d823eeec GH-1112: Fix OOM in DlqSender.sendToDlq
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1112
2021-09-20 18:38:25 -04:00
Soby Chacko
0034f085a2 GH-1129: Kafka Binder Metrics Improvements
Avoid making blocking committed() calls in KafkaBinderMetrics in a loop over
each topic partition.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1129
2021-09-20 18:38:25 -04:00
Soby Chacko
ef8d358f1d Document Kafka Streams and Sleuth integration
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1102
2021-09-20 18:38:25 -04:00
Derek Eskens
6e2bc7bc87 Fixing consumer config typo in overview.adoc
ConsumerConfigCustomizer is the correct name of the interface.
2021-08-23 19:31:07 -04:00
Soby Chacko
389b45d6bd Recycle KafkaStreams Objects
When Kafka Streams bindings are restarted (stop/start)
using the actuator bindings endpoints, the underlying KafkaStreams
objects were not recycled; after a restart, the binder still saw the previous
KafkaStreams object. This commit addresses that issue.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1119
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1120
2021-08-12 16:04:38 -04:00
Soby Chacko
e6602f235e Docs updates for Kafka Streams skipAndContinue 2021-08-12 14:44:05 -04:00
Soby Chacko
f6b2021310 Update Spring Kafka/Spring Integration Kafka
Spring Kafka - 2.6.10
Spring Integration Kafka - 5.4.8
2021-07-26 15:06:49 -04:00
Oleg Zhurakousky
6a838acca4 Bump s-c-build to 3.0.4-SNAPSHOT 2021-07-20 10:33:57 +02:00
Soby Chacko
cde5b7b80b GH-1085: Allow KTable binding on the outbound
At the moment, the Kafka Streams binder only allows KStream bindings on the outbound.
There is a delegation mechanism by which the binder can still use KStream for the output binding
while allowing applications to provide a KTable type as the function return type.

Update docs.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1085

Resolves #1105
2021-07-16 09:47:37 +02:00
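An illustrative processor shape that this change enables (the function name, types, and store name below are hypothetical):
```java
@Bean
public Function<KStream<String, String>, KTable<String, Long>> aggregate() {
    // The binder delegates to a KStream output binding internally,
    // so the function itself may return a KTable.
    return input -> input
            .groupByKey()
            .count(Materialized.as("counts-store"));
}
```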
Soby Chacko
2495e6e44e Revert "JUnit 5 migration related to boot 2.6"
This reverts commit 1a45ff797a.
2021-07-15 12:54:49 -04:00
41 changed files with 1582 additions and 1149 deletions

View File

@@ -342,12 +342,6 @@ When using a transactional binder, the offset of a recovered record (e.g. when r
Setting this property to `false` suppresses committing the offset of a recovered record.
+
Default: true.
commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to configure error handlers if the application does not want to use a `ListenerContainerCustomizer` and then check the destination/group combination to set an error handler.
+
Default: none.
[[reset-offsets]]
==== Resetting Offsets
@@ -469,18 +463,18 @@ Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
+
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
+
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
+
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
Default: null.
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
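To make the `recordMetadataChannel` usage above concrete, here is a minimal sketch of a handler on such a channel (the channel name `sendResults` is an assumption):
```java
@ServiceActivator(inputChannel = "sendResults")  // hypothetical channel name
public void handleSendResult(Message<?> sendResultMsg) {
    // RecordMetadata carries the partition and offset of the written record
    RecordMetadata meta = sendResultMsg.getHeaders()
            .get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    System.out.println("partition=" + meta.partition() + ", offset=" + meta.offset());
}
```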

View File

@@ -7,7 +7,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.2.0-M2</version>
<version>3.1.4</version>
</parent>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>

View File

@@ -277,7 +277,7 @@ public Function<KTable<String, String>, KStream<String, String>> bar() {
===== Multiple Output Bindings
Kafka Streams allows writing outbound data into multiple topics. This feature is known as branching in Kafka Streams.
Kafka Streams allows to write outbound data into multiple topics. This feature is known as branching in Kafka Streams.
When using multiple output bindings, you need to provide an array of KStream (`KStream[]`) as the outbound return type.
Here is an example:
@@ -291,30 +291,21 @@ public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> {
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.split()
.branch(isEnglish)
.branch(isFrench)
.branch(isSpanish)
.noDefaultBranch();
return stringKStreamMap.values().toArray(new KStream[0]);
};
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
----
The programming model remains the same; however, the outbound parameterized type is `KStream[]`.
The default output binding names are `process-out-0`, `process-out-1`, and `process-out-2`, respectively, for the function above.
The binder generates three output bindings because it detects the length of the returned `KStream` array as three.
Note that in this example, we provide a `noDefaultBranch()`; if we had used `defaultBranch()` instead, that would have required an extra output binding, essentially returning a `KStream` array of length four.
The default output binding names are `process-out-0`, `process-out-1`, and `process-out-2`, respectively.
The binder generates three output bindings because it detects the length of the returned `KStream` array.
===== Summary of Function based Programming Styles for Kafka Streams
@@ -1718,9 +1709,8 @@ spring.cloud.stream.bindings.enrichOrder-out-0.binder=kafka1 #kstream
=== State Cleanup
By default, no local state is cleaned up when the binding is stopped.
This matches the default behavior of Spring Kafka version 2.7 and later.
See https://docs.spring.io/spring-kafka/reference/html/#streams-config[Spring Kafka documentation] for more details.
By default, the `KafkaStreams.cleanup()` method is called when the binding is stopped.
See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
To modify this behavior simply add a single `CleanupConfig` `@Bean` (configured to clean up on start, stop, or neither) to the application context; the bean will be detected and wired into the factory bean.
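For example, a `CleanupConfig` bean that skips cleanup on start but cleans up local state on stop (the same shape used in a test elsewhere in this changeset):
```java
@Bean
public CleanupConfig cleanupConfig() {
    // constructor arguments: cleanup on start, cleanup on stop
    return new CleanupConfig(false, true);
}
```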
=== Kafka Streams topology visualization
@@ -1880,102 +1870,6 @@ When there are multiple bindings present on a single function, invoking these op
This is because all the bindings on a single function are backed by the same `StreamsBuilderFactoryBean`.
Therefore, for the function above, either `function-in-0` or `function-out-0` will work.
=== Manually starting Kafka Streams processors
The Spring Cloud Stream Kafka Streams binder offers an abstraction called `StreamsBuilderFactoryManager` on top of the `StreamsBuilderFactoryBean` from Spring for Apache Kafka.
This manager API is used for controlling the multiple `StreamsBuilderFactoryBean` instances (one per processor) in a binder based application.
Therefore, when using the binder, if you want to manually control the starting of the various `StreamsBuilderFactoryBean` objects in the application, you need to use `StreamsBuilderFactoryManager`.
You can set the property `spring.kafka.streams.auto-startup` to `false` in order to turn off auto starting of the processors.
Then, in the application, you can use something like the following to start the processors using `StreamsBuilderFactoryManager`.
```
@Bean
public ApplicationRunner runner(StreamsBuilderFactoryManager sbfm) {
return args -> {
sbfm.start();
};
}
```
This feature is handy when you want your application to start on the main thread and let the Kafka Streams processors start separately.
For example, when you have a large state store that needs to be restored, starting the processors in the default way may block the application from starting.
If you are using some sort of liveness probe mechanism (for example, on Kubernetes), it may conclude that the application is down and attempt a restart.
To avoid this, you can set `spring.kafka.streams.auto-startup` to `false` and follow the approach above.
Keep in mind that, when using the Spring Cloud Stream binder, you are not directly dealing with the `StreamsBuilderFactoryBean` from Spring for Apache Kafka but rather with `StreamsBuilderFactoryManager`, as the `StreamsBuilderFactoryBean` objects are managed internally by the binder.
=== Manually starting Kafka Streams processors selectively
While the approach laid out above unconditionally disables auto start for all the Kafka Streams processors in the application through `StreamsBuilderFactoryManager`, it is often desirable to keep only individually selected Kafka Streams processors from auto starting.
For instance, assume that you have three different functions (processors) in your application and you do not want one of them to start as part of application startup.
Here is an example of such a situation.
```
@Bean
public Function<KStream<?, ?>, KStream<?, ?>> process1() {
}
@Bean
public Consumer<KStream<?, ?>> process2() {
}
@Bean
public BiFunction<KStream<?, ?>, KTable<?, ?>, KStream<?, ?>> process3() {
}
```
In the scenario above, if you set `spring.kafka.streams.auto-startup` to `false`, then none of the processors will auto start during application startup.
In that case, you have to start them programmatically as described above by calling `start()` on the underlying `StreamsBuilderFactoryManager`.
However, if you only want to selectively disable one processor, you have to set `auto-startup` on an individual binding for that processor.
Let us assume that we don't want our `process3` function to auto start.
This is a `BiFunction` with two input bindings - `process3-in-0` and `process3-in-1`.
To avoid auto starting this processor, you can pick either of these input bindings and set `auto-startup` on it.
It does not matter which binding you pick; if you wish, you can set `auto-startup` to `false` on both of them, but one is sufficient.
Because they share the same factory bean, you don't have to set autoStartup to false on both bindings, but it probably makes sense to do so, for clarity.
Here is the Spring Cloud Stream property that you can use to disable auto startup for this processor.
```
spring.cloud.stream.bindings.process3-in-0.consumer.auto-startup: false
```
or
```
spring.cloud.stream.bindings.process3-in-1.consumer.auto-startup: false
```
Then, you can manually start the processor either using the REST endpoint or using the `BindingsEndpoint` API as shown below.
For this, you need to ensure that you have the Spring Boot actuator dependency on the classpath.
```
curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/process3-in-0
```
or
```
@Autowired
BindingsEndpoint endpoint;
@Bean
public ApplicationRunner runner() {
return args -> {
endpoint.changeState("process3-in-0", State.STARTED);
};
}
```
See https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#binding_visualization_control[this section] from the reference docs for more details on this mechanism.
NOTE: When controlling the bindings by disabling `auto-startup` as described in this section, be aware that this is only available for consumer bindings.
In other words, setting it on the producer binding, `process3-out-0`, has no effect on the auto starting of the processor, even though this producer binding uses the same `StreamsBuilderFactoryBean` as the consumer bindings.
=== Tracing using Spring Cloud Sleuth
When Spring Cloud Sleuth is on the classpath of a Spring Cloud Stream Kafka Streams binder based application, both its consumer and producer are automatically instrumented with tracing information.
@@ -2048,8 +1942,6 @@ For common configuration options and properties pertaining to binder, refer to t
==== Kafka Streams Binder Properties
The following properties are available at the binder level and must be prefixed with `spring.cloud.stream.kafka.streams.binder.`
Any properties provided by the Kafka binder that are re-used in the Kafka Streams binder must be prefixed with `spring.cloud.stream.kafka.streams.binder` instead of `spring.cloud.stream.kafka.binder`.
The only exception to this rule is the Kafka bootstrap servers property, in which case either prefix works.
configuration::
Map with a key/value pair containing properties pertaining to Apache Kafka Streams API.
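For example, under the prefix rule above (the broker address is a placeholder):
```
# Kafka Streams binder properties use the kafka.streams.binder prefix
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms: 1000
# bootstrap servers is the one exception; either prefix works
spring.cloud.stream.kafka.streams.binder.brokers: localhost:9092
spring.cloud.stream.kafka.binder.brokers: localhost:9092
```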

View File

@@ -321,12 +321,6 @@ When using a transactional binder, the offset of a recovered record (e.g. when r
Setting this property to `false` suppresses committing the offset of a recovered record.
+
Default: true.
commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to configure error handlers if the application does not want to use a `ListenerContainerCustomizer` and then check the destination/group combination to set an error handler.
+
Default: none.
[[reset-offsets]]
==== Resetting Offsets
@@ -448,18 +442,18 @@ Default: none (the binder-wide default of -1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
+
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
+
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
+
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
Default: null.
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.

pom.xml
View File

@@ -2,30 +2,21 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.2.0-M2</version>
<version>3.1.4</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>3.1.0-M2</version>
<version>3.0.4</version>
<relativePath />
</parent>
<scm>
<url>https://github.com/spring-cloud/spring-cloud-stream-binder-kafka</url>
<connection>scm:git:git://github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</connection>
<developerConnection>
scm:git:ssh://git@github.com/spring-cloud/spring-cloud-stream-binder-kafka.git
</developerConnection>
<tag>HEAD</tag>
</scm>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.8.0-M3</spring-kafka.version>
<spring-integration-kafka.version>5.5.2</spring-integration-kafka.version>
<kafka.version>2.8.0</kafka.version>
<spring-cloud-schema-registry.version>1.2.0-M2</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.2.0-M2</spring-cloud-stream.version>
<spring-kafka.version>2.6.10</spring-kafka.version>
<spring-integration-kafka.version>5.4.10</spring-integration-kafka.version>
<kafka.version>2.6.2</kafka.version>
<spring-cloud-schema-registry.version>1.1.4</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.1.4</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
@@ -36,7 +27,7 @@
<module>spring-cloud-stream-binder-kafka-core</module>
<module>spring-cloud-stream-binder-kafka-streams</module>
<module>docs</module>
</modules>
</modules>
<dependencyManagement>
<dependencies>
@@ -148,6 +139,10 @@
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
@@ -170,16 +165,6 @@
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
<compilerArgument>-parameters</compilerArgument>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
@@ -190,91 +175,74 @@
<profiles>
<profile>
<id>spring</id>
</profile>
<profile>
<id>coverage</id>
<activation>
<property>
<name>env.TRAVIS</name>
<value>true</value>
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.7.9</version>
<executions>
<execution>
<id>agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>report</id>
<phase>test</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
</profiles>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
</repository>
<repository>
<id>rsocket-snapshots</id>
<name>RSocket Snapshots</name>
<url>https://oss.jfrog.org/oss-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
</pluginRepository>
</pluginRepositories>
<reporting>
<plugins>
<plugin>

View File

@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.2.0-M2</version>
<version>3.1.4</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>

View File

@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.2.0-M2</version>
<version>3.1.4</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>

View File

@@ -210,12 +210,6 @@ public class KafkaConsumerProperties {
*/
private boolean txCommitRecovered = true;
/**
* CommonErrorHandler bean name per consumer binding.
* @since 3.2
*/
private String commonErrorHandlerBeanName;
/**
* @return if each record needs to be acknowledged.
*
@@ -535,11 +529,4 @@ public class KafkaConsumerProperties {
this.txCommitRecovered = txCommitRecovered;
}
public String getCommonErrorHandlerBeanName() {
return commonErrorHandlerBeanName;
}
public void setCommonErrorHandlerBeanName(String commonErrorHandlerBeanName) {
this.commonErrorHandlerBeanName = commonErrorHandlerBeanName;
}
}

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.2.0-M2</version>
<version>3.1.4</version>
</parent>
<properties>

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2021 the original author or authors.
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -161,8 +161,6 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
@@ -175,8 +173,6 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
arguments[index] = table;
}
}
@@ -271,9 +267,9 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
Assert.state(!bindingConfig.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG),
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG + " cannot be overridden at the binding level; "
+ "use multiple binders instead");
// We will only add the per binding configuration to the current streamConfiguration and not the global one.
streamConfigGlobalProperties.putAll(bindingConfig);
streamConfiguration
.putAll(bindingConfig);
.putAll(extendedConsumerProperties.getConfiguration());
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.

View File

@@ -1,51 +0,0 @@
/*
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
import org.springframework.cloud.stream.config.BindingHandlerAdvise.MappingsProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* {@link EnableAutoConfiguration Auto-configuration} for extended binding metadata for Kafka Streams.
*
* @author Chris Bono
* @since 3.2
*/
@Configuration(proxyBeanMethods = false)
public class ExtendedBindingHandlerMappingsProviderAutoConfiguration {
@Bean
public MappingsProvider kafkaStreamsExtendedPropertiesDefaultMappingsProvider() {
return () -> {
Map<ConfigurationPropertyName, ConfigurationPropertyName> mappings = new HashMap<>();
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.default"));
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.bindings"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams.default"));
return mappings;
};
}
}

View File

@@ -412,8 +412,8 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KafkaStreamsBindingInformationCatalogue catalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
@Nullable KafkaStreamsMicrometerListener listener, KafkaProperties kafkaProperties) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener, kafkaProperties);
@Nullable KafkaStreamsMicrometerListener listener) {
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener);
}
@Bean

View File

@@ -54,8 +54,6 @@ public class KafkaStreamsBindingInformationCatalogue {
private final Map<String, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanPerBinding = new HashMap<>();
private final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> consumerPropertiesPerSbfb = new HashMap<>();
private final Map<Object, ResolvableType> outboundKStreamResolvables = new HashMap<>();
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
@@ -139,19 +137,11 @@ public class KafkaStreamsBindingInformationCatalogue {
this.streamsBuilderFactoryBeanPerBinding.put(binding, streamsBuilderFactoryBean);
}
void addConsumerPropertiesPerSbfb(StreamsBuilderFactoryBean streamsBuilderFactoryBean, ConsumerProperties consumerProperties) {
this.consumerPropertiesPerSbfb.computeIfAbsent(streamsBuilderFactoryBean, k -> new ArrayList<>());
this.consumerPropertiesPerSbfb.get(streamsBuilderFactoryBean).add(consumerProperties);
}
public Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> getConsumerPropertiesPerSbfb() {
return this.consumerPropertiesPerSbfb;
}
Map<String, StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeanPerBinding() {
return this.streamsBuilderFactoryBeanPerBinding;
}
void addOutboundKStreamResolvable(Object key, ResolvableType outboundResolvable) {
this.outboundKStreamResolvables.put(key, outboundResolvable);
}

View File

@@ -529,8 +529,6 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(input));
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
final Class<?> valueClass =

View File

@@ -17,12 +17,12 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
@@ -37,9 +37,9 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
*/
public class KafkaStreamsRegistry {
private final Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new ConcurrentHashMap<>();
private Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new HashMap<>();
private final Set<KafkaStreams> kafkaStreams = ConcurrentHashMap.newKeySet();
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();
Set<KafkaStreams> getKafkaStreams() {
Set<KafkaStreams> currentlyRunningKafkaStreams = new HashSet<>();

View File

@@ -320,9 +320,6 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(inboundName, streamsBuilderFactoryBean);
this.kafkaStreamsBindingInformationCatalogue.addConsumerPropertiesPerSbfb(streamsBuilderFactoryBean,
bindingServiceProperties.getConsumerProperties(inboundName));
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {

View File

@@ -16,15 +16,9 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
@@ -44,7 +38,7 @@ import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;
*
* @author Soby Chacko
*/
public class StreamsBuilderFactoryManager implements SmartLifecycle {
class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
@@ -56,23 +50,19 @@ public class StreamsBuilderFactoryManager implements SmartLifecycle {
private volatile boolean running;
private final KafkaProperties kafkaProperties;
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
KafkaStreamsMicrometerListener listener,
KafkaProperties kafkaProperties) {
KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
KafkaStreamsMicrometerListener listener) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
this.listener = listener;
this.kafkaProperties = kafkaProperties;
}
@Override
public boolean isAutoStartup() {
return this.kafkaProperties == null || this.kafkaProperties.getStreams().isAutoStartup();
return true;
}
@Override
@@ -89,24 +79,13 @@ public class StreamsBuilderFactoryManager implements SmartLifecycle {
try {
Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = this.kafkaStreamsBindingInformationCatalogue
.getStreamsBuilderFactoryBeans();
int n = 0;
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
if (this.listener != null) {
streamsBuilderFactoryBean.addListener(this.listener);
}
// By default, we shut down the client if there is an uncaught exception in the application.
// Users can override this by customizing SBFB. See this issue for more details:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1110
streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(exception ->
StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
// Starting the stream.
final Map<StreamsBuilderFactoryBean, List<ConsumerProperties>> bindingServicePropertiesPerSbfb =
this.kafkaStreamsBindingInformationCatalogue.getConsumerPropertiesPerSbfb();
final List<ConsumerProperties> consumerProperties = bindingServicePropertiesPerSbfb.get(streamsBuilderFactoryBean);
final boolean autoStartupDisabledOnAtLeastOneConsumerBinding = consumerProperties.stream().anyMatch(consumerProperties1 -> !consumerProperties1.isAutoStartup());
if (!autoStartupDisabledOnAtLeastOneConsumerBinding) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
}
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
}
if (this.kafkaStreamsBinderMetrics != null) {
this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);

View File

@@ -19,11 +19,9 @@ package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.TreeMap;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
@@ -109,9 +107,6 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
final String definition = streamFunctionProperties.getDefinition();
final String[] functionUnits = StringUtils.hasText(definition) ? definition.split(";") : new String[]{};
final Set<String> kafkaStreamsMethodNames = new HashSet<>(kafkaStreamsOnlyResolvableTypes.keySet());
kafkaStreamsMethodNames.addAll(this.resolvableTypeMap.keySet());
if (functionUnits.length == 0) {
for (String s : getResolvableTypes().keySet()) {
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(s)};
@@ -128,30 +123,21 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
ResolvableType[] resolvableTypes = new ResolvableType[composedFunctions.length];
int i = 0;
boolean nonKafkaStreamsFunctionsFound = false;
for (String split : composedFunctions) {
derivedNameFromComposed = derivedNameFromComposed.concat(split);
resolvableTypes[i++] = getResolvableTypes().get(split);
if (!kafkaStreamsMethodNames.contains(split)) {
nonKafkaStreamsFunctionsFound = true;
break;
}
}
if (!nonKafkaStreamsFunctionsFound) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, derivedNameFromComposed, resolvableTypes, rootBeanDefinition);
}
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, derivedNameFromComposed, resolvableTypes, rootBeanDefinition);
}
else {
// Ensure that the function unit is a Kafka Streams function
if (kafkaStreamsMethodNames.contains(functionUnit)) {
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(functionUnit)};
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, functionUnit, resolvableTypes, rootBeanDefinition);
}
ResolvableType[] resolvableTypes = new ResolvableType[]{getResolvableTypes().get(functionUnit)};
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
registerKakaStreamsProxyFactory(registry, functionUnit, resolvableTypes, rootBeanDefinition);
}
}
}

View File

@@ -73,16 +73,15 @@ public class KafkaStreamsFunctionProcessorInvoker {
}
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(derivedNameFromComposed[0])).findFirst();
proxyFactory.ifPresent(kafkaStreamsBindableProxyFactory ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(composedFunctions[0]),
derivedNameFromComposed[0], kafkaStreamsBindableProxyFactory, methods.get(derivedNameFromComposed[0]), resolvableTypeMap.get(composedFunctions[composedFunctions.length - 1]), composedFunctions));
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(composedFunctions[0]),
derivedNameFromComposed[0], proxyFactory.get(), methods.get(derivedNameFromComposed[0]), resolvableTypeMap.get(composedFunctions[composedFunctions.length - 1]), composedFunctions);
}
else {
Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(functionUnit)).findFirst();
proxyFactory.ifPresent(kafkaStreamsBindableProxyFactory ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(functionUnit), functionUnit,
kafkaStreamsBindableProxyFactory, methods.get(functionUnit), null));
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(resolvableTypeMap.get(functionUnit), functionUnit,
proxyFactory.get(), methods.get(functionUnit), null);
}
}
}

View File

@@ -1,5 +1,4 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.ExtendedBindingHandlerMappingsProviderAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpointAutoConfiguration

View File

@@ -1,83 +0,0 @@
/*
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.junit.jupiter.api.Test;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.runner.ApplicationContextRunner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import static org.assertj.core.api.Assertions.assertThat;
/**
* Tests for {@link ExtendedBindingHandlerMappingsProviderAutoConfiguration}.
*/
class ExtendedBindingHandlerMappingsProviderAutoConfigurationTests {
private final ApplicationContextRunner contextRunner = new ApplicationContextRunner()
.withUserConfiguration(KafkaStreamsTestApp.class)
.withPropertyValues(
"spring.cloud.stream.kafka.streams.default.consumer.application-id: testApp123",
"spring.cloud.stream.kafka.streams.default.consumer.consumed-as: default-consumer",
"spring.cloud.stream.kafka.streams.default.consumer.materialized-as: default-materializer",
"spring.cloud.stream.kafka.streams.default.producer.produced-as: default-producer",
"spring.cloud.stream.kafka.streams.default.producer.key-serde: default-foo");
@Test
void defaultsUsedWhenNoCustomBindingProperties() {
this.contextRunner.run((context) -> {
assertThat(context)
.hasNotFailed()
.hasSingleBean(KafkaStreamsExtendedBindingProperties.class);
KafkaStreamsExtendedBindingProperties extendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);
assertThat(extendedBindingProperties.getExtendedConsumerProperties("process-in-0"))
.hasFieldOrPropertyWithValue("applicationId", "testApp123")
.hasFieldOrPropertyWithValue("consumedAs", "default-consumer")
.hasFieldOrPropertyWithValue("materializedAs", "default-materializer");
assertThat(extendedBindingProperties.getExtendedProducerProperties("process-out-0"))
.hasFieldOrPropertyWithValue("producedAs", "default-producer")
.hasFieldOrPropertyWithValue("keySerde", "default-foo");
});
}
@Test
void defaultsRespectedWhenCustomBindingProperties() {
this.contextRunner
.withPropertyValues(
"spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.consumed-as: custom-consumer",
"spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.produced-as: custom-producer")
.run((context) -> {
assertThat(context)
.hasNotFailed()
.hasSingleBean(KafkaStreamsExtendedBindingProperties.class);
KafkaStreamsExtendedBindingProperties extendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);
assertThat(extendedBindingProperties.getExtendedConsumerProperties("process-in-0"))
.hasFieldOrPropertyWithValue("applicationId", "testApp123")
.hasFieldOrPropertyWithValue("consumedAs", "custom-consumer")
.hasFieldOrPropertyWithValue("materializedAs", "default-materializer");
assertThat(extendedBindingProperties.getExtendedProducerProperties("process-out-0"))
.hasFieldOrPropertyWithValue("producedAs", "custom-producer")
.hasFieldOrPropertyWithValue("keySerde", "default-foo");
});
}
@EnableAutoConfiguration
static class KafkaStreamsTestApp {
}
}

View File

@@ -37,7 +37,6 @@ import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.mockito.Mockito;
@@ -52,7 +51,6 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -123,7 +121,6 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
@Test
@Ignore
public void testKstreamBinderWithPojoInputAndStringOuput() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@@ -223,11 +220,6 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
return new Foo(interactiveQueryService);
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(false, true);
}
static class Foo {
InteractiveQueryService interactiveQueryService;

View File

@@ -16,8 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
import java.util.Map;
import java.util.Properties;
import java.util.function.Consumer;
import org.apache.kafka.common.security.JaasUtils;
@@ -33,11 +31,8 @@ import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
/**
* @author Soby Chacko
*/
@@ -101,43 +96,6 @@ public class KafkaStreamsBinderBootstrapTest {
applicationContext.close();
}
@Test
@SuppressWarnings("unchecked")
public void testStreamConfigGlobalProperties_GH1149() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(
SimpleKafkaStreamsApplication.class).web(WebApplicationType.NONE).run(
"--spring.cloud.function.definition=input1;input2;input3",
"--spring.cloud.stream.kafka.streams.bindings.input1-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foo",
"--spring.cloud.stream.kafka.streams.bindings.input2-in-0.consumer.configuration.spring.json.value.type.method=com.test.MyClass",
"--spring.cloud.stream.kafka.streams.bindings.input3-in-0.consumer.application-id"
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
// Make sure that global stream configs do not contain individual binding config set on second function.
assertThat(streamConfigGlobalProperties.containsKey("spring.json.value.type.method")).isFalse();
// Make sure that only input2 function gets the specific binding property set on it.
final StreamsBuilderFactoryBean input1SBFB = applicationContext.getBean("&stream-builder-input1", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration1 = input1SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration1.containsKey("spring.json.value.type.method")).isFalse();
final StreamsBuilderFactoryBean input2SBFB = applicationContext.getBean("&stream-builder-input2", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration2 = input2SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration2.containsKey("spring.json.value.type.method")).isTrue();
final StreamsBuilderFactoryBean input3SBFB = applicationContext.getBean("&stream-builder-input3", StreamsBuilderFactoryBean.class);
final Properties streamsConfiguration3 = input3SBFB.getStreamsConfiguration();
assertThat(streamsConfiguration3.containsKey("spring.json.value.type.method")).isFalse();
applicationContext.close();
}
@SpringBootApplication
static class SimpleKafkaStreamsApplication {

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2021 the original author or authors.
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,7 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
@@ -49,9 +48,6 @@ import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class KafkaStreamsBinderWordCountBranchesFunctionTests {
@ClassRule
@@ -183,30 +179,22 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
public static class WordCountProcessorApplication {
@Bean
@SuppressWarnings({"unchecked"})
@SuppressWarnings("unchecked")
public Function<KStream<Object, String>, KStream<?, WordCount>[]> process() {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input -> {
final Map<String, KStream<Object, WordCount>> stringKStreamMap = input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.split()
.branch(isEnglish)
.branch(isFrench)
.branch(isSpanish)
.noDefaultBranch();
return stringKStreamMap.values().toArray(new KStream[0]);
};
return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-branch"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
}

View File

@@ -16,7 +16,6 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
@@ -52,7 +51,6 @@ import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.StreamsBuilderFactoryManager;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;
import org.springframework.cloud.stream.binding.InputBindingLifecycle;
import org.springframework.cloud.stream.binding.OutputBindingLifecycle;
@@ -75,7 +73,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1", "counts-2", "counts-5", "counts-6");
"counts", "counts-1", "counts-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@@ -91,7 +89,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2", "counts-5", "counts-6");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2");
}
@AfterClass
@@ -178,23 +176,22 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
@Test
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer() {
public void testKstreamWordCountFunctionWithGeneratedApplicationId() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-5",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-5",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-5", "counts-5");
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
}
}
@@ -206,7 +203,6 @@ public class KafkaStreamsBinderWordCountFunctionTests {
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.kafka.streams.binder.application-id=testKstreamWordCountFunctionWithCustomProducerStreamPartitioner",
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
@@ -238,90 +234,6 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
}
@Test
public void testKstreamBinderAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.kafka.streams.auto-startup=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-3",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-3",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryManager streamsBuilderFactoryManager = context.getBean(StreamsBuilderFactoryManager.class);
assertThat(streamsBuilderFactoryManager.isAutoStartup()).isFalse();
assertThat(streamsBuilderFactoryManager.isRunning()).isFalse();
}
}
@Test
public void testKstreamIndividualBindingAutoStartup() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-4",
"--spring.cloud.stream.bindings.process-in-0.consumer.auto-startup=false",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-4",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean(StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean.isRunning()).isFalse();
streamsBuilderFactoryBean.start();
assertThat(streamsBuilderFactoryBean.isRunning()).isTrue();
}
}
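The two tests above exercise the binder's auto-startup controls: spring.kafka.streams.auto-startup=false keeps the whole StreamsBuilderFactoryManager stopped, while spring.cloud.stream.bindings.<binding>.consumer.auto-startup=false leaves just that processor stopped. A minimal sketch of starting such a processor later, assuming a single StreamsBuilderFactoryBean in the context; the component name and the trigger are illustrative, not from this changeset:
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.stereotype.Component;

@Component
class DeferredStreamsStarter {

    private final StreamsBuilderFactoryBean factoryBean;

    DeferredStreamsStarter(StreamsBuilderFactoryBean factoryBean) {
        this.factoryBean = factoryBean;
    }

    // Invoked by the application once it is ready to consume; builds the
    // topology and starts the underlying KafkaStreams instance.
    void startProcessing() {
        if (!factoryBean.isRunning()) {
            factoryBean.start();
        }
    }
}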
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-6",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-6",
"--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-6");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-6");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
@@ -429,20 +341,4 @@ public class KafkaStreamsBinderWordCountFunctionTests {
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
}
}
@EnableAutoConfiguration
static class OutboundNullApplication {
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
return input -> input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foobar-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}
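The removed testOutboundNullValueIsHandledGracefully guarded the fix for issue 774: with useNativeEncoding=false on the output binding, a null payload must bypass message conversion and reach the topic as a tombstone record. A minimal sketch of producing tombstones deliberately, so that a compacted downstream topic drops the key; the bean name and the DELETE convention are illustrative assumptions:
// Maps a "DELETE" command payload to a null value (a tombstone); every other
// value passes through unchanged. Assumes the KStream import used above.
@Bean
public java.util.function.Function<KStream<String, String>, KStream<String, String>> tombstoner() {
    return input -> input.mapValues(value -> "DELETE".equals(value) ? null : value);
}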


@@ -41,13 +41,7 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -76,7 +70,7 @@ public class StreamToGlobalKTableFunctionTests {
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
- try (ConfigurableApplicationContext context = app.run("--server.port=0",
+ try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.function.bindings.process-in-0=order",
@@ -95,44 +89,7 @@ public class StreamToGlobalKTableFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"StreamToGlobalKTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-in-2.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Testing certain ancillary configuration of GlobalKTable around topic creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("process-in-0");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicy).isEqualTo("compact");
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("process-in-2");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyY).isEqualTo("compact");
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
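The assertions removed in this hunk checked that per-binding topic properties reach the topic provisioner. A minimal sketch of the run arguments that feed them, reusing the exact properties this diff removes; process-in-0 through process-in-2 are the binding names from the test above:
// Requests compacted topics for each input binding at provisioning time.
app.run(
    "--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.topic.properties.cleanup.policy=compact",
    "--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
    "--spring.cloud.stream.kafka.streams.bindings.process-in-2.consumer.topic.properties.cleanup.policy=compact");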


@@ -16,12 +16,10 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
@@ -40,28 +38,17 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.StreamJoined;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -186,8 +173,9 @@ public class StreamToTableJoinFunctionTests {
}
}
private void runTest(SpringApplication app, Consumer<String, Long> consumer) {
- try (ConfigurableApplicationContext context = app.run("--server.port=0",
+ try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
@@ -199,8 +187,6 @@ public class StreamToTableJoinFunctionTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Input 1: Region per user (multiple records allowed per user).
@@ -270,30 +256,6 @@ public class StreamToTableJoinFunctionTests {
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
// Testing certain ancillary configuration of KTable and KStream bindings around topic creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("process-in-1");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("process-out-0");
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyOutput).isEqualTo("compact");
}
finally {
consumer.close();
@@ -436,34 +398,6 @@ public class StreamToTableJoinFunctionTests {
}
}
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
// All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
// All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
/**
* Tuple for a region and its associated number of clicks.
*/
@@ -505,14 +439,9 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
- .reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
+ .reduce(Long::sum)
.toStream()));
}
- @Bean
- public CleanupConfig cleanupConfig() {
- return new CleanupConfig(false, true);
- }
}
@EnableAutoConfiguration
@@ -527,14 +456,9 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
- .reduce(Long::sum, Materialized.as("CountClicks-" + UUID.randomUUID()))
+ .reduce(Long::sum)
.toStream());
}
- @Bean
- public CleanupConfig cleanupConfig() {
- return new CleanupConfig(false, true);
- }
}
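The reduce() change above drops the randomized store name in favor of an unnamed state store. A stable name still matters when the state should be reachable through interactive queries; a minimal sketch, where the stream variable and the store name are illustrative:
// A named store is addressable through interactive queries; an unnamed store
// gets a generated name that can change whenever the topology changes.
userClicksStream
        .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
        .reduce(Long::sum, Materialized.as("clicks-per-region"));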
@EnableAutoConfiguration
@@ -551,29 +475,4 @@ public class StreamToTableJoinFunctionTests {
}
}
@EnableAutoConfiguration
public static class TrivialKTableApp {
@Bean
public java.util.function.Consumer<KTable<String, String>> process() {
return inputTable -> inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}
@EnableAutoConfiguration
public static class JoinProcessor {
@Bean
public BiConsumer<KStream<String, String>, KStream<String, String>> testProcessor() {
return (input1Stream, input2Stream) -> input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(Duration.ofMillis(5)),
StreamJoined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
}
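The removed JoinProcessor joins two KStreams over a time window. StreamJoined.with takes the key serde, then the left (calling) stream's value serde, then the right (other) stream's value serde, so the join's internal state stores do not fall back on the configured defaults. A usage sketch with illustrative streams and a trivial value joiner:
// Serde order: join key, left value, right value.
KStream<String, String> joined = left.join(right,
        (leftValue, rightValue) -> leftValue + rightValue,
        JoinWindows.of(Duration.ofSeconds(5)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));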


@@ -25,7 +25,6 @@ import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;
import org.apache.kafka.streams.kstream.KStream;
import org.assertj.core.util.Lists;
import org.junit.Assert;
@@ -46,9 +45,6 @@ import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderHealthIndicator;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.KafkaStreamsCustomizer;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -182,7 +178,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
embeddedKafka.consumeFromEmbeddedTopics(consumer, topics);
KafkaTestUtils.getRecords(consumer, 1000);
- TimeUnit.SECONDS.sleep(5);
+ TimeUnit.SECONDS.sleep(2);
checkHealth(context, expected);
}
finally {
@@ -285,19 +281,6 @@ public class KafkaStreamsBinderHealthIndicatorTests {
});
}
- @Bean
- public StreamsBuilderFactoryBeanConfigurer customizer() {
- return factoryBean -> {
- factoryBean.setKafkaStreamsCustomizer(new KafkaStreamsCustomizer() {
- @Override
- public void customize(KafkaStreams kafkaStreams) {
- kafkaStreams.setUncaughtExceptionHandler(exception ->
- StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);
- }
- });
- };
- }
}
public interface KafkaStreamsProcessorX {
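The configurer removed in this hunk shut the Kafka Streams client down on any stream-thread exception. A minimal sketch of the replace-thread alternative through the same spring-kafka hooks; the bean name is illustrative:
@Bean
public StreamsBuilderFactoryBeanConfigurer replaceThreadConfigurer() {
    // Replace only the failed stream thread instead of stopping the client.
    return factoryBean -> factoryBean.setKafkaStreamsCustomizer(kafkaStreams ->
            kafkaStreams.setUncaughtExceptionHandler(exception ->
                    StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD));
}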


@@ -42,8 +42,6 @@ import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -165,15 +163,10 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.count(Materialized.as("WordCounts-tKWCWSIAP0")).toStream()
.count(Materialized.as("WordCounts")).toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key, value)));
}
- @Bean
- public CleanupConfig cleanupConfig() {
- return new CleanupConfig(false, true);
- }
}
static class WordCount {


@@ -1,5 +1,5 @@
/*
- * Copyright 2017-2021 the original author or authors.
+ * Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,7 +21,6 @@ import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import java.util.Properties;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -43,6 +42,10 @@ import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.test.util.TestUtils;
@@ -54,6 +57,7 @@ import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
@@ -62,11 +66,11 @@ import static org.assertj.core.api.Assertions.assertThat;
* @author Soby Chacko
* @author Gary Russell
*/
- public class KafkaStreamsBinderTombstoneTests {
+ public class KafkaStreamsBinderWordCountIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts-1");
"counts", "counts-1");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -81,7 +85,7 @@ public class KafkaStreamsBinderTombstoneTests {
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
- embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts-1");
+ embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1");
}
@AfterClass
@@ -89,6 +93,31 @@ public class KafkaStreamsBinderTombstoneTests {
consumer.close();
}
@Test
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer()
throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
}
}
@Test
public void testSendToTombstone()
throws Exception {
@@ -98,22 +127,24 @@ public class KafkaStreamsBinderTombstoneTests {
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.bindings.input.destination=words-1",
"--spring.cloud.stream.bindings.output.destination=counts-1",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=testKstreamWordCountWithInputBindingLevelApplicationId",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.bindings.process-in-0.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.bindings.input.consumer.concurrency=2",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words-1", "counts-1");
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
.getBean("&stream-builder-WordCountProcessorApplication-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
assertThat(kafkaStreams).isNotNull();
// Ensure that concurrency settings are mapped to number of stream task
@@ -169,21 +200,26 @@ public class KafkaStreamsBinderTombstoneTests {
}
}
+ @EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class WordCountProcessorApplication {
- @Bean
- public Function<KStream<Object, String>, KStream<String, WordCount>> process() {
+ @StreamListener
+ @SendTo("output")
+ public KStream<?, WordCount> process(
+ @Input("input") KStream<Object, String> input) {
- return input -> input
- .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
+ return input
+ .flatMapValues(
+ value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
- .windowedBy(TimeWindows.of(5000))
- .count(Materialized.as("foo-WordCounts"))
+ .windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.toStream()
- .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
- new Date(key.window().start()), new Date(key.window().end()))));
+ .map((key, value) -> new KeyValue<>(null,
+ new WordCount(key.key(), value,
+ new Date(key.window().start()),
+ new Date(key.window().end()))));
}
@Bean


@@ -108,7 +108,7 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean,
"cleanupConfig", CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isFalse();
- assertThat(cleanup.cleanupOnStop()).isFalse();
+ assertThat(cleanup.cleanupOnStop()).isTrue();
}
finally {
context.close();


@@ -0,0 +1,146 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class OutboundValueNullSkippedConversionTest {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
// The following test verifies the fixes made for this issue:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
@Test
public void testOutboundNullValueIsHandledGracefully()
throws Exception {
SpringApplication app = new SpringApplication(
OutboundNullApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value() == null).isTrue();
}
finally {
pf.destroy();
}
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
static class OutboundNullApplication {
@StreamListener
@SendTo("output")
public KStream<?, KafkaStreamsBinderWordCountIntegrationTests.WordCount> process(
@Input("input") KStream<Object, String> input) {
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, null));
}
}
}
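OutboundNullApplication buckets counts into five-second windows, and several tests here pass timeWindow.length and timeWindow.advanceBy on the command line. A minimal sketch of how those values map onto the TimeWindows API; the one-second advance is an illustrative value, not taken from these tests:
// timeWindow.length=5000, timeWindow.advanceBy=0 -> tumbling (non-overlapping)
TimeWindows tumbling = TimeWindows.of(Duration.ofSeconds(5));
// a non-zero advanceBy yields hopping (overlapping) windows
TimeWindows hopping = TimeWindows.of(Duration.ofSeconds(5)).advanceBy(Duration.ofSeconds(1));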


@@ -0,0 +1,383 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class StreamToGlobalKTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"enriched-order");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<Long, EnrichedOrder> consumer;
@Test
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(
StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.input-x.destination=customers",
"--spring.cloud.stream.bindings.input-y.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToGlobalKTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of GlobalKTable around topic creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("input");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicy).isEqualTo("compact");
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-y");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyY).isEqualTo("compact");
Map<String, Object> senderPropsCustomer = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(
senderPropsCustomer);
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer,
true);
template.setDefaultTopic("customers");
for (long i = 0; i < 5; i++) {
final Customer customer = new Customer();
customer.setName("customer-" + i);
template.sendDefault(i, customer);
}
Map<String, Object> senderPropsProduct = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(
senderPropsProduct);
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct,
true);
productTemplate.setDefaultTopic("products");
for (long i = 0; i < 5; i++) {
final Product product = new Product();
product.setName("product-" + i);
productTemplate.sendDefault(i, product);
}
Map<String, Object> senderPropsOrder = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(
senderPropsOrder);
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
orderTemplate.setDefaultTopic("orders");
for (long i = 0; i < 5; i++) {
final Order order = new Order();
order.setCustomerId(i);
order.setProductId(i);
orderTemplate.sendDefault(i, order);
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
JsonDeserializer.class);
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"org.springframework.cloud.stream.binder.kafka.streams.integration."
+ "StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "enriched-order");
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<Long, EnrichedOrder>> enrichedOrders = new ArrayList<>();
do {
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<Long, EnrichedOrder> record : records) {
enrichedOrders.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < 5 && (System.currentTimeMillis() - start) < 30000);
assertThat(count == 5).isTrue();
assertThat(enrichedOrders.size() == 5).isTrue();
enrichedOrders.sort(Comparator.comparing(o -> o.key));
for (int i = 0; i < 5; i++) {
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders
.get(i);
assertThat(enrichedOrderKeyValue.key == i).isTrue();
EnrichedOrder enrichedOrder = enrichedOrderKeyValue.value;
assertThat(enrichedOrder.getOrder().customerId == i).isTrue();
assertThat(enrichedOrder.getOrder().productId == i).isTrue();
assertThat(enrichedOrder.getCustomer().name.equals("customer-" + i))
.isTrue();
assertThat(enrichedOrder.getProduct().name.equals("product-" + i))
.isTrue();
}
pfCustomer.destroy();
pfProduct.destroy();
pfOrder.destroy();
consumer.close();
}
finally {
context.close();
}
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("input-x")
GlobalKTable<?, ?> inputX();
@Input("input-y")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
public static class OrderEnricherApplication {
@StreamListener
@SendTo("output")
public KStream<Long, EnrichedOrder> process(
@Input("input") KStream<Long, Order> ordersStream,
@Input("input-x") GlobalKTable<Long, Customer> customers,
@Input("input-y") GlobalKTable<Long, Product> products) {
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(
customers, (orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order));
return customerOrdersStream.join(products,
(orderId, customerOrder) -> customerOrder.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
});
}
}
static class Order {
long customerId;
long productId;
public long getCustomerId() {
return customerId;
}
public void setCustomerId(long customerId) {
this.customerId = customerId;
}
public long getProductId() {
return productId;
}
public void setProductId(long productId) {
this.productId = productId;
}
}
static class Customer {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class Product {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class EnrichedOrder {
Product product;
Customer customer;
Order order;
public Product getProduct() {
return product;
}
public void setProduct(Product product) {
this.product = product;
}
public Customer getCustomer() {
return customer;
}
public void setCustomer(Customer customer) {
this.customer = customer;
}
public Order getOrder() {
return order;
}
public void setOrder(Order order) {
this.order = order;
}
}
private static class CustomerOrder {
private final Customer customer;
private final Order order;
CustomerOrder(final Customer customer, final Order order) {
this.customer = customer;
this.order = order;
}
long productId() {
return order.getProductId();
}
}
}
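The process method above joins a KStream against two GlobalKTables. Unlike a KStream-to-KTable join, the GlobalKTable variant takes a KeyValueMapper that derives the lookup key for each record, so the stream and the table do not have to be co-partitioned. The key-mapping step in isolation, using the types from the test:
// The mapper extracts the foreign key (customerId) from each order; the join
// then looks that key up in the fully replicated customers table.
KStream<Long, CustomerOrder> customerOrders = ordersStream.join(
        customers,
        (orderId, order) -> order.getCustomerId(),
        (order, customer) -> new CustomerOrder(customer, order));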


@@ -0,0 +1,497 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class StreamToTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"output-topic-1", "output-topic-2", "user-clicks-2", "user-regions-2");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testStreamToTable() throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-1",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-1",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-1",
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
// Testing certain ancillary configuration of KTable and KStream bindings around topic creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsProducerProperties producerProperties = (KafkaStreamsProducerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedProducerProperties("output");
String cleanupPolicyOutput = producerProperties.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyOutput).isEqualTo("compact");
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-1");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 2: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L), new KeyValue<>("bob", 4L),
new KeyValue<>("chao", 25L), new KeyValue<>("bob", 19L),
new KeyValue<>("dave", 56L), new KeyValue<>("eve", 78L),
new KeyValue<>("alice", 40L), new KeyValue<>("fang", 99L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-1");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 109L),
new KeyValue<>("asia", 124L));
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest()
throws Exception {
SpringApplication app = new SpringApplication(
CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-2");
// Produce data first to the input topic to test the startOffset setting on the
// binding (which is set to earliest below).
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L),
new KeyValue<>("alice", 100L), new KeyValue<>("alice", 100L));
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-2");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
// Thread.sleep(10000L);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=helloxyz-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
Thread.sleep(1000L);
// Input 2: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
"alice", "asia"), /* Alice lived in Asia originally... */
new KeyValue<>("bob", "americas"), new KeyValue<>("chao", "asia"),
new KeyValue<>("dave", "europe"), new KeyValue<>("alice",
"europe"), /* ...but moved to Europe some time later. */
new KeyValue<>("eve", "americas"), new KeyValue<>("fang", "asia"));
Map<String, Object> senderProps1 = KafkaTestUtils
.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(
senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-2");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks1 = Arrays.asList(
new KeyValue<>("bob", 4L), new KeyValue<>("chao", 25L),
new KeyValue<>("bob", 19L), new KeyValue<>("dave", 56L),
new KeyValue<>("eve", 78L), new KeyValue<>("fang", 99L));
for (KeyValue<String, Long> keyValue : userClicks1) {
template.sendDefault(keyValue.key, keyValue.value);
}
List<KeyValue<String, Long>> expectedClicksPerRegion = Arrays.asList(
new KeyValue<>("americas", 101L), new KeyValue<>("europe", 56L),
new KeyValue<>("asia", 124L),
// The ten pre-produced alice records (100 clicks each, 1000 in total)
// were already in the topic before the application started. Because
// startOffset is set to earliest for this binding, they are still
// read, but the join cannot associate them with a valid region yet,
// thus UNKNOWN.
new KeyValue<>("UNKNOWN", 1000L));
// Verify that we receive the expected data
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<String, Long>> actualClicksPerRegion = new ArrayList<>();
do {
ConsumerRecords<String, Long> records = KafkaTestUtils
.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<String, Long> record : records) {
System.out.println("foobar: " + record.key() + "::" + record.value());
actualClicksPerRegion
.add(new KeyValue<>(record.key(), record.value()));
}
}
while (count < expectedClicksPerRegion.size()
&& (System.currentTimeMillis() - start) < 30000);
// TODO: Matched count is 3 rather than 4 (expectedClicksPerRegion.size()) when running with the full suite. Investigate why.
// TODO: This behavior is only observed after upgrading Spring Kafka to 2.5.0 and the Kafka client to 2.5.
// TODO: Note that the test passes fine when run on its own.
assertThat(count).matches(
matchedCount -> matchedCount == expectedClicksPerRegion.size() - 1 || matchedCount == expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).containsAnyElementsOf(expectedClicksPerRegion);
}
finally {
consumer.close();
}
}
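For reference, the mechanism this test exercises comes down to two properties at different levels: the binder-wide Kafka Streams configuration sets auto.offset.reset=latest, while the binding-level startOffset=earliest overrides it for the input binding alone, which is why records produced before startup are still consumed. A minimal sketch isolating just those two arguments (everything else as in the test above):
// Hedged sketch: only the two competing offset settings.
String[] offsetArgs = {
    // applies to the whole Kafka Streams application
    "--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
    // narrower, binding-level setting; wins for the 'input' binding
    "--spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest"
};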
@Test
public void testTrivialSingleKTableInputAsNonDeclarative() {
SpringApplication app = new SpringApplication(
TrivialKTableApp.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.application-id=" +
"testTrivialSingleKTableInputAsNonDeclarative");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
public static class CountClicksPerRegionApplication {
@StreamListener
@SendTo("output")
public KStream<String, Long> process(
@Input("input") KStream<String, Long> userClicksStream,
@Input("input-x") KTable<String, String> userRegionsTable) {
return userClicksStream
.leftJoin(userRegionsTable,
(clicks, region) -> new RegionWithClicks(
region == null ? "UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(
regionWithClicks.getRegion(), regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream();
}
//This forces the state stores to be cleaned up before running the test.
@Bean
public CleanupConfig cleanupConfig() {
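// CleanupConfig(cleanupOnStart, cleanupOnStop): (true, false) wipes the local
// state stores when the streams application starts but keeps them on shutdown.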
return new CleanupConfig(true, false);
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@EnableAutoConfiguration
public static class TrivialKTableApp {
@StreamListener("input-y")
public void process(KTable<String, String> inputTable) {
inputTable.toStream().foreach((key, value) -> System.out.println("key : value " + key + " : " + value));
}
}
interface KafkaStreamsProcessorX extends KafkaStreamsProcessor {
@Input("input-x")
KTable<?, ?> inputX();
}
interface KafkaStreamsProcessorY {
@Input("input-y")
KTable<?, ?> inputY();
}
/**
* Tuple for a region and its associated number of clicks.
*/
private static final class RegionWithClicks {
private final String region;
private final long clicks;
RegionWithClicks(String region, long clicks) {
if (region == null || region.isEmpty()) {
throw new IllegalArgumentException("region must be set");
}
if (clicks < 0) {
throw new IllegalArgumentException("clicks must not be negative");
}
this.region = region;
this.clicks = clicks;
}
public String getRegion() {
return region;
}
public long getClicks() {
return clicks;
}
}
interface BindingsForTwoKStreamJoinTest {
String INPUT_1 = "input_1";
String INPUT_2 = "input_2";
@Input(INPUT_1)
KStream<String, String> input_1();
@Input(INPUT_2)
KStream<String, String> input_2();
}
@EnableBinding(BindingsForTwoKStreamJoinTest.class)
@EnableAutoConfiguration
public static class JoinProcessor {
@StreamListener
public void testProcessor(
@Input(BindingsForTwoKStreamJoinTest.INPUT_1) KStream<String, String> input1Stream,
@Input(BindingsForTwoKStreamJoinTest.INPUT_2) KStream<String, String> input2Stream) {
input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
Joined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
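The join above uses JoinWindows.of(long), the older millisecond-based overload; Kafka clients 2.1+ also offer a Duration-based factory that expresses the same window more readably. A sketch of the equivalent five-minute window, assuming java.time.Duration is imported:
// Same five-minute join window, using the Duration overload (Kafka clients 2.1+).
JoinWindows fiveMinuteWindow = JoinWindows.of(Duration.ofMinutes(5));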
}

View File

@@ -0,0 +1,236 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
*/
public class WordCountMultipleBranchesIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "foo", "bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupx",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "foo", "bar");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKStreamWordCountWithStringInputAndPojoOutput() throws Exception {
SpringApplication app = new SpringApplication(
WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
finally {
context.close();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context)
throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("english");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts");
assertThat(cr.value()).contains("\"word\":\"english\",\"count\":1");
template.sendDefault("french");
template.sendDefault("french");
cr = KafkaTestUtils.getSingleRecord(consumer, "foo");
assertThat(cr.value()).contains("\"word\":\"french\",\"count\":2");
template.sendDefault("spanish");
template.sendDefault("spanish");
template.sendDefault("spanish");
cr = KafkaTestUtils.getSingleRecord(consumer, "bar");
assertThat(cr.value()).contains("\"word\":\"spanish\",\"count\":3");
}
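The assertions above match substrings rather than whole payloads because the WordCount POJO is serialized to JSON on the outbound and the window start/end timestamps differ on every run. Roughly the expected payload shape (illustrative; the angle-bracket placeholders stand in for per-run values):
// Illustrative payload shape; start and end are the window boundary dates for that run.
String expectedShape = "{\"word\":\"english\",\"count\":1,\"start\":<start>,\"end\":<end>}";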
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@StreamListener("input")
@SendTo({ "output1", "output2", "output3" })
@SuppressWarnings("unchecked")
public KStream<?, WordCount>[] process(KStream<Object, String> input) {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input
.flatMapValues(
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value).windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
.count(Materialized.as("WordCounts-multi")).toStream()
.map((key, value) -> new KeyValue<>(null,
new WordCount(key.key(), value,
new Date(key.window().start()),
new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
}
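KStream#branch evaluates its predicates in order and returns one stream per predicate; the binder routes the i-th branch to the i-th destination listed in @SendTo. A standalone sketch of that ordering (countsStream is an illustrative name standing in for the windowed-count stream built above):
// branches[0] -> output1 (isEnglish), branches[1] -> output2 (isFrench),
// branches[2] -> output3 (isSpanish); records matching no predicate are dropped.
KStream<Object, WordCount>[] branches = countsStream.branch(isEnglish, isFrench, isSpanish);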
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
}

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.2.0-M2</version>
<version>3.1.4</version>
</parent>
<dependencies>
@@ -75,11 +75,6 @@
<artifactId>kafka_2.13</artifactId>
<classifier>test</classifier>
</dependency>
<dependency>
<groupId>org.awaitility</groupId>
<artifactId>awaitility</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>

View File

@@ -102,13 +102,11 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;
import org.springframework.kafka.listener.ConsumerProperties;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
@@ -732,17 +730,14 @@ public class KafkaMessageChannelBinder extends
kafkaMessageDrivenChannelAdapter.setApplicationContext(applicationContext);
ErrorInfrastructure errorInfrastructure = registerErrorInfrastructure(destination,
consumerGroup, extendedConsumerProperties);
if (!extendedConsumerProperties.isBatchMode()
&& extendedConsumerProperties.getMaxAttempts() > 1
&& transMan == null) {
kafkaMessageDrivenChannelAdapter
.setRetryTemplate(buildRetryTemplate(extendedConsumerProperties));
kafkaMessageDrivenChannelAdapter
.setRecoveryCallback(errorInfrastructure.getRecoverer());
if (!extendedConsumerProperties.getExtension().isEnableDlq()) {
messageListenerContainer.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(0L, 0L)));
}
}
else if (!extendedConsumerProperties.isBatchMode() && transMan != null) {
messageListenerContainer.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(
@@ -775,12 +770,6 @@ public class KafkaMessageChannelBinder extends
else {
kafkaMessageDrivenChannelAdapter.setErrorChannel(errorInfrastructure.getErrorChannel());
}
final String commonErrorHandlerBeanName = extendedConsumerProperties.getExtension().getCommonErrorHandlerBeanName();
if (StringUtils.hasText(commonErrorHandlerBeanName)) {
final CommonErrorHandler commonErrorHandler = getApplicationContext().getBean(commonErrorHandlerBeanName,
CommonErrorHandler.class);
messageListenerContainer.setCommonErrorHandler(commonErrorHandler);
}
this.getContainerCustomizer().configure(messageListenerContainer, destination.getName(), group);
this.ackModeInfo.put(destination, messageListenerContainer.getContainerProperties().getAckMode());
return kafkaMessageDrivenChannelAdapter;

View File

@@ -40,6 +40,10 @@ public class ExtendedBindingHandlerMappingsProviderConfiguration {
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.bindings"),
ConfigurationPropertyName.of("spring.cloud.stream.kafka.default"));
mappings.put(
ConfigurationPropertyName.of("spring.cloud.stream.kafka.streams"),
ConfigurationPropertyName
.of("spring.cloud.stream.kafka.streams.default"));
return mappings;
};
}

View File

@@ -22,6 +22,7 @@ import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
@@ -80,12 +81,22 @@ import org.springframework.messaging.converter.MessageConverter;
* @author Artem Bilan
* @author Aldo Sinanaj
*/
@Configuration(proxyBeanMethods = false)
@Configuration
@ConditionalOnMissingBean(Binder.class)
@Import({ KafkaAutoConfiguration.class, KafkaBinderHealthIndicatorConfiguration.class })
@EnableConfigurationProperties({ KafkaExtendedBindingProperties.class })
public class KafkaBinderConfiguration {
@Autowired
private KafkaExtendedBindingProperties kafkaExtendedBindingProperties;
@SuppressWarnings("rawtypes")
@Autowired
private ProducerListener producerListener;
@Autowired
private KafkaProperties kafkaProperties;
@Bean
KafkaBinderConfigurationProperties configurationProperties(
KafkaProperties kafkaProperties) {
@@ -95,12 +106,12 @@ public class KafkaBinderConfiguration {
@Bean
KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties configurationProperties,
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer, KafkaProperties kafkaProperties) {
ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
return new KafkaTopicProvisioner(configurationProperties,
kafkaProperties, adminClientConfigCustomizer.getIfUnique());
this.kafkaProperties, adminClientConfigCustomizer.getIfUnique());
}
@SuppressWarnings({"rawtypes", "unchecked"})
@SuppressWarnings("unchecked")
@Bean
KafkaMessageChannelBinder kafkaMessageChannelBinder(
KafkaBinderConfigurationProperties configurationProperties,
@@ -114,17 +125,16 @@ public class KafkaBinderConfiguration {
ObjectProvider<DlqDestinationResolver> dlqDestinationResolver,
ObjectProvider<ClientFactoryCustomizer> clientFactoryCustomizer,
ObjectProvider<ConsumerConfigCustomizer> consumerConfigCustomizer,
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer,
ProducerListener producerListener, KafkaExtendedBindingProperties kafkaExtendedBindingProperties
ObjectProvider<ProducerConfigCustomizer> producerConfigCustomizer
) {
KafkaMessageChannelBinder kafkaMessageChannelBinder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider,
listenerContainerCustomizer, sourceCustomizer, rebalanceListener.getIfUnique(),
dlqPartitionFunction.getIfUnique(), dlqDestinationResolver.getIfUnique());
kafkaMessageChannelBinder.setProducerListener(producerListener);
kafkaMessageChannelBinder.setProducerListener(this.producerListener);
kafkaMessageChannelBinder
.setExtendedBindingProperties(kafkaExtendedBindingProperties);
.setExtendedBindingProperties(this.kafkaExtendedBindingProperties);
kafkaMessageChannelBinder.setProducerMessageHandlerCustomizer(messageHandlerCustomizer);
kafkaMessageChannelBinder.setConsumerEndpointCustomizer(consumerCustomizer);
kafkaMessageChannelBinder.setClientFactoryCustomizer(clientFactoryCustomizer.getIfUnique());

View File

@@ -66,11 +66,12 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.assertj.core.api.Assertions;
import org.assertj.core.api.Condition;
import org.awaitility.Awaitility;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.cloud.stream.binder.Binder;
@@ -118,11 +119,8 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
@@ -130,10 +128,8 @@ import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionOffset;
import org.springframework.kafka.support.converter.BatchMessagingMessageConverter;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
@@ -148,7 +144,6 @@ import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeTypeUtils;
import org.springframework.util.backoff.FixedBackOff;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;
@@ -163,28 +158,29 @@ import static org.mockito.Mockito.mock;
* @author Henryk Konsek
* @author Gary Russell
*/
@EmbeddedKafka(count = 1, controlledShutdown = true, topics = "error.pollableDlq.group-pcWithDlq", brokerProperties = {"transaction.state.log.replication.factor=1",
"transaction.state.log.min.isr=1"})
public class KafkaBinderTests extends
// @checkstyle:off
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {
PartitionCapableBinderTests<AbstractKafkaTestBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {
// @checkstyle:on
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
@Rule
public ExpectedException expectedProvisioningException = ExpectedException.none();
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class
.getSimpleName();
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10,
"error.pollableDlq.group-pcWithDlq")
.brokerProperty("transaction.state.log.replication.factor", "1")
.brokerProperty("transaction.state.log.min.isr", "1");
private KafkaTestBinder binder;
private AdminClient adminClient;
private static EmbeddedKafkaBroker embeddedKafka;
@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();
}
@Override
protected ExtendedConsumerProperties<KafkaConsumerProperties> createConsumerProperties() {
final ExtendedConsumerProperties<KafkaConsumerProperties> kafkaConsumerProperties = new ExtendedConsumerProperties<>(
@@ -195,12 +191,8 @@ public class KafkaBinderTests extends
return kafkaConsumerProperties;
}
private ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties() {
return this.createProducerProperties(null);
}
@Override
protected ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties(TestInfo testInfo) {
protected ExtendedProducerProperties<KafkaProducerProperties> createProducerProperties() {
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
producerProperties.getExtension().setSync(true);
@@ -254,8 +246,8 @@ public class KafkaBinderTests extends
private KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties(
new TestKafkaProperties());
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -282,14 +274,15 @@ public class KafkaBinderTests extends
return KafkaHeaders.OFFSET;
}
@BeforeEach
@Before
public void init() {
String multiplier = System.getenv("KAFKA_TIMEOUT_MULTIPLIER");
if (multiplier != null) {
timeoutMultiplier = Double.parseDouble(multiplier);
}
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka()
.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -559,7 +552,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceiveNoOriginalContentType(TestInfo testInfo) throws Exception {
public void testSendAndReceiveNoOriginalContentType() throws Exception {
Binder binder = getBinder();
BindingProperties producerBindingProperties = createProducerBindingProperties(
@@ -609,7 +602,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceive(TestInfo testInfo) throws Exception {
public void testSendAndReceive() throws Exception {
Binder binder = getBinder();
BindingProperties outputBindingProperties = createProducerBindingProperties(
createProducerProperties());
@@ -735,6 +728,7 @@ public class KafkaBinderTests extends
@Test
@SuppressWarnings("unchecked")
@Ignore
public void testDlqWithNativeSerializationEnabledOnDlqProducer() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
@@ -793,10 +787,12 @@ public class KafkaBinderTests extends
.withPayload("foo").build();
moduleOutputChannel.send(message);
Message<?> receivedMessage = receive(dlqChannel, 5);
assertThat(receivedMessage).isNotNull();
assertThat(receivedMessage.getPayload()).isEqualTo("foo".getBytes());
Awaitility.await().until(() -> handler.getInvocationCount() == consumerProperties.getMaxAttempts());
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC))
.isEqualTo("foo.bar".getBytes(StandardCharsets.UTF_8));
@@ -1054,7 +1050,7 @@ public class KafkaBinderTests extends
AbstractMessageListenerContainer container = TestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer", AbstractMessageListenerContainer.class);
assertThat(container.getContainerProperties().getTopicPartitions().length)
assertThat(container.getContainerProperties().getTopicPartitionsToAssign().length)
.isEqualTo(4); // 2 topics 2 partitions each
if (transactional) {
assertThat(TestUtils.getPropertyValue(container.getAfterRollbackProcessor(), "kafkaTemplate")).isNotNull();
@@ -1065,7 +1061,7 @@ public class KafkaBinderTests extends
String dlqTopic = useDlqDestResolver ? "foo.dlq" : "error.dlqTest." + uniqueBindingId + ".0.testGroup";
try (AdminClient admin = AdminClient.create(Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getBrokersAsString()))) {
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
if (useDlqDestResolver) {
List<NewTopic> nonProvisionedDlqTopics = new ArrayList<>();
NewTopic nTopic = new NewTopic(dlqTopic, 3, (short) 1);
@@ -1302,113 +1298,6 @@ public class KafkaBinderTests extends
producerBinding.unbind();
}
@Test
@SuppressWarnings("unchecked")
public void testRetriesWithoutDlq() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);
DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);
DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));
FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);
String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);
Thread.sleep(3000);
// Since we don't have a DLQ, assert that the handler is invoked exactly the number of times
// set in consumerProperties.maxAttempts and not the default set by Spring Kafka (10 times).
assertThat(handler.getInvocationCount())
.isEqualTo(consumerProperties.getMaxAttempts());
binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}
@Test
@SuppressWarnings("unchecked")
public void testCommonErrorHandlerBeanNameOnConsumerBinding() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
BindingProperties producerBindingProperties = createProducerBindingProperties(
producerProperties);
DirectChannel moduleOutputChannel = createBindableChannel("output",
producerBindingProperties);
CountDownLatch latch = new CountDownLatch(1);
CommonErrorHandler commonErrorHandler = new DefaultErrorHandler(new FixedBackOff(0L, 0L)) {
@Override
public void handleRemaining(Exception thrownException, List<ConsumerRecord<?, ?>> records,
Consumer<?, ?> consumer, MessageListenerContainer container) {
super.handleRemaining(thrownException, records, consumer, container);
latch.countDown();
}
};
ConfigurableApplicationContext context = TestUtils.getPropertyValue(binder,
"binder.applicationContext", ConfigurableApplicationContext.class);
context.getBeanFactory().registerSingleton("fooCommonErrorHandler", commonErrorHandler);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setMaxAttempts(2);
consumerProperties.setBackOffInitialInterval(100);
consumerProperties.setBackOffMaxInterval(150);
consumerProperties.getExtension().setCommonErrorHandlerBeanName("fooCommonErrorHandler");
DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));
FailingInvocationCountingMessageHandler handler = new FailingInvocationCountingMessageHandler();
moduleInputChannel.subscribe(handler);
long uniqueBindingId = System.currentTimeMillis();
Binding<MessageChannel> producerBinding = binder.bindProducer(
"retryTest." + uniqueBindingId + ".0", moduleOutputChannel,
producerProperties);
Binding<MessageChannel> consumerBinding = binder.bindConsumer(
"retryTest." + uniqueBindingId + ".0", "testGroup", moduleInputChannel,
consumerProperties);
String testMessagePayload = "test." + UUID.randomUUID();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
moduleOutputChannel.send(testMessage);
Thread.sleep(3000);
//Assertions for the CommonErrorHandler configured on the consumer binding (commonErrorHandlerBeanName).
assertThat(KafkaTestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer.commonErrorHandler")).isSameAs(commonErrorHandler);
latch.await(10, TimeUnit.SECONDS);
binderBindUnbindLatency();
consumerBinding.unbind();
producerBinding.unbind();
}
//See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870 for motivation for this test.
@Test
@SuppressWarnings("unchecked")
@@ -1570,15 +1459,9 @@ public class KafkaBinderTests extends
producerBinding.unbind();
}
@Test
@Test(expected = IllegalArgumentException.class)
public void testValidateKafkaTopicName() {
try {
KafkaTopicUtils.validateTopicName("foo:bar");
fail("Expecting IllegalArgumentException");
}
catch (Exception e) {
// TODO: handle exception
}
KafkaTopicUtils.validateTopicName("foo:bar");
}
@Test
@@ -1716,7 +1599,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testSendAndReceiveMultipleTopics(TestInfo testInfo) throws Exception {
public void testSendAndReceiveMultipleTopics() throws Exception {
Binder binder = getBinder();
DirectChannel moduleOutputChannel1 = createBindableChannel("output1",
@@ -1877,7 +1760,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testTwoRequiredGroups(TestInfo testInfo) throws Exception {
public void testTwoRequiredGroups() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
@@ -1927,7 +1810,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testPartitionedModuleSpEL(TestInfo testInfo) throws Exception {
public void testPartitionedModuleSpEL() throws Exception {
Binder binder = getBinder();
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
@@ -2050,7 +1933,7 @@ public class KafkaBinderTests extends
}
@Test
// @Override
@Override
@SuppressWarnings({ "unchecked", "rawtypes" })
public void testPartitionedModuleJava() throws Exception {
Binder binder = getBinder();
@@ -2138,7 +2021,7 @@ public class KafkaBinderTests extends
@Test
@Override
@SuppressWarnings("unchecked")
public void testAnonymousGroup(TestInfo testInfo) throws Exception {
public void testAnonymousGroup() throws Exception {
Binder binder = getBinder();
BindingProperties producerBindingProperties = createProducerBindingProperties(
createProducerProperties());
@@ -3005,13 +2888,14 @@ public class KafkaBinderTests extends
consumerProperties.setInstanceCount(3);
consumerProperties.setInstanceIndex(2);
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
Assertions.assertThatThrownBy(() -> {
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
}).isInstanceOf(ProvisioningException.class);
expectedProvisioningException.expect(ProvisioningException.class);
expectedProvisioningException.expectMessage(
"The number of expected partitions was: 3, but 1 has been found instead");
Binding binding = binder.bindConsumer(testTopicName, "test", output,
consumerProperties);
if (binding != null) {
binding.unbind();
}
}
@Test
@@ -3043,7 +2927,7 @@ public class KafkaBinderTests extends
binding,
"lifecycle.messageListenerContainer.containerProperties",
ContainerProperties.class);
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitions();
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitionsToAssign();
assertThat(listenedPartitions).hasSize(2);
assertThat(listenedPartitions).contains(
new TopicPartitionOffset(testTopicName, 2),
@@ -3406,7 +3290,7 @@ public class KafkaBinderTests extends
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(
"testSendAndReceiveWithMixedMode", "false",
embeddedKafka);
embeddedKafka.getEmbeddedKafka());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
ByteArrayDeserializer.class);
@@ -3451,7 +3335,7 @@ public class KafkaBinderTests extends
"pollable,anotherOne", "group-polledConsumer", inboundBindTarget,
consumerProps);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka);
.producerProps(embeddedKafka.getEmbeddedKafka());
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollable", "testPollable");
@@ -3502,7 +3386,7 @@ public class KafkaBinderTests extends
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableRequeue", "group", inboundBindTarget, properties);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka);
.producerProps(embeddedKafka.getEmbeddedKafka());
KafkaTemplate template = new KafkaTemplate(
new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollableRequeue", "testPollable");
@@ -3538,7 +3422,7 @@ public class KafkaBinderTests extends
properties.setBackOffInitialInterval(0);
properties.getExtension().setEnableDlq(true);
Map<String, Object> producerProps = KafkaTestUtils
.producerProps(embeddedKafka);
.producerProps(embeddedKafka.getEmbeddedKafka());
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer(
"pollableDlq", "group-pcWithDlq", inboundBindTarget, properties);
KafkaTemplate template = new KafkaTemplate(
@@ -3557,11 +3441,11 @@ public class KafkaBinderTests extends
assertThat(e.getCause().getMessage()).isEqualTo("test DLQ");
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("dlq", "false",
embeddedKafka);
embeddedKafka.getEmbeddedKafka());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
ConsumerFactory cf = new DefaultKafkaConsumerFactory<>(consumerProps);
Consumer consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer,
embeddedKafka.getEmbeddedKafka().consumeFromAnEmbeddedTopic(consumer,
"error.pollableDlq.group-pcWithDlq");
ConsumerRecord deadLetter = KafkaTestUtils.getSingleRecord(consumer,
"error.pollableDlq.group-pcWithDlq");
@@ -3576,7 +3460,7 @@ public class KafkaBinderTests extends
public void testTopicPatterns() throws Exception {
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getBrokersAsString()))) {
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("topicPatterns.1", 1, (short) 1))).all()
.get();
@@ -3595,7 +3479,7 @@ public class KafkaBinderTests extends
"topicPatterns\\..*", "testTopicPatterns", moduleInputChannel,
consumerProperties);
DefaultKafkaProducerFactory pf = new DefaultKafkaProducerFactory(
KafkaTestUtils.producerProps(embeddedKafka));
KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka()));
KafkaTemplate template = new KafkaTemplate(pf);
template.send("topicPatterns.1", "foo");
assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
@@ -3605,12 +3489,11 @@ public class KafkaBinderTests extends
}
}
@Test
@Test(expected = TopicExistsException.class)
public void testSameTopicCannotBeProvisionedAgain() throws Throwable {
CountDownLatch latch = new CountDownLatch(1);
try (AdminClient admin = AdminClient.create(
Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getBrokersAsString()))) {
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
admin.createTopics(Collections
.singletonList(new NewTopic("fooUniqueTopic", 1, (short) 1))).all()
.get();
@@ -3618,13 +3501,11 @@ public class KafkaBinderTests extends
admin.createTopics(Collections
.singletonList(new NewTopic("fooUniqueTopic", 1, (short) 1)))
.all().get();
fail("Expecting TopicExistsException");
}
catch (Exception ex) {
assertThat(ex.getCause() instanceof TopicExistsException).isTrue();
latch.countDown();
throw ex.getCause();
}
latch.await(1, TimeUnit.SECONDS);
}
}
@@ -3826,7 +3707,7 @@ public class KafkaBinderTests extends
input.setBeanName(name + ".in");
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
Binding<MessageChannel> consumerBinding = binder.bindConsumer(name + ".0", name, input, consumerProperties);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
KafkaTemplate template = new KafkaTemplate(new DefaultKafkaProducerFactory<>(producerProps));
template.send(MessageBuilder.withPayload("internalHeaderPropagation")
.setHeader(KafkaHeaders.TOPIC, name + ".0")
@@ -3840,7 +3721,7 @@ public class KafkaBinderTests extends
output.send(consumed);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(name, "false",
embeddedKafka);
embeddedKafka.getEmbeddedKafka());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2021 the original author or authors.
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,28 +16,18 @@
package org.springframework.cloud.stream.binder.kafka.bootstrap;
import java.util.Map;
import java.util.function.Function;
import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatCode;
@@ -45,53 +35,20 @@ import static org.assertj.core.api.Assertions.assertThatCode;
/**
* @author Soby Chacko
*/
@EmbeddedKafka(count = 1, controlledShutdown = true, partitions = 10, topics = "outputTopic")
public class KafkaBinderMeterRegistryTest {
private static EmbeddedKafkaBroker embeddedKafka;
private static Consumer<String, String> consumer;
@BeforeAll
public static void setup() {
embeddedKafka = EmbeddedKafkaCondition.getBroker();
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "outputTopic");
}
@AfterAll
public static void tearDown() {
consumer.close();
}
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Test
public void testMetricsWithSingleBinder() throws Exception {
public void testMetricsWithSingleBinder() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.uppercase-in-0.destination=inputTopic",
"--spring.cloud.stream.bindings.uppercase-in-0.group=inputGroup",
"--spring.cloud.stream.bindings.uppercase-out-0.destination=outputTopic",
"--spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getBrokersAsString());
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");
// Forcing the retrieval of the data on the outbound so that the producer factory has
// a chance to add the micrometer listener properly. Only on the first send, binder's
// internal KafkaTemplate adds the Micrometer listener (using the producer factory).
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -111,22 +68,10 @@ public class KafkaBinderMeterRegistryTest {
"--spring.cloud.stream.binders.kafka2.type=kafka",
"--spring.cloud.stream.binders.kafka1.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getBrokersAsString(),
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
"--spring.cloud.stream.binders.kafka2.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getBrokersAsString());
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("inputTopic");
template.sendDefault("foo");
// Forcing the retrieval of the data on the outbound so that the producer factory has
// a chance to add the micrometer listener properly. Only on the first send, binder's
// internal KafkaTemplate adds the Micrometer listener (using the producer factory).
KafkaTestUtils.getSingleRecord(consumer, "outputTopic");
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);
assertMeterRegistry(meterRegistry);
@@ -142,10 +87,10 @@ public class KafkaBinderMeterRegistryTest {
.tag("topic", "inputTopic").gauge().value()).isNotNull();
// assert consumer metrics
assertThatCode(() -> meterRegistry.get("kafka.consumer.fetch.manager.fetch.total").meter()).doesNotThrowAnyException();
assertThatCode(() -> meterRegistry.get("kafka.consumer.connection.count").meter()).doesNotThrowAnyException();
// assert producer metrics
assertThatCode(() -> meterRegistry.get("kafka.producer.io.ratio").meter()).doesNotThrowAnyException();
assertThatCode(() -> meterRegistry.get("kafka.producer.connection.count").meter()).doesNotThrowAnyException();
}
@SpringBootApplication