Compare commits: v3.0.0.M4...v3.0.0.RC2 (52 commits)
| SHA1 |
|---|
| 464ce685bb |
| 54fa9a638d |
| fefd9a3bd6 |
| 8e26d5e170 |
| ac6bdc976e |
| 65386f6967 |
| 81c453086a |
| 0ddd9f8f64 |
| d0fe596a9e |
| 062bbc1cc3 |
| bc1936eb28 |
| e2f1092173 |
| 06e5739fbd |
| ad8e67fdc5 |
| a3fb4cc3b3 |
| 7f09baf72d |
| 28a02cda4f |
| f96a9f884c |
| a9020368e5 |
| ca9296dbd2 |
| e8d202404b |
| 5794fb983c |
| 1ce1d7918f |
| 1264434871 |
| e4fed0a52d |
| 05e2918bc0 |
| 6dadf0c104 |
| f4dcf5100c |
| b8eb41cb87 |
| 82cfd6d176 |
| 6866eef8b0 |
| b833a9f371 |
| 0283359d4a |
| cc2f8f6137 |
| 855334aaa3 |
| 9d708f836a |
| ecc8715b0c |
| 98431ed8a0 |
| c7fa1ce275 |
| cc8c645c5a |
| 21fe9c75c5 |
| 65dd706a6a |
| a02308a5a3 |
| ac75f8fecf |
| bf6a227f32 |
| 01daa4c0dd |
| 021943ec41 |
| daf4b47d1c |
| 2b1be3754d |
| db5a303431 |
| e549090787 |
| 7ff64098a3 |
README.adoc
@@ -39,7 +39,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----

Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:

[source,xml]
----
@@ -60,7 +60,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.

The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -155,14 +155,15 @@ Default: See individual producer properties.

spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.

[[kafka-consumer-properties]]
==== Kafka Consumer Properties

NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.


The following properties are available for Kafka consumers only and
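The header-mapper property described in this hunk can point at a custom mapper bean. A minimal sketch of such a bean follows; it assumes `BinderHeaderMapper` keeps the no-arg constructor and `addTrustedPackages` method of Spring Kafka's `DefaultKafkaHeaderMapper`, and the trusted package name is purely illustrative. Because the bean is named `kafkaBinderHeaderMapper`, the binder would pick it up even without setting `spring.cloud.stream.kafka.binder.headerMapperBeanName`.

[source, java]
----
import org.springframework.cloud.stream.binder.kafka.BinderHeaderMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HeaderMapperConfiguration {

    // "kafkaBinderHeaderMapper" is the fallback bean name the binder looks for
    // before creating its own default mapper.
    @Bean
    public BinderHeaderMapper kafkaBinderHeaderMapper() {
        BinderHeaderMapper mapper = new BinderHeaderMapper();
        // Trust an additional package when JSON-typed headers are deserialized
        // (package name is an assumption for the example).
        mapper.addTrustedPackages("com.example.events");
        return mapper;
    }
}
----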
@@ -227,9 +228,20 @@ The DLQ topic name can be configurable by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DqlPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to having Kafka consumer properties, other configuration properties can be passed here.
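Since this hunk enumerates the diagnostic headers (`x-original-topic`, `x-exception-message`, `x-exception-stacktrace`) added to dead-lettered records, a hedged sketch of a separate consumer application that inspects them is shown below; the function name, the binding destination, and the exact runtime types of the first two headers are illustrative assumptions rather than part of the binder contract.

[source, java]
----
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class DlqInspectorConfiguration {

    // Point this binding at the DLQ topic (default name is error.<destination>.<group>), e.g.:
    // spring.cloud.stream.bindings.dlqIn-in-0.destination=error.myDestination.myGroup
    @Bean
    public Consumer<Message<byte[]>> dlqIn() {
        return message -> {
            Object originalTopic = message.getHeaders().get("x-original-topic");
            Object exceptionMessage = message.getHeaders().get("x-exception-message");
            byte[] stackTrace = message.getHeaders().get("x-exception-stacktrace", byte[].class);
            System.out.println("Dead letter from " + originalTopic + ": " + exceptionMessage
                    + (stackTrace == null ? "" : " (" + stackTrace.length + " bytes of stack trace)"));
        };
    }
}
----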
@@ -304,7 +316,7 @@ Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/refer
[[kafka-producer-properties]]
==== Kafka Producer Properties

NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.


The following properties are available for Kafka producers only and
@@ -7,7 +7,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M4</version>
<version>3.0.0.RC2</version>
</parent>
<packaging>pom</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>
@@ -1,7 +1,34 @@
[[kafka-dlq-processing]]
=== Dead-Letter Topic Processing

Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
[[dlq-partition-selection]]
==== Dead-Letter Topic Partition Selection

By default, records are published to the Dead-Letter topic using the same partition as the original record.
This means the Dead-Letter topic must have at least as many partitions as the original record.

To change this behavior, add a `DlqPartitionFunction` implementation as a `@Bean` to the application context.
Only one such bean can be present.
The function is provided with the consumer group, the failed `ConsumerRecord` and the exception.
For example, if you always want to route to partition 0, you might use:

====
[source, java]
----
@Bean
public DlqPartitionFunction partitionFunction() {
    return (group, record, ex) -> 0;
}
----
====

NOTE: If you set a consumer binding's `dlqPartitions` property to 1 (and the binder's `minPartitionCount` is equal to `1`), there is no need to supply a `DlqPartitionFunction`; the framework will always use partition 0.
If you set a consumer binding's `dlqPartitions` property to a value greater than `1` (or the binder's `minPartitionCount` is greater than `1`), you **must** provide a `DlqPartitionFunction` bean, even if the partition count is the same as the original topic's.

[[dlq-handling]]
==== Handling Records in a Dead-Letter Topic

Because the framework cannot anticipate how users would want to dispose of dead-lettered messages, it does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a "`parking lot`" topic after three attempts.
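As a variation on the partition-0 example above, the following sketch (an illustration, not part of the framework) routes records whose failure was a Kafka `SerializationException` to partition 0 and keeps the original partition for everything else; a real policy may need to inspect the exception's cause chain, because the throwable handed to the function can be a wrapper.

[source, java]
----
@Bean
public DlqPartitionFunction partitionFunction() {
    return (group, record, ex) ->
            ex instanceof org.apache.kafka.common.errors.SerializationException
                    ? 0
                    : record.partition();
}
----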
File diff suppressed because it is too large
@@ -19,7 +19,7 @@ To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` a
</dependency>
----

Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown inn the following example for Maven:
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:

[source,xml]
----
@@ -40,7 +40,7 @@ The Apache Kafka Binder implementation maps each destination to an Apache Kafka
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions as well.

The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
@@ -135,14 +135,15 @@ Default: See individual producer properties.

spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.

[[kafka-consumer-properties]]
==== Kafka Consumer Properties

NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.


The following properties are available for Kafka consumers only and
@@ -207,9 +208,20 @@ The DLQ topic name can be configurable by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> processing for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DqlPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to having Kafka consumer properties, other configuration properties can be passed here.
@@ -284,7 +296,7 @@ Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/refer
[[kafka-producer-properties]]
==== Kafka Producer Properties

NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.


The following properties are available for Kafka producers only and
pom.xml
@@ -2,21 +2,21 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M4</version>
<version>3.0.0.RC2</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>2.2.0.M5</version>
<version>2.2.0.RC2</version>
<relativePath />
</parent>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.3.0.RC1</spring-kafka.version>
<spring-integration-kafka.version>3.2.0.RC1</spring-integration-kafka.version>
<kafka.version>2.3.0</kafka.version>
<spring-cloud-schema-registry.version>1.0.0.M1</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.0.0.M4</spring-cloud-stream.version>
<spring-kafka.version>2.3.2.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.2.1.RELEASE</spring-integration-kafka.version>
<kafka.version>2.3.1</kafka.version>
<spring-cloud-schema-registry.version>1.0.0.RC2</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.0.0.RC2</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
@@ -124,6 +124,10 @@
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M4</version>
<version>3.0.0.RC2</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>
@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M4</version>
<version>3.0.0.RC2</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>
@@ -1,39 +0,0 @@
/*
 * Copyright 2018-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.properties;

import java.util.Map;

/**
 * Properties for configuring topics.
 *
 * @author Gary Russell
 * @since 2.0
 * @deprecated in favor of {@link KafkaTopicProperties}
 */
@Deprecated
public class KafkaAdminProperties extends KafkaTopicProperties {

    public Map<String, String> getConfiguration() {
        return getProperties();
    }

    public void setConfiguration(Map<String, String> configuration) {
        setProperties(configuration);
    }

}
@@ -32,7 +32,6 @@ import org.apache.kafka.clients.producer.ProducerConfig;
|
||||
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.boot.context.properties.ConfigurationProperties;
|
||||
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
|
||||
import org.springframework.cloud.stream.binder.HeaderMode;
|
||||
import org.springframework.cloud.stream.binder.ProducerProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;
|
||||
@@ -64,8 +63,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
|
||||
private final KafkaProperties kafkaProperties;
|
||||
|
||||
private String[] zkNodes = new String[] { "localhost" };
|
||||
|
||||
/**
|
||||
* Arbitrary kafka properties that apply to both producers and consumers.
|
||||
*/
|
||||
@@ -81,48 +78,22 @@ public class KafkaBinderConfigurationProperties {
|
||||
*/
|
||||
private Map<String, String> producerProperties = new HashMap<>();
|
||||
|
||||
private String defaultZkPort = "2181";
|
||||
|
||||
private String[] brokers = new String[] { "localhost" };
|
||||
|
||||
private String defaultBrokerPort = "9092";
|
||||
|
||||
private String[] headers = new String[] {};
|
||||
|
||||
private int offsetUpdateTimeWindow = 10000;
|
||||
|
||||
private int offsetUpdateCount;
|
||||
|
||||
private int offsetUpdateShutdownTimeout = 2000;
|
||||
|
||||
private int maxWait = 100;
|
||||
|
||||
private boolean autoCreateTopics = true;
|
||||
|
||||
private boolean autoAddPartitions;
|
||||
|
||||
private int socketBufferSize = 2097152;
|
||||
|
||||
/**
|
||||
* ZK session timeout in milliseconds.
|
||||
*/
|
||||
private int zkSessionTimeout = 10000;
|
||||
|
||||
/**
|
||||
* ZK Connection timeout in milliseconds.
|
||||
*/
|
||||
private int zkConnectionTimeout = 10000;
|
||||
|
||||
private String requiredAcks = "1";
|
||||
|
||||
private short replicationFactor = 1;
|
||||
|
||||
private int fetchSize = 1024 * 1024;
|
||||
|
||||
private int minPartitionCount = 1;
|
||||
|
||||
private int queueSize = 8192;
|
||||
|
||||
/**
|
||||
* Time to wait to get partition information in seconds; default 60.
|
||||
*/
|
||||
@@ -149,17 +120,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
return this.transaction;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @return the connection String
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
@Deprecated
|
||||
public String getZkConnectionString() {
|
||||
return toConnectionString(this.zkNodes, this.defaultZkPort);
|
||||
}
|
||||
|
||||
public String getKafkaConnectionString() {
|
||||
return toConnectionString(this.brokers, this.defaultBrokerPort);
|
||||
}
|
||||
@@ -172,72 +132,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
return this.headers;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @return the window.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public int getOffsetUpdateTimeWindow() {
|
||||
return this.offsetUpdateTimeWindow;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @return the count.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public int getOffsetUpdateCount() {
|
||||
return this.offsetUpdateCount;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @return the timeout.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public int getOffsetUpdateShutdownTimeout() {
|
||||
return this.offsetUpdateShutdownTimeout;
|
||||
}
|
||||
|
||||
/**
|
||||
* Zookeeper nodes.
|
||||
* @return the nodes.
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
|
||||
public String[] getZkNodes() {
|
||||
return this.zkNodes;
|
||||
}
|
||||
|
||||
/**
|
||||
* Zookeeper nodes.
|
||||
* @param zkNodes the nodes.
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
|
||||
public void setZkNodes(String... zkNodes) {
|
||||
this.zkNodes = zkNodes;
|
||||
}
|
||||
|
||||
/**
|
||||
* Zookeeper port.
|
||||
* @param defaultZkPort the port.
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
|
||||
public void setDefaultZkPort(String defaultZkPort) {
|
||||
this.defaultZkPort = defaultZkPort;
|
||||
}
|
||||
|
||||
public String[] getBrokers() {
|
||||
return this.brokers;
|
||||
}
|
||||
@@ -254,83 +148,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
this.headers = headers;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @param offsetUpdateTimeWindow the window.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public void setOffsetUpdateTimeWindow(int offsetUpdateTimeWindow) {
|
||||
this.offsetUpdateTimeWindow = offsetUpdateTimeWindow;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @param offsetUpdateCount the count.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public void setOffsetUpdateCount(int offsetUpdateCount) {
|
||||
this.offsetUpdateCount = offsetUpdateCount;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @param offsetUpdateShutdownTimeout the timeout.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public void setOffsetUpdateShutdownTimeout(int offsetUpdateShutdownTimeout) {
|
||||
this.offsetUpdateShutdownTimeout = offsetUpdateShutdownTimeout;
|
||||
}
|
||||
|
||||
/**
|
||||
* Zookeeper session timeout.
|
||||
* @return the timeout.
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
|
||||
public int getZkSessionTimeout() {
|
||||
return this.zkSessionTimeout;
|
||||
}
|
||||
|
||||
/**
|
||||
* Zookeeper session timeout.
|
||||
* @param zkSessionTimeout the timout
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
|
||||
public void setZkSessionTimeout(int zkSessionTimeout) {
|
||||
this.zkSessionTimeout = zkSessionTimeout;
|
||||
}
|
||||
|
||||
/**
|
||||
* Zookeeper connection timeout.
|
||||
* @return the timout.
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
|
||||
public int getZkConnectionTimeout() {
|
||||
return this.zkConnectionTimeout;
|
||||
}
|
||||
|
||||
/**
|
||||
* Zookeeper connection timeout.
|
||||
* @param zkConnectionTimeout the timeout.
|
||||
* @deprecated connection to zookeeper is no longer necessary
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
|
||||
public void setZkConnectionTimeout(int zkConnectionTimeout) {
|
||||
this.zkConnectionTimeout = zkConnectionTimeout;
|
||||
}
|
||||
|
||||
/**
|
||||
* Converts an array of host values to a comma-separated String. It will append the
|
||||
* default port value, if not already specified.
|
||||
@@ -351,28 +168,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
return StringUtils.arrayToCommaDelimitedString(fullyFormattedHosts);
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @return the wait.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public int getMaxWait() {
|
||||
return this.maxWait;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer user.
|
||||
* @param maxWait the wait.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public void setMaxWait(int maxWait) {
|
||||
this.maxWait = maxWait;
|
||||
}
|
||||
|
||||
public String getRequiredAcks() {
|
||||
return this.requiredAcks;
|
||||
}
|
||||
@@ -389,28 +184,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
this.replicationFactor = replicationFactor;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @return the size.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public int getFetchSize() {
|
||||
return this.fetchSize;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @param fetchSize the size.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public void setFetchSize(int fetchSize) {
|
||||
this.fetchSize = fetchSize;
|
||||
}
|
||||
|
||||
public int getMinPartitionCount() {
|
||||
return this.minPartitionCount;
|
||||
}
|
||||
@@ -427,28 +200,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
this.healthTimeout = healthTimeout;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @return the queue size.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public int getQueueSize() {
|
||||
return this.queueSize;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used.
|
||||
* @param queueSize the queue size.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
|
||||
public void setQueueSize(int queueSize) {
|
||||
this.queueSize = queueSize;
|
||||
}
|
||||
|
||||
public boolean isAutoCreateTopics() {
|
||||
return this.autoCreateTopics;
|
||||
}
|
||||
@@ -465,30 +216,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
this.autoAddPartitions = autoAddPartitions;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used; set properties such as this via {@link #getConfiguration()
|
||||
* configuration}.
|
||||
* @return the size.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
|
||||
public int getSocketBufferSize() {
|
||||
return this.socketBufferSize;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used; set properties such as this via {@link #getConfiguration()
|
||||
* configuration}.
|
||||
* @param socketBufferSize the size.
|
||||
* @deprecated No longer used by the binder
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
|
||||
public void setSocketBufferSize(int socketBufferSize) {
|
||||
this.socketBufferSize = socketBufferSize;
|
||||
}
|
||||
|
||||
public Map<String, String> getConfiguration() {
|
||||
return this.configuration;
|
||||
}
|
||||
@@ -800,16 +527,6 @@ public class KafkaBinderConfigurationProperties {
|
||||
this.kafkaProducerProperties.setConfiguration(configuration);
|
||||
}
|
||||
|
||||
@SuppressWarnings("deprecation")
|
||||
public KafkaAdminProperties getAdmin() {
|
||||
return this.kafkaProducerProperties.getAdmin();
|
||||
}
|
||||
|
||||
@SuppressWarnings("deprecation")
|
||||
public void setAdmin(KafkaAdminProperties admin) {
|
||||
this.kafkaProducerProperties.setAdmin(admin);
|
||||
}
|
||||
|
||||
public KafkaTopicProperties getTopic() {
|
||||
return this.kafkaProducerProperties.getTopic();
|
||||
}
|
||||
|
||||
@@ -19,8 +19,6 @@ package org.springframework.cloud.stream.binder.kafka.properties;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
|
||||
|
||||
/**
|
||||
* Extended consumer properties for Kafka binder.
|
||||
*
|
||||
@@ -102,6 +100,8 @@ public class KafkaConsumerProperties {
|
||||
|
||||
private String dlqName;
|
||||
|
||||
private Integer dlqPartitions;
|
||||
|
||||
private KafkaProducerProperties dlqProducerProperties = new KafkaProducerProperties();
|
||||
|
||||
private int recoveryInterval = 5000;
|
||||
@@ -217,6 +217,14 @@ public class KafkaConsumerProperties {
|
||||
this.dlqName = dlqName;
|
||||
}
|
||||
|
||||
public Integer getDlqPartitions() {
|
||||
return this.dlqPartitions;
|
||||
}
|
||||
|
||||
public void setDlqPartitions(Integer dlqPartitions) {
|
||||
this.dlqPartitions = dlqPartitions;
|
||||
}
|
||||
|
||||
public String[] getTrustedPackages() {
|
||||
return this.trustedPackages;
|
||||
}
|
||||
@@ -265,29 +273,6 @@ public class KafkaConsumerProperties {
|
||||
this.destinationIsPattern = destinationIsPattern;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used; get properties such as this via {@link #getTopic()}.
|
||||
* @return Kafka admin properties
|
||||
* @deprecated No longer used
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
|
||||
@SuppressWarnings("deprecation")
|
||||
public KafkaAdminProperties getAdmin() {
|
||||
// Temporary workaround to copy the topic properties to the admin one.
|
||||
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
|
||||
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
|
||||
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
|
||||
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
|
||||
return kafkaAdminProperties;
|
||||
}
|
||||
|
||||
@Deprecated
|
||||
@SuppressWarnings("deprecation")
|
||||
public void setAdmin(KafkaAdminProperties admin) {
|
||||
this.topic = admin;
|
||||
}
|
||||
|
||||
public KafkaTopicProperties getTopic() {
|
||||
return this.topic;
|
||||
}
|
||||
|
||||
@@ -21,7 +21,6 @@ import java.util.Map;
|
||||
|
||||
import javax.validation.constraints.NotNull;
|
||||
|
||||
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
|
||||
import org.springframework.expression.Expression;
|
||||
|
||||
/**
|
||||
@@ -121,29 +120,6 @@ public class KafkaProducerProperties {
|
||||
this.configuration = configuration;
|
||||
}
|
||||
|
||||
/**
|
||||
* No longer used; get properties such as this via {@link #getTopic()}.
|
||||
* @return Kafka admin properties
|
||||
* @deprecated No longer used
|
||||
*/
|
||||
@Deprecated
|
||||
@DeprecatedConfigurationProperty(reason = "Not used since 2.1.1, set properties such as this via 'topic'")
|
||||
@SuppressWarnings("deprecation")
|
||||
public KafkaAdminProperties getAdmin() {
|
||||
// Temporary workaround to copy the topic properties to the admin one.
|
||||
final KafkaAdminProperties kafkaAdminProperties = new KafkaAdminProperties();
|
||||
kafkaAdminProperties.setReplicationFactor(this.topic.getReplicationFactor());
|
||||
kafkaAdminProperties.setReplicasAssignments(this.topic.getReplicasAssignments());
|
||||
kafkaAdminProperties.setConfiguration(this.topic.getProperties());
|
||||
return kafkaAdminProperties;
|
||||
}
|
||||
|
||||
@Deprecated
|
||||
@SuppressWarnings("deprecation")
|
||||
public void setAdmin(KafkaAdminProperties admin) {
|
||||
this.topic = admin;
|
||||
}
|
||||
|
||||
public KafkaTopicProperties getTopic() {
|
||||
return this.topic;
|
||||
}
|
||||
|
||||
@@ -81,9 +81,9 @@ public class KafkaTopicProvisioner implements
|
||||
// @checkstyle:on
|
||||
InitializingBean {
|
||||
|
||||
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
|
||||
private static final Log logger = LogFactory.getLog(KafkaTopicProvisioner.class);
|
||||
|
||||
private final Log logger = LogFactory.getLog(getClass());
|
||||
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
|
||||
|
||||
private final KafkaBinderConfigurationProperties configurationProperties;
|
||||
|
||||
@@ -242,7 +242,7 @@ public class KafkaTopicProvisioner implements
|
||||
* @param bootProps the boot kafka properties.
|
||||
* @param binderProps the binder kafka properties.
|
||||
*/
|
||||
private void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
|
||||
public static void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
|
||||
KafkaProperties bootProps, KafkaBinderConfigurationProperties binderProps) {
|
||||
// First deal with the outlier
|
||||
String kafkaConnectionString = binderProps.getKafkaConnectionString();
|
||||
@@ -263,8 +263,8 @@ public class KafkaTopicProvisioner implements
|
||||
}
|
||||
if (adminConfigNames.contains(key)) {
|
||||
Object replaced = adminProps.put(key, value);
|
||||
if (replaced != null && this.logger.isDebugEnabled()) {
|
||||
this.logger.debug("Overrode boot property: [" + key + "], from: ["
|
||||
if (replaced != null && KafkaTopicProvisioner.logger.isDebugEnabled()) {
|
||||
KafkaTopicProvisioner.logger.debug("Overrode boot property: [" + key + "], from: ["
|
||||
+ replaced + "] to: [" + value + "]");
|
||||
}
|
||||
}
|
||||
@@ -274,12 +274,16 @@ public class KafkaTopicProvisioner implements
|
||||
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name,
|
||||
String group, ExtendedConsumerProperties<KafkaConsumerProperties> properties,
|
||||
boolean anonymous, int partitions) {
|
||||
|
||||
if (properties.getExtension().isEnableDlq() && !anonymous) {
|
||||
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName())
|
||||
? properties.getExtension().getDlqName()
|
||||
: "error." + name + "." + group;
|
||||
int dlqPartitions = properties.getExtension().getDlqPartitions() == null
|
||||
? partitions
|
||||
: properties.getExtension().getDlqPartitions();
|
||||
try {
|
||||
createTopicAndPartitions(adminClient, dlqTopic, partitions,
|
||||
createTopicAndPartitions(adminClient, dlqTopic, dlqPartitions,
|
||||
properties.getExtension().isAutoRebalanceEnabled(),
|
||||
properties.getExtension().getTopic());
|
||||
}
|
||||
|
||||
@@ -0,0 +1,76 @@
/*
 * Copyright 2019-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.utils;

import org.apache.commons.logging.Log;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import org.springframework.lang.Nullable;

/**
 * A TriFunction that takes a consumer group, consumer record, and throwable and returns
 * which partition to publish to the dead letter topic. Returning {@code null} means Kafka
 * will choose the partition.
 *
 * @author Gary Russell
 * @since 3.0
 *
 */
@FunctionalInterface
public interface DlqPartitionFunction {

    /**
     * Returns the same partition as the original recor.
     */
    DlqPartitionFunction ORIGINAL_PARTITION = (group, rec, ex) -> rec.partition();

    /**
     * Returns 0.
     */
    DlqPartitionFunction PARTITION_ZERO = (group, rec, ex) -> 0;

    /**
     * Apply the function.
     * @param group the consumer group.
     * @param record the consumer record.
     * @param throwable the exception.
     * @return the DLQ partition, or null.
     */
    @Nullable
    Integer apply(String group, ConsumerRecord<?, ?> record, Throwable throwable);

    /**
     * Determine the fallback function to use based on the dlq partition count if no
     * {@link DlqPartitionFunction} bean is provided.
     * @param dlqPartitions the partition count.
     * @param logger the logger.
     * @return the fallback.
     */
    static DlqPartitionFunction determineFallbackFunction(@Nullable Integer dlqPartitions, Log logger) {
        if (dlqPartitions == null) {
            return ORIGINAL_PARTITION;
        }
        else if (dlqPartitions > 1) {
            logger.error("'dlqPartitions' is > 1 but a custom DlqPartitionFunction bean is not provided");
            return ORIGINAL_PARTITION;
        }
        else {
            return PARTITION_ZERO;
        }
    }

}
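For reference, one more hedged sketch of a custom implementation of the interface added above: it spreads dead letters across a four-partition DLQ by record key and returns `null` (letting Kafka choose) when the key is absent. The partition count and bean name are assumptions made for the example only.

[source, java]
----
@Bean
public DlqPartitionFunction dlqPartitionFunction() {
    // Keep records with the same key together in the DLQ; defer to Kafka otherwise.
    return (group, record, ex) -> record.key() == null
            ? null
            : Math.floorMod(record.key().hashCode(), 4);
}
----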
@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M4</version>
<version>3.0.0.RC2</version>
</parent>

<properties>
@@ -86,39 +86,39 @@
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<!-- Following dependencies are only provided for testing and won't be packaged with the binder apps-->
|
||||
<!-- <dependency>-->
|
||||
<!-- <groupId>org.springframework.cloud</groupId>-->
|
||||
<!-- <artifactId>spring-cloud-schema-registry-client</artifactId>-->
|
||||
<!-- <scope>test</scope>-->
|
||||
<!-- </dependency>-->
|
||||
<!-- <dependency>-->
|
||||
<!-- <groupId>org.apache.avro</groupId>-->
|
||||
<!-- <artifactId>avro</artifactId>-->
|
||||
<!-- <version>${avro.version}</version>-->
|
||||
<!-- <scope>provided</scope>-->
|
||||
<!-- </dependency>-->
|
||||
<dependency>
|
||||
<groupId>org.springframework.cloud</groupId>
|
||||
<artifactId>spring-cloud-schema-registry-client</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.avro</groupId>
|
||||
<artifactId>avro</artifactId>
|
||||
<version>${avro.version}</version>
|
||||
<scope>provided</scope>
|
||||
</dependency>
|
||||
</dependencies>
|
||||
|
||||
<build>
|
||||
<plugins>
|
||||
<!-- <plugin>-->
|
||||
<!-- <groupId>org.apache.avro</groupId>-->
|
||||
<!-- <artifactId>avro-maven-plugin</artifactId>-->
|
||||
<!-- <version>${avro.version}</version>-->
|
||||
<!-- <executions>-->
|
||||
<!-- <execution>-->
|
||||
<!-- <phase>generate-test-sources</phase>-->
|
||||
<!-- <goals>-->
|
||||
<!-- <goal>schema</goal>-->
|
||||
<!-- </goals>-->
|
||||
<!-- <configuration>-->
|
||||
<!-- <outputDirectory>${project.basedir}/target/generated-test-sources</outputDirectory>-->
|
||||
<!-- <testOutputDirectory>${project.basedir}/target/generated-test-sources</testOutputDirectory>-->
|
||||
<!-- <testSourceDirectory>${project.basedir}/src/test/resources/avro</testSourceDirectory>-->
|
||||
<!-- </configuration>-->
|
||||
<!-- </execution>-->
|
||||
<!-- </executions>-->
|
||||
<!-- </plugin>-->
|
||||
<plugin>
|
||||
<groupId>org.apache.avro</groupId>
|
||||
<artifactId>avro-maven-plugin</artifactId>
|
||||
<version>${avro.version}</version>
|
||||
<executions>
|
||||
<execution>
|
||||
<phase>generate-test-sources</phase>
|
||||
<goals>
|
||||
<goal>schema</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<outputDirectory>${project.basedir}/target/generated-test-sources</outputDirectory>
|
||||
<testOutputDirectory>${project.basedir}/target/generated-test-sources</testOutputDirectory>
|
||||
<testSourceDirectory>${project.basedir}/src/test/resources/avro</testSourceDirectory>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</project>
|
||||
@@ -18,7 +18,6 @@ package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Arrays;
|
||||
import java.util.Map;
|
||||
import java.util.Properties;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
@@ -33,6 +32,7 @@ import org.apache.kafka.streams.kstream.GlobalKTable;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.apache.kafka.streams.kstream.KTable;
|
||||
import org.apache.kafka.streams.kstream.Materialized;
|
||||
import org.apache.kafka.streams.processor.TimestampExtractor;
|
||||
import org.apache.kafka.streams.state.KeyValueStore;
|
||||
import org.apache.kafka.streams.state.StoreBuilder;
|
||||
|
||||
@@ -53,6 +53,7 @@ import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.core.ResolvableType;
|
||||
import org.springframework.kafka.config.KafkaStreamsConfiguration;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
|
||||
import org.springframework.kafka.core.CleanupConfig;
|
||||
import org.springframework.messaging.MessageHeaders;
|
||||
import org.springframework.messaging.support.MessageBuilder;
|
||||
@@ -124,7 +125,7 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
if (parameterType.isAssignableFrom(KTable.class)) {
|
||||
String materializedAs = extendedConsumerProperties.getMaterializedAs();
|
||||
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
|
||||
KTable<?, ?> table = getKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
|
||||
KTable<?, ?> table = getKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
|
||||
bindingDestination, autoOffsetReset);
|
||||
KTableBoundElementFactory.KTableWrapper kTableWrapper =
|
||||
(KTableBoundElementFactory.KTableWrapper) targetBean;
|
||||
@@ -136,7 +137,7 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
|
||||
String materializedAs = extendedConsumerProperties.getMaterializedAs();
|
||||
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
|
||||
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
|
||||
GlobalKTable<?, ?> table = getGlobalKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
|
||||
bindingDestination, autoOffsetReset);
|
||||
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
|
||||
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
|
||||
@@ -150,13 +151,33 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
@SuppressWarnings({"unchecked"})
|
||||
protected StreamsBuilderFactoryBean buildStreamsBuilderAndRetrieveConfig(String beanNamePostPrefix,
|
||||
ApplicationContext applicationContext, String inboundName,
|
||||
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties) {
|
||||
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
|
||||
StreamsBuilderFactoryBeanCustomizer customizer) {
|
||||
ConfigurableListableBeanFactory beanFactory = this.applicationContext
|
||||
.getBeanFactory();
|
||||
|
||||
Map<String, Object> streamConfigGlobalProperties = applicationContext
|
||||
.getBean("streamConfigGlobalProperties", Map.class);
|
||||
|
||||
if (kafkaStreamsBinderConfigurationProperties != null) {
|
||||
final Map<String, KafkaStreamsBinderConfigurationProperties.Functions> functionConfigMap = kafkaStreamsBinderConfigurationProperties.getFunctions();
|
||||
if (!CollectionUtils.isEmpty(functionConfigMap)) {
|
||||
final KafkaStreamsBinderConfigurationProperties.Functions functionConfig = functionConfigMap.get(beanNamePostPrefix);
|
||||
final Map<String, String> functionSpecificConfig = functionConfig.getConfiguration();
|
||||
if (!CollectionUtils.isEmpty(functionSpecificConfig)) {
|
||||
streamConfigGlobalProperties.putAll(functionSpecificConfig);
|
||||
}
|
||||
|
||||
String applicationId = functionConfig.getApplicationId();
|
||||
if (!StringUtils.isEmpty(applicationId)) {
|
||||
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
//this is only used primarily for StreamListener based processors. Although in theory, functions can use it,
|
||||
//it is ideal for functions to use the approach used in the above if statement by using a property like
|
||||
//spring.cloud.stream.kafka.streams.binder.functions.process.configuration.num.threads (assuming that process is the function name).
|
||||
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
|
||||
.getExtendedConsumerProperties(inboundName);
|
||||
streamConfigGlobalProperties
|
||||
@@ -165,17 +186,12 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
|
||||
// override application.id if set at the individual binding level.
|
||||
// We provide this for backward compatibility with StreamListener based processors.
|
||||
// For function based processors see the next else if conditional block
|
||||
// For function based processors see the approach used above
|
||||
// (i.e. use a property like spring.cloud.stream.kafka.streams.binder.functions.process.applicationId).
|
||||
if (StringUtils.hasText(bindingLevelApplicationId)) {
|
||||
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
|
||||
bindingLevelApplicationId);
|
||||
}
|
||||
else if (kafkaStreamsBinderConfigurationProperties != null && !CollectionUtils.isEmpty(kafkaStreamsBinderConfigurationProperties.getFunctions())) {
|
||||
String applicationId = kafkaStreamsBinderConfigurationProperties.getFunctions().get(beanNamePostPrefix + ".applicationId");
|
||||
if (!StringUtils.isEmpty(applicationId)) {
|
||||
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
|
||||
}
|
||||
}
|
||||
|
||||
//If the application id is not set by any mechanism, then generate it.
|
||||
streamConfigGlobalProperties.computeIfAbsent(StreamsConfig.APPLICATION_ID_CONFIG,
|
||||
@@ -196,23 +212,15 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
concurrency);
|
||||
}
|
||||
|
||||
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext
|
||||
.getBean("kafkaStreamsDlqDispatchers", Map.class);
|
||||
|
||||
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfigGlobalProperties) {
|
||||
@Override
|
||||
public Properties asProperties() {
|
||||
Properties properties = super.asProperties();
|
||||
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS,
|
||||
kafkaStreamsDlqDispatchers);
|
||||
return properties;
|
||||
}
|
||||
};
|
||||
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfigGlobalProperties);
|
||||
|
||||
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
|
||||
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
|
||||
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
|
||||
this.cleanupConfig);
|
||||
if (customizer != null) {
|
||||
customizer.configure(streamsBuilder);
|
||||
}
|
||||
streamsBuilder.setAutoStartup(false);
|
||||
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
|
||||
.genericBeanDefinition(
|
||||
@@ -222,6 +230,7 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
|
||||
"stream-builder-" + beanNamePostPrefix, streamsBuilderBeanDefinition);
|
||||
|
||||
extendedConsumerProperties.setApplicationId((String) streamConfigGlobalProperties.get(StreamsConfig.APPLICATION_ID_CONFIG));
|
||||
//Removing the application ID from global properties so that the next function won't re-use it and cause race conditions.
|
||||
streamConfigGlobalProperties.remove(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
|
||||
@@ -241,16 +250,17 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
}
|
||||
}
|
||||
|
||||
protected KStream<?, ?> getKStream(String inboundName, BindingProperties bindingProperties, StreamsBuilder streamsBuilder,
|
||||
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
|
||||
addStateStoreBeans(streamsBuilder);
|
||||
protected KStream<?, ?> getKStream(String inboundName, BindingProperties bindingProperties, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
|
||||
StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
|
||||
if (firstBuild) {
|
||||
addStateStoreBeans(streamsBuilder);
|
||||
}
|
||||
|
||||
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
|
||||
this.bindingServiceProperties.getBindingDestination(inboundName));
|
||||
|
||||
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
|
||||
KStream<?, ?> stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
|
||||
Consumed.with(keySerde, valueSerde)
|
||||
.withOffsetResetPolicy(autoOffsetReset));
|
||||
consumed);
|
||||
final boolean nativeDecoding = this.bindingServiceProperties
|
||||
.getConsumerProperties(inboundName).isUseNativeDecoding();
|
||||
if (nativeDecoding) {
|
||||
@@ -302,9 +312,11 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
}
|
||||
|
||||
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination, String storeName,
|
||||
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset) {
|
||||
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
|
||||
|
||||
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
|
||||
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
|
||||
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset), getMaterialized(storeName, k, v));
consumed, getMaterialized(storeName, k, v));
}

private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
@@ -315,32 +327,50 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application

private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset) {
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
consumed,
getMaterialized(storeName, k, v));
}

private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
private GlobalKTable<?, ?> getGlobalKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset)
materializedAs, keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
consumed);
}

private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
private KTable<?, ?> getKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset)
keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
consumed);
}

private <K, V> Consumed<K, V> getConsumed(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
Serde<K> keySerde, Serde<V> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
TimestampExtractor timestampExtractor = null;
if (!StringUtils.isEmpty(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName())) {
timestampExtractor = applicationContext.getBean(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName(),
TimestampExtractor.class);
}
final Consumed<K, V> consumed = Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset);
if (timestampExtractor != null) {
consumed.withTimestampExtractor(timestampExtractor);
}
return consumed;
}
}
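NOTE: The new `getConsumed` helper above resolves a `TimestampExtractor` bean by the name given in the consumer property `timestampExtractorBeanName` and applies it to the `Consumed` used for the input binding. A minimal sketch of supplying such a bean follows; the bean name and the exact property key are illustrative assumptions, not part of this changeset.

[source,java]
----
import org.apache.kafka.streams.processor.TimestampExtractor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TimestampExtractorConfig {

    // Hypothetical extractor bean; a binding would reference it with something like
    // spring.cloud.stream.kafka.streams.bindings.<binding>.consumer.timestampExtractorBeanName=wallclockExtractor
    @Bean
    public TimestampExtractor wallclockExtractor() {
        // Ignore the record timestamp and use wall-clock time instead.
        return (record, partitionTime) -> System.currentTimeMillis();
    }
}
----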
@@ -0,0 +1,72 @@
/*
 * Copyright 2019-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.streams;

import org.springframework.boot.context.properties.ConfigurationPropertiesBindHandlerAdvisor;
import org.springframework.boot.context.properties.bind.AbstractBindHandler;
import org.springframework.boot.context.properties.bind.BindContext;
import org.springframework.boot.context.properties.bind.BindHandler;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;

/**
 * {@link ConfigurationPropertiesBindHandlerAdvisor} to detect nativeEncoding/Decoding settings
 * provided by the application explicitly.
 *
 * @author Soby Chacko
 * @since 3.0.0
 */
public class EncodingDecodingBindAdviceHandler implements ConfigurationPropertiesBindHandlerAdvisor {

    private boolean encodingSettingProvided;
    private boolean decodingSettingProvided;

    public boolean isDecodingSettingProvided() {
        return decodingSettingProvided;
    }

    public boolean isEncodingSettingProvided() {
        return this.encodingSettingProvided;
    }

    @Override
    public BindHandler apply(BindHandler bindHandler) {
        BindHandler handler = new AbstractBindHandler(bindHandler) {
            @Override
            public <T> Bindable<T> onStart(ConfigurationPropertyName name,
                    Bindable<T> target, BindContext context) {
                final String configName = name.toString();
                if (configName.contains("use") && configName.contains("native") &&
                        (configName.contains("encoding") || configName.contains("decoding"))) {
                    BindResult<T> result = context.getBinder().bind(name, target);
                    if (result.isBound()) {
                        if (configName.contains("encoding")) {
                            EncodingDecodingBindAdviceHandler.this.encodingSettingProvided = true;
                        }
                        else {
                            EncodingDecodingBindAdviceHandler.this.decodingSettingProvided = true;
                        }
                        return target.withExistingValue(result.get());
                    }
                }
                return bindHandler.onStart(name, target, context);
            }
        };
        return handler;
    }
}
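NOTE: The advisor above records whether the application itself set a `useNativeEncoding`/`useNativeDecoding` binding property; the bound-element factories later in this changeset consult those flags before defaulting to native encoding/decoding, and the binder configuration registers the advisor as a bean. A small illustration of the relaxed property-name matching performed in `onStart`; the binding name is a made-up example.

[source,java]
----
// Mirrors the check in onStart() above for a typical binding property name.
String configName = "spring.cloud.stream.bindings.process-in-0.consumer.use-native-decoding";
boolean matches = configName.contains("use") && configName.contains("native")
        && (configName.contains("encoding") || configName.contains("decoding"));
System.out.println(matches); // true -> decodingSettingProvided would be recorded
----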
|
||||
@@ -16,8 +16,6 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.kafka.streams.kstream.GlobalKTable;
|
||||
|
||||
import org.springframework.cloud.stream.binder.AbstractBinder;
|
||||
@@ -54,8 +52,6 @@ public class GlobalKTableBinder extends
|
||||
|
||||
private final KafkaTopicProvisioner kafkaTopicProvisioner;
|
||||
|
||||
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
|
||||
|
||||
// @checkstyle:off
|
||||
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
|
||||
|
||||
@@ -63,11 +59,9 @@ public class GlobalKTableBinder extends
|
||||
|
||||
public GlobalKTableBinder(
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner,
|
||||
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner) {
|
||||
this.binderConfigurationProperties = binderConfigurationProperties;
|
||||
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
|
||||
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -76,12 +70,11 @@ public class GlobalKTableBinder extends
|
||||
String group, GlobalKTable<Object, Object> inputTarget,
|
||||
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
|
||||
if (!StringUtils.hasText(group)) {
|
||||
group = this.binderConfigurationProperties.getApplicationId();
|
||||
group = properties.getExtension().getApplicationId();
|
||||
}
|
||||
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
|
||||
getApplicationContext(), this.kafkaTopicProvisioner,
|
||||
this.binderConfigurationProperties, properties,
|
||||
this.kafkaStreamsDlqDispatchers);
|
||||
this.binderConfigurationProperties, properties);
|
||||
return new DefaultBinding<>(name, group, inputTarget, null);
|
||||
}
|
||||
|
||||
|
||||
@@ -23,7 +23,6 @@ import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.cloud.stream.annotation.BindingProvider;
|
||||
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
|
||||
@@ -39,7 +38,6 @@ import org.springframework.context.annotation.Import;
|
||||
* @since 2.1.0
|
||||
*/
|
||||
@Configuration
|
||||
@BindingProvider
|
||||
@Import({ KafkaAutoConfiguration.class,
|
||||
KafkaStreamsBinderHealthIndicatorConfiguration.class })
|
||||
public class GlobalKTableBinderConfiguration {
|
||||
@@ -56,9 +54,9 @@ public class GlobalKTableBinderConfiguration {
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner,
|
||||
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
|
||||
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
|
||||
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
|
||||
GlobalKTableBinder globalKTableBinder = new GlobalKTableBinder(binderConfigurationProperties,
|
||||
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
|
||||
kafkaTopicProvisioner);
|
||||
globalKTableBinder.setKafkaStreamsExtendedBindingProperties(
|
||||
kafkaStreamsExtendedBindingProperties);
|
||||
return globalKTableBinder;
|
||||
|
||||
@@ -40,10 +40,13 @@ public class GlobalKTableBoundElementFactory
|
||||
extends AbstractBindingTargetFactory<GlobalKTable> {
|
||||
|
||||
private final BindingServiceProperties bindingServiceProperties;
|
||||
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
|
||||
|
||||
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
|
||||
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
|
||||
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
|
||||
super(GlobalKTable.class);
|
||||
this.bindingServiceProperties = bindingServiceProperties;
|
||||
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -54,6 +57,11 @@ public class GlobalKTableBoundElementFactory
|
||||
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
|
||||
consumerProperties.setUseNativeDecoding(true);
|
||||
}
|
||||
else {
|
||||
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
|
||||
consumerProperties.setUseNativeDecoding(true);
|
||||
}
|
||||
}
|
||||
// Always set multiplex to true in the kafka streams binder
|
||||
consumerProperties.setMultiplex(true);
|
||||
|
||||
@@ -90,8 +98,9 @@ public class GlobalKTableBoundElementFactory
|
||||
|
||||
public void wrap(GlobalKTable<Object, Object> delegate) {
|
||||
Assert.notNull(delegate, "delegate cannot be null");
|
||||
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
|
||||
this.delegate = delegate;
|
||||
if (this.delegate == null) {
|
||||
this.delegate = delegate;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
|
||||
@@ -16,8 +16,10 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Iterator;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
import java.util.Set;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
@@ -84,19 +86,24 @@ public class InteractiveQueryService {
retryTemplate.setRetryPolicy(retryPolicy);

return retryTemplate.execute(context -> {
T store;
for (KafkaStreams kafkaStream : InteractiveQueryService.this.kafkaStreamsRegistry.getKafkaStreams()) {
T store = null;

final Set<KafkaStreams> kafkaStreams = InteractiveQueryService.this.kafkaStreamsRegistry.getKafkaStreams();
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
Throwable throwable = null;
while (iterator.hasNext()) {
try {
store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
}
store = iterator.next().store(storeName, storeType);
}
catch (InvalidStateStoreException e) {
LOG.warn("Error when retrieving state store: " + storeName, e);
// pass through..
throwable = e;
}
}
throw new IllegalStateException("Error when retrieving state store: " + storeName);
if (store != null) {
return store;
}
throw new IllegalStateException("Error when retrieving state store: " + storeName, throwable);
});
}
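NOTE: This retry loop backs the service's store lookup, iterating over every `KafkaStreams` instance registered with the binder until one of them can serve the requested store. A typical call site, assuming the binder's standard `InteractiveQueryService` API; the store name and types are illustrative.

[source,java]
----
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// interactiveQueryService is an injected
// org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService
ReadOnlyKeyValueStore<String, Long> store =
        interactiveQueryService.getQueryableStore("my-counts-store",
                QueryableStoreTypes.<String, Long>keyValueStore());
Long count = store.get("some-key"); // null if the key is not present
----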
|
||||
|
||||
|
||||
@@ -16,14 +16,13 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
import org.apache.kafka.common.serialization.Serde;
|
||||
import org.apache.kafka.common.serialization.Serdes;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.apache.kafka.streams.kstream.Produced;
|
||||
import org.apache.kafka.streams.processor.StreamPartitioner;
|
||||
|
||||
import org.springframework.aop.framework.Advised;
|
||||
import org.springframework.cloud.stream.binder.AbstractBinder;
|
||||
@@ -75,20 +74,16 @@ class KStreamBinder extends
|
||||
|
||||
private final KeyValueSerdeResolver keyValueSerdeResolver;
|
||||
|
||||
private final Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
|
||||
|
||||
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner,
|
||||
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
KeyValueSerdeResolver keyValueSerdeResolver,
|
||||
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
|
||||
KeyValueSerdeResolver keyValueSerdeResolver) {
|
||||
this.binderConfigurationProperties = binderConfigurationProperties;
|
||||
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
|
||||
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
|
||||
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
|
||||
this.keyValueSerdeResolver = keyValueSerdeResolver;
|
||||
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -102,12 +97,11 @@ class KStreamBinder extends
|
||||
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(delegate, properties.getExtension());
|
||||
|
||||
if (!StringUtils.hasText(group)) {
|
||||
group = this.binderConfigurationProperties.getApplicationId();
|
||||
group = properties.getExtension().getApplicationId();
|
||||
}
|
||||
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
|
||||
getApplicationContext(), this.kafkaTopicProvisioner,
|
||||
this.binderConfigurationProperties, properties,
|
||||
this.kafkaStreamsDlqDispatchers);
|
||||
this.binderConfigurationProperties, properties);
|
||||
|
||||
return new DefaultBinding<>(name, group, inputTarget, null);
|
||||
}
|
||||
@@ -117,10 +111,11 @@ class KStreamBinder extends
|
||||
protected Binding<KStream<Object, Object>> doBindProducer(String name,
|
||||
KStream<Object, Object> outboundBindTarget,
|
||||
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
|
||||
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<>(
|
||||
properties.getExtension());
|
||||
this.kafkaTopicProvisioner.provisionProducerDestination(name,
|
||||
extendedProducerProperties);
|
||||
|
||||
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties =
|
||||
(ExtendedProducerProperties) properties;
|
||||
|
||||
this.kafkaTopicProvisioner.provisionProducerDestination(name, extendedProducerProperties);
|
||||
Serde<?> keySerde = this.keyValueSerdeResolver
|
||||
.getOuboundKeySerde(properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable());
|
||||
LOG.info("Key Serde used for (outbound) " + name + ": " + keySerde.getClass().getName());
|
||||
@@ -133,28 +128,39 @@ class KStreamBinder extends
|
||||
else {
|
||||
valueSerde = Serdes.ByteArray();
|
||||
}
|
||||
LOG.info("Key Serde used for (outbound) " + name + ": " + valueSerde.getClass().getName());
|
||||
LOG.info("Value Serde used for (outbound) " + name + ": " + valueSerde.getClass().getName());
|
||||
|
||||
to(properties.isUseNativeEncoding(), name, outboundBindTarget,
|
||||
(Serde<Object>) keySerde, (Serde<Object>) valueSerde);
|
||||
(Serde<Object>) keySerde, (Serde<Object>) valueSerde, properties.getExtension());
|
||||
return new DefaultBinding<>(name, null, outboundBindTarget, null);
|
||||
}
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
private void to(boolean isNativeEncoding, String name,
|
||||
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
|
||||
Serde<Object> valueSerde) {
|
||||
KStream<Object, Object> outboundBindTarget, Serde<Object> keySerde,
|
||||
Serde<Object> valueSerde, KafkaStreamsProducerProperties properties) {
|
||||
final Produced<Object, Object> produced = Produced.with(keySerde, valueSerde);
|
||||
StreamPartitioner streamPartitioner = null;
|
||||
if (!StringUtils.isEmpty(properties.getStreamPartitionerBeanName())) {
|
||||
streamPartitioner = getApplicationContext().getBean(properties.getStreamPartitionerBeanName(),
|
||||
StreamPartitioner.class);
|
||||
}
|
||||
if (streamPartitioner != null) {
|
||||
produced.withStreamPartitioner(streamPartitioner);
|
||||
}
|
||||
if (!isNativeEncoding) {
|
||||
LOG.info("Native encoding is disabled for " + name
|
||||
+ ". Outbound message conversion done by Spring Cloud Stream.");
|
||||
outboundBindTarget.filter((k, v) -> v == null)
|
||||
.to(name, produced);
|
||||
this.kafkaStreamsMessageConversionDelegate
|
||||
.serializeOnOutbound(outboundBindTarget)
|
||||
.to(name, Produced.with(keySerde, valueSerde));
|
||||
.to(name, produced);
|
||||
}
|
||||
else {
|
||||
LOG.info("Native encoding is enabled for " + name
|
||||
+ ". Outbound serialization done at the broker.");
|
||||
outboundBindTarget.to(name, Produced.with(keySerde, valueSerde));
|
||||
outboundBindTarget.to(name, produced);
|
||||
}
|
||||
}
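NOTE: The `to(...)` method above looks up a `StreamPartitioner` bean by the name given in the producer property `streamPartitionerBeanName` and applies it to the `Produced` used for the outbound topic. A minimal sketch of supplying such a bean; the bean name, property key, and partitioning logic are illustrative assumptions.

[source,java]
----
import org.apache.kafka.streams.processor.StreamPartitioner;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PartitionerConfig {

    // A producer binding would reference it with something like
    // spring.cloud.stream.kafka.streams.bindings.<binding>.producer.streamPartitionerBeanName=keyHashPartitioner
    @Bean
    public StreamPartitioner<String, Object> keyHashPartitioner() {
        // Route purely on the record key; fall back to partition 0 for null keys.
        return (topic, key, value, numPartitions) ->
                key == null ? 0 : Math.abs(key.hashCode()) % numPartitions;
    }
}
----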
|
||||
|
||||
|
||||
@@ -16,14 +16,10 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
import org.springframework.beans.factory.annotation.Qualifier;
|
||||
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.cloud.stream.annotation.BindingProvider;
|
||||
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
|
||||
@@ -42,7 +38,6 @@ import org.springframework.context.annotation.Import;
|
||||
@Configuration
|
||||
@Import({ KafkaAutoConfiguration.class,
|
||||
KafkaStreamsBinderHealthIndicatorConfiguration.class })
|
||||
@BindingProvider
|
||||
public class KStreamBinderConfiguration {
|
||||
|
||||
@Bean
|
||||
@@ -60,12 +55,10 @@ public class KStreamBinderConfiguration {
|
||||
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
KeyValueSerdeResolver keyValueSerdeResolver,
|
||||
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
|
||||
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
|
||||
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
|
||||
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties,
|
||||
kafkaTopicProvisioner, KafkaStreamsMessageConversionDelegate,
|
||||
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver,
|
||||
kafkaStreamsDlqDispatchers);
|
||||
KafkaStreamsBindingInformationCatalogue, keyValueSerdeResolver);
|
||||
kStreamBinder.setKafkaStreamsExtendedBindingProperties(
|
||||
kafkaStreamsExtendedBindingProperties);
|
||||
return kStreamBinder;
|
||||
|
||||
@@ -43,12 +43,15 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
|
||||
private final BindingServiceProperties bindingServiceProperties;
|
||||
|
||||
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
|
||||
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
|
||||
|
||||
KStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
|
||||
super(KStream.class);
|
||||
this.bindingServiceProperties = bindingServiceProperties;
|
||||
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
|
||||
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -59,6 +62,11 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
|
||||
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
|
||||
consumerProperties.setUseNativeDecoding(true);
|
||||
}
|
||||
else {
|
||||
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
|
||||
consumerProperties.setUseNativeDecoding(true);
|
||||
}
|
||||
}
|
||||
// Always set multiplex to true in the kafka streams binder
|
||||
consumerProperties.setMultiplex(true);
|
||||
return createProxyForKStream(name);
|
||||
@@ -74,6 +82,11 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
|
||||
producerProperties = this.bindingServiceProperties.getProducerProperties(name);
|
||||
producerProperties.setUseNativeEncoding(true);
|
||||
}
|
||||
else {
|
||||
if (!encodingDecodingBindAdviceHandler.isEncodingSettingProvided()) {
|
||||
producerProperties.setUseNativeEncoding(true);
|
||||
}
|
||||
}
|
||||
return createProxyForKStream(name);
|
||||
}
|
||||
|
||||
@@ -109,8 +122,9 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
|
||||
|
||||
public void wrap(KStream<Object, Object> delegate) {
|
||||
Assert.notNull(delegate, "delegate cannot be null");
|
||||
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
|
||||
this.delegate = delegate;
|
||||
if (this.delegate == null) {
|
||||
this.delegate = delegate;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
|
||||
@@ -16,8 +16,6 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.kafka.streams.kstream.KTable;
|
||||
|
||||
import org.springframework.cloud.stream.binder.AbstractBinder;
|
||||
@@ -55,19 +53,15 @@ class KTableBinder extends
|
||||
|
||||
private final KafkaTopicProvisioner kafkaTopicProvisioner;
|
||||
|
||||
private Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers;
|
||||
|
||||
// @checkstyle:off
|
||||
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
|
||||
|
||||
// @checkstyle:on
|
||||
|
||||
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner,
|
||||
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner) {
|
||||
this.binderConfigurationProperties = binderConfigurationProperties;
|
||||
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
|
||||
this.kafkaStreamsDlqDispatchers = kafkaStreamsDlqDispatchers;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -78,12 +72,11 @@ class KTableBinder extends
|
||||
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
|
||||
// @checkstyle:on
|
||||
if (!StringUtils.hasText(group)) {
|
||||
group = this.binderConfigurationProperties.getApplicationId();
|
||||
group = properties.getExtension().getApplicationId();
|
||||
}
|
||||
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
|
||||
getApplicationContext(), this.kafkaTopicProvisioner,
|
||||
this.binderConfigurationProperties, properties,
|
||||
this.kafkaStreamsDlqDispatchers);
|
||||
this.binderConfigurationProperties, properties);
|
||||
return new DefaultBinding<>(name, group, inputTarget, null);
|
||||
}
|
||||
|
||||
|
||||
@@ -23,7 +23,6 @@ import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.cloud.stream.annotation.BindingProvider;
|
||||
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
|
||||
@@ -39,7 +38,6 @@ import org.springframework.context.annotation.Import;
|
||||
*/
|
||||
@SuppressWarnings("ALL")
|
||||
@Configuration
|
||||
@BindingProvider
|
||||
@Import({ KafkaAutoConfiguration.class,
|
||||
KafkaStreamsBinderHealthIndicatorConfiguration.class })
|
||||
public class KTableBinderConfiguration {
|
||||
@@ -56,9 +54,9 @@ public class KTableBinderConfiguration {
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner,
|
||||
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
|
||||
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
|
||||
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
|
||||
KTableBinder kTableBinder = new KTableBinder(binderConfigurationProperties,
|
||||
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
|
||||
kafkaTopicProvisioner);
|
||||
kTableBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
|
||||
return kTableBinder;
|
||||
}
|
||||
|
||||
@@ -38,10 +38,13 @@ import org.springframework.util.Assert;
|
||||
class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
|
||||
|
||||
private final BindingServiceProperties bindingServiceProperties;
|
||||
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
|
||||
|
||||
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
|
||||
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
|
||||
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
|
||||
super(KTable.class);
|
||||
this.bindingServiceProperties = bindingServiceProperties;
|
||||
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -52,6 +55,11 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
|
||||
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
|
||||
consumerProperties.setUseNativeDecoding(true);
|
||||
}
|
||||
else {
|
||||
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
|
||||
consumerProperties.setUseNativeDecoding(true);
|
||||
}
|
||||
}
|
||||
// Always set multiplex to true in the kafka streams binder
|
||||
consumerProperties.setMultiplex(true);
|
||||
|
||||
@@ -86,8 +94,9 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
|
||||
|
||||
public void wrap(KTable<Object, Object> delegate) {
|
||||
Assert.notNull(delegate, "delegate cannot be null");
|
||||
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
|
||||
this.delegate = delegate;
|
||||
if (this.delegate == null) {
|
||||
this.delegate = delegate;
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
|
||||
@@ -1,48 +0,0 @@
/*
 * Copyright 2017-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.streams;

import org.apache.kafka.streams.kstream.TimeWindows;

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * Application support configuration for Kafka Streams binder.
 *
 * @deprecated Features provided on this class can be directly configured in the application itself using Kafka Streams.
 * @author Soby Chacko
 */
@Configuration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
@Deprecated
public class KafkaStreamsApplicationSupportAutoConfiguration {

    @Bean
    @ConditionalOnProperty("spring.cloud.stream.kafka.streams.timeWindow.length")
    public TimeWindows configuredTimeWindow(
            KafkaStreamsApplicationSupportProperties processorProperties) {
        return processorProperties.getTimeWindow().getAdvanceBy() > 0
                ? TimeWindows.of(processorProperties.getTimeWindow().getLength())
                        .advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
                : TimeWindows.of(processorProperties.getTimeWindow().getLength());
    }

}
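NOTE: This deprecated auto-configuration (removed here) used to build a `TimeWindows` bean from the `spring.cloud.stream.kafka.streams.timeWindow.*` properties. As the deprecation note says, the same effect can be achieved by declaring the window directly in the application; a minimal sketch follows, with illustrative durations.

[source,java]
----
import java.time.Duration;

import org.apache.kafka.streams.kstream.TimeWindows;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WindowConfig {

    // Roughly equivalent to timeWindow.length=30000 and timeWindow.advanceBy=10000,
    // declared with the Kafka Streams API instead of the removed binder properties.
    @Bean
    public TimeWindows hoppingWindow() {
        return TimeWindows.of(Duration.ofSeconds(30)).advanceBy(Duration.ofSeconds(10));
    }
}
----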
|
||||
@@ -16,52 +16,143 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.time.Duration;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
import org.apache.kafka.clients.admin.AdminClient;
|
||||
import org.apache.kafka.clients.admin.ListTopicsResult;
|
||||
import org.apache.kafka.streams.KafkaStreams;
|
||||
import org.apache.kafka.streams.StreamsConfig;
|
||||
import org.apache.kafka.streams.processor.TaskMetadata;
|
||||
import org.apache.kafka.streams.processor.ThreadMetadata;
|
||||
|
||||
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
|
||||
import org.springframework.boot.actuate.health.Health;
|
||||
import org.springframework.boot.actuate.health.Status;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
|
||||
/**
|
||||
* Health indicator for Kafka Streams.
|
||||
*
|
||||
* @author Arnaud Jardiné
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator {
|
||||
|
||||
private final KafkaStreamsRegistry kafkaStreamsRegistry;
|
||||
private final Log logger = LogFactory.getLog(getClass());
|
||||
|
||||
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry) {
|
||||
private final KafkaStreamsRegistry kafkaStreamsRegistry;
|
||||
private final KafkaStreamsBinderConfigurationProperties configurationProperties;
|
||||
|
||||
private final Map<String, Object> adminClientProperties;
|
||||
|
||||
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
|
||||
|
||||
private static final ThreadLocal<Status> healthStatusThreadLocal = new ThreadLocal<>();
|
||||
|
||||
private AdminClient adminClient;
|
||||
|
||||
KafkaStreamsBinderHealthIndicator(KafkaStreamsRegistry kafkaStreamsRegistry,
|
||||
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
|
||||
KafkaProperties kafkaProperties,
|
||||
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
|
||||
super("Kafka-streams health check failed");
|
||||
kafkaProperties.buildAdminProperties();
|
||||
this.configurationProperties = kafkaStreamsBinderConfigurationProperties;
|
||||
this.adminClientProperties = kafkaProperties.buildAdminProperties();
|
||||
KafkaTopicProvisioner.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
|
||||
kafkaStreamsBinderConfigurationProperties);
|
||||
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
|
||||
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void doHealthCheck(Health.Builder builder) throws Exception {
|
||||
boolean up = true;
|
||||
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
|
||||
up &= kStream.state().isRunning();
|
||||
builder.withDetails(buildDetails(kStream));
|
||||
try {
|
||||
initAdminClient();
|
||||
synchronized (this.adminClient) {
|
||||
final Status status = healthStatusThreadLocal.get();
|
||||
// If one of the Kafka Streams binders (kstream, ktable, globalktable) was already marked down earlier in the same request,
// retrieve that status from the thread-local storage where it was saved. This avoids lengthening the overall
// health check: each Kafka Streams binder runs its own check, and if we already know the broker is DOWN,
// we simply pass that information along.
|
||||
if (status == Status.DOWN) {
|
||||
builder.withDetail("No topic information available", "Kafka broker is not reachable");
|
||||
builder.status(Status.DOWN);
|
||||
}
|
||||
else {
|
||||
final ListTopicsResult listTopicsResult = this.adminClient.listTopics();
|
||||
listTopicsResult.listings().get(this.configurationProperties.getHealthTimeout(), TimeUnit.SECONDS);
|
||||
|
||||
if (this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans().isEmpty()) {
|
||||
builder.withDetail("No Kafka Streams bindings have been established", "Kafka Streams binder did not detect any processors");
|
||||
builder.status(Status.UNKNOWN);
|
||||
}
|
||||
else {
|
||||
boolean up = true;
|
||||
for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
|
||||
up &= kStream.state().isRunning();
|
||||
builder.withDetails(buildDetails(kStream));
|
||||
}
|
||||
builder.status(up ? Status.UP : Status.DOWN);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
catch (Exception e) {
|
||||
builder.withDetail("No topic information available", "Kafka broker is not reachable");
|
||||
builder.status(Status.DOWN);
|
||||
builder.withException(e);
|
||||
//Store binder down status into a thread local storage.
|
||||
healthStatusThreadLocal.set(Status.DOWN);
|
||||
}
|
||||
finally {
|
||||
// Close admin client immediately.
|
||||
if (adminClient != null) {
|
||||
adminClient.close(Duration.ofSeconds(0));
|
||||
}
|
||||
}
|
||||
builder.status(up ? Status.UP : Status.DOWN);
|
||||
}
|
||||
|
||||
private static Map<String, Object> buildDetails(KafkaStreams kStreams) {
|
||||
private synchronized AdminClient initAdminClient() {
|
||||
if (this.adminClient == null) {
|
||||
this.adminClient = AdminClient.create(this.adminClientProperties);
|
||||
}
|
||||
return this.adminClient;
|
||||
}
|
||||
|
||||
private Map<String, Object> buildDetails(KafkaStreams kafkaStreams) {
|
||||
final Map<String, Object> details = new HashMap<>();
|
||||
if (kStreams.state().isRunning()) {
|
||||
for (ThreadMetadata metadata : kStreams.localThreadsMetadata()) {
|
||||
details.put("threadName", metadata.threadName());
|
||||
details.put("threadState", metadata.threadState());
|
||||
details.put("activeTasks", taskDetails(metadata.activeTasks()));
|
||||
details.put("standbyTasks", taskDetails(metadata.standbyTasks()));
|
||||
final Map<String, Object> perAppdIdDetails = new HashMap<>();
|
||||
|
||||
if (kafkaStreams.state().isRunning()) {
|
||||
for (ThreadMetadata metadata : kafkaStreams.localThreadsMetadata()) {
|
||||
perAppdIdDetails.put("threadName", metadata.threadName());
|
||||
perAppdIdDetails.put("threadState", metadata.threadState());
|
||||
perAppdIdDetails.put("adminClientId", metadata.adminClientId());
|
||||
perAppdIdDetails.put("consumerClientId", metadata.consumerClientId());
|
||||
perAppdIdDetails.put("restoreConsumerClientId", metadata.restoreConsumerClientId());
|
||||
perAppdIdDetails.put("producerClientIds", metadata.producerClientIds());
|
||||
perAppdIdDetails.put("activeTasks", taskDetails(metadata.activeTasks()));
|
||||
perAppdIdDetails.put("standbyTasks", taskDetails(metadata.standbyTasks()));
|
||||
}
|
||||
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
|
||||
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
details.put(applicationId, perAppdIdDetails);
|
||||
}
|
||||
else {
|
||||
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsRegistry.streamBuilderFactoryBean(kafkaStreams);
|
||||
final String applicationId = (String) streamsBuilderFactoryBean.getStreamsConfiguration().get(StreamsConfig.APPLICATION_ID_CONFIG);
|
||||
details.put(applicationId, String.format("The processor with application.id %s is down", applicationId));
|
||||
}
|
||||
return details;
|
||||
}
|
||||
|
||||
@@ -19,6 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
import org.springframework.boot.actuate.autoconfigure.health.ConditionalOnEnabledHealthIndicator;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
|
||||
import org.springframework.context.annotation.Bean;
|
||||
import org.springframework.context.annotation.Configuration;
|
||||
|
||||
@@ -35,8 +37,11 @@ class KafkaStreamsBinderHealthIndicatorConfiguration {
|
||||
@Bean
|
||||
@ConditionalOnBean(KafkaStreamsRegistry.class)
|
||||
KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry) {
|
||||
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry);
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry, KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
|
||||
KafkaProperties kafkaProperties, KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
|
||||
|
||||
return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry, kafkaStreamsBinderConfigurationProperties,
|
||||
kafkaProperties, kafkaStreamsBindingInformationCatalogue);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -0,0 +1,113 @@
/*
 * Copyright 2019-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.ToDoubleFunction;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;

import org.springframework.kafka.config.StreamsBuilderFactoryBean;

/**
 * Kafka Streams binder metrics implementation that exports the metrics available
 * through {@link KafkaStreams#metrics()} into a micrometer {@link io.micrometer.core.instrument.MeterRegistry}.
 *
 * @author Soby Chacko
 * @since 3.0.0
 */
public class KafkaStreamsBinderMetrics {

    private final MeterRegistry meterRegistry;

    private MeterBinder meterBinder;

    public KafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    public void bindTo(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans, MeterRegistry meterRegistry) {

        if (this.meterBinder == null) {
            this.meterBinder = new MeterBinder() {
                @Override
                @SuppressWarnings("unchecked")
                public void bindTo(MeterRegistry registry) {
                    if (streamsBuilderFactoryBeans != null) {
                        for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
                            KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
                            final Map<MetricName, ? extends Metric> metrics = kafkaStreams.metrics();

                            Set<String> meterNames = new HashSet<>();

                            for (Map.Entry<MetricName, ? extends Metric> metric : metrics.entrySet()) {
                                final String sanitized = sanitize(metric.getKey().group() + "." + metric.getKey().name());
                                final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);

                                final String name = streamsBuilderFactoryBeans.size() > 1 ? applicationId + "." + sanitized : sanitized;

                                final Gauge.Builder<KafkaStreamsBinderMetrics> builder =
                                        Gauge.builder(name, this,
                                                toDoubleFunction(metric.getValue()));
                                final Map<String, String> tags = metric.getKey().tags();
                                for (Map.Entry<String, String> tag : tags.entrySet()) {
                                    builder.tag(tag.getKey(), tag.getValue());
                                }
                                if (!meterNames.contains(name)) {
                                    builder.description(metric.getKey().description())
                                            .register(meterRegistry);
                                    meterNames.add(name);
                                }
                            }
                        }

                    }
                }

                ToDoubleFunction toDoubleFunction(Metric metric) {
                    return (o) -> {
                        if (metric.metricValue() instanceof Number) {
                            return (Double) metric.metricValue();
                        }
                        else {
                            return 0.0;
                        }
                    };
                }
            };
        }
        this.meterBinder.bindTo(this.meterRegistry);
    }

    private static String sanitize(String value) {
        return value.replaceAll("-", ".");
    }

    public void addMetrics(Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans) {
        synchronized (KafkaStreamsBinderMetrics.this) {
            this.bindTo(streamsBuilderFactoryBeans, this.meterRegistry);
        }
    }
}
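NOTE: The anonymous `MeterBinder` above registers one Micrometer gauge per Kafka Streams metric, naming it from the metric group and name with `-` replaced by `.`, and prefixing the processor's `application.id` when more than one processor is present. A small illustration of the resulting meter names, derived from the `sanitize` logic shown above; the group and metric names are just examples.

[source,java]
----
// Mirrors sanitize(group + "." + name) from the class above.
String group = "stream-metrics";
String name = "process-rate";
String sanitized = (group + "." + name).replaceAll("-", ".");
System.out.println(sanitized); // stream.metrics.process.rate

// With more than one processor in the application, the meter name would additionally
// be prefixed with that processor's application.id, e.g. "my-app-id." + sanitized.
----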
|
||||
@@ -23,6 +23,7 @@ import java.util.Map;
|
||||
import java.util.Properties;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import io.micrometer.core.instrument.MeterRegistry;
|
||||
import org.apache.kafka.common.serialization.Serdes;
|
||||
import org.apache.kafka.streams.StreamsConfig;
|
||||
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
|
||||
@@ -32,6 +33,7 @@ import org.springframework.beans.factory.ObjectProvider;
|
||||
import org.springframework.beans.factory.annotation.Qualifier;
|
||||
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.boot.context.properties.ConfigurationProperties;
|
||||
@@ -48,13 +50,20 @@ import org.springframework.cloud.stream.config.BinderProperties;
|
||||
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
|
||||
import org.springframework.cloud.stream.config.BindingServiceProperties;
|
||||
import org.springframework.cloud.stream.function.StreamFunctionProperties;
|
||||
import org.springframework.context.ApplicationContext;
|
||||
import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.context.annotation.Bean;
|
||||
import org.springframework.context.annotation.Conditional;
|
||||
import org.springframework.context.annotation.Configuration;
|
||||
import org.springframework.core.env.ConfigurableEnvironment;
|
||||
import org.springframework.core.env.Environment;
|
||||
import org.springframework.core.env.MapPropertySource;
|
||||
import org.springframework.integration.context.IntegrationContextUtils;
|
||||
import org.springframework.kafka.config.KafkaStreamsConfiguration;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
|
||||
import org.springframework.kafka.core.CleanupConfig;
|
||||
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
|
||||
import org.springframework.lang.Nullable;
|
||||
import org.springframework.messaging.converter.CompositeMessageConverter;
|
||||
import org.springframework.util.ObjectUtils;
|
||||
import org.springframework.util.StringUtils;
|
||||
@@ -66,6 +75,7 @@ import org.springframework.util.StringUtils;
|
||||
* @author Soby Chacko
|
||||
* @author Gary Russell
|
||||
*/
|
||||
@Configuration
|
||||
@EnableConfigurationProperties(KafkaStreamsExtendedBindingProperties.class)
|
||||
@ConditionalOnBean(BindingService.class)
|
||||
@AutoConfigureAfter(BindingServiceConfiguration.class)
|
||||
@@ -152,7 +162,8 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
|
||||
@Bean("streamConfigGlobalProperties")
|
||||
public Map<String, Object> streamConfigGlobalProperties(
|
||||
KafkaStreamsBinderConfigurationProperties configProperties,
|
||||
KafkaStreamsConfiguration kafkaStreamsConfiguration, ConfigurableEnvironment environment) {
|
||||
KafkaStreamsConfiguration kafkaStreamsConfiguration, ConfigurableEnvironment environment,
|
||||
SendToDlqAndContinue sendToDlqAndContinue) {
|
||||
|
||||
Properties properties = kafkaStreamsConfiguration.asProperties();
|
||||
|
||||
@@ -213,19 +224,20 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
|
||||
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
|
||||
properties.put(
|
||||
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
|
||||
LogAndContinueExceptionHandler.class.getName());
|
||||
LogAndContinueExceptionHandler.class);
|
||||
}
|
||||
else if (configProperties
|
||||
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
|
||||
properties.put(
|
||||
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
|
||||
LogAndFailExceptionHandler.class.getName());
|
||||
LogAndFailExceptionHandler.class);
|
||||
}
|
||||
else if (configProperties
|
||||
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
|
||||
properties.put(
|
||||
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
|
||||
SendToDlqAndContinue.class.getName());
|
||||
RecoveringDeserializationExceptionHandler.class);
|
||||
properties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
|
||||
}
|
||||
|
||||
if (!ObjectUtils.isEmpty(configProperties.getConfiguration())) {
|
||||
@@ -257,54 +269,59 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
|
||||
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
|
||||
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
|
||||
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
|
||||
ObjectProvider<CleanupConfig> cleanupConfig) {
|
||||
ObjectProvider<CleanupConfig> cleanupConfig,
|
||||
ObjectProvider<StreamsBuilderFactoryBeanCustomizer> customizerProvider) {
|
||||
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(
|
||||
bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
|
||||
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
|
||||
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters,
|
||||
cleanupConfig.getIfUnique());
|
||||
cleanupConfig.getIfUnique(), customizerProvider.getIfUnique());
|
||||
}
|
||||
|
||||
@Bean
|
||||
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(
|
||||
CompositeMessageConverter compositeMessageConverter,
|
||||
SendToDlqAndContinue sendToDlqAndContinue,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
|
||||
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
|
||||
CompositeMessageConverter compositeMessageConverter,
|
||||
SendToDlqAndContinue sendToDlqAndContinue,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
|
||||
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverter, sendToDlqAndContinue,
|
||||
KafkaStreamsBindingInformationCatalogue, binderConfigurationProperties);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public MessageConverterDelegateSerde messageConverterDelegateSerde(
|
||||
CompositeMessageConverter compositeMessageConverterFactory) {
|
||||
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
|
||||
CompositeMessageConverter compositeMessageConverterFactory) {
|
||||
return new MessageConverterDelegateSerde(compositeMessageConverterFactory);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public CompositeNonNativeSerde compositeNonNativeSerde(
|
||||
CompositeMessageConverter compositeMessageConverterFactory) {
|
||||
@Qualifier(IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
|
||||
CompositeMessageConverter compositeMessageConverterFactory) {
|
||||
return new CompositeNonNativeSerde(compositeMessageConverterFactory);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public KStreamBoundElementFactory kStreamBoundElementFactory(
|
||||
BindingServiceProperties bindingServiceProperties,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
|
||||
return new KStreamBoundElementFactory(bindingServiceProperties,
|
||||
KafkaStreamsBindingInformationCatalogue);
|
||||
KafkaStreamsBindingInformationCatalogue, encodingDecodingBindAdviceHandler);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public KTableBoundElementFactory kTableBoundElementFactory(
|
||||
BindingServiceProperties bindingServiceProperties) {
|
||||
return new KTableBoundElementFactory(bindingServiceProperties);
|
||||
BindingServiceProperties bindingServiceProperties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
|
||||
return new KTableBoundElementFactory(bindingServiceProperties, encodingDecodingBindAdviceHandler);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(
|
||||
BindingServiceProperties properties) {
|
||||
return new GlobalKTableBoundElementFactory(properties);
|
||||
BindingServiceProperties properties, EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
|
||||
return new GlobalKTableBoundElementFactory(properties, encodingDecodingBindAdviceHandler);
|
||||
}
|
||||
|
||||
@Bean
|
||||
@@ -342,13 +359,9 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
|
||||
@Bean
|
||||
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(
|
||||
KafkaStreamsBindingInformationCatalogue catalogue,
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry) {
|
||||
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry);
|
||||
}
|
||||
|
||||
@Bean("kafkaStreamsDlqDispatchers")
|
||||
public Map<String, KafkaStreamsDlqDispatch> dlqDispatchers() {
|
||||
return new HashMap<>();
|
||||
KafkaStreamsRegistry kafkaStreamsRegistry,
|
||||
@Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics) {
|
||||
return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics);
|
||||
}
|
||||
|
||||
@Bean
|
||||
@@ -360,10 +373,45 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
|
||||
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
|
||||
ObjectProvider<CleanupConfig> cleanupConfig,
|
||||
StreamFunctionProperties streamFunctionProperties,
|
||||
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties) {
|
||||
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
|
||||
ObjectProvider<StreamsBuilderFactoryBeanCustomizer> customizerProvider) {
|
||||
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
|
||||
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
|
||||
cleanupConfig.getIfUnique(), streamFunctionProperties, kafkaStreamsBinderConfigurationProperties);
|
||||
cleanupConfig.getIfUnique(), streamFunctionProperties, kafkaStreamsBinderConfigurationProperties,
|
||||
customizerProvider.getIfUnique());
|
||||
}
|
||||
|
||||
@Bean
|
||||
public EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler() {
|
||||
return new EncodingDecodingBindAdviceHandler();
|
||||
}
|
||||
|
||||
@Configuration
|
||||
@ConditionalOnMissingBean(value = KafkaStreamsBinderMetrics.class, name = "outerContext")
|
||||
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
|
||||
protected class KafkaStreamsBinderMetricsConfiguration {
|
||||
|
||||
@Bean
|
||||
@ConditionalOnBean(MeterRegistry.class)
|
||||
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
|
||||
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(MeterRegistry meterRegistry) {
|
||||
|
||||
return new KafkaStreamsBinderMetrics(meterRegistry);
|
||||
}
|
||||
}
|
||||
|
||||
@Configuration
|
||||
@ConditionalOnBean(name = "outerContext")
|
||||
@ConditionalOnMissingBean(KafkaStreamsBinderMetrics.class)
|
||||
@ConditionalOnClass(name = "io.micrometer.core.instrument.MeterRegistry")
|
||||
protected class KafkaStreamsBinderMetricsConfigurationWithMultiBinder {
|
||||
|
||||
@Bean
|
||||
public KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics(ConfigurableApplicationContext context) {
|
||||
|
||||
MeterRegistry meterRegistry = context.getBean("outerContext", ApplicationContext.class)
|
||||
.getBean(MeterRegistry.class);
|
||||
return new KafkaStreamsBinderMetrics(meterRegistry);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -16,26 +16,46 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.ApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;

/**
 * Common methods used by various Kafka Streams types across the binders.
 *
 * @author Soby Chacko
 * @author Gary Russell
 */
final class KafkaStreamsBinderUtils {

private static final Log LOGGER = LogFactory.getLog(KafkaStreamsBinderUtils.class);

private KafkaStreamsBinderUtils() {

}
@@ -43,10 +63,10 @@ final class KafkaStreamsBinderUtils {
static void prepareConsumerBinding(String name, String group,
ApplicationContext context, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {

ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties =
(ExtendedConsumerProperties) properties;
if (binderConfigurationProperties
.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
@@ -59,33 +79,86 @@ final class KafkaStreamsBinderUtils {
}

if (extendedConsumerProperties.getExtension().isEnableDlq()) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils

Map<String, DlqPartitionFunction> partitionFunctions =
context.getBeansOfType(DlqPartitionFunction.class, false, false);
boolean oneFunctionPresent = partitionFunctions.size() == 1;
Integer dlqPartitions = extendedConsumerProperties.getExtension().getDlqPartitions();
DlqPartitionFunction partitionFunction = oneFunctionPresent
? partitionFunctions.values().iterator().next()
: DlqPartitionFunction.determineFallbackFunction(dlqPartitions, LOGGER);

ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
extendedConsumerProperties.getExtension().getDlqProducerProperties()),
binderConfigurationProperties);
KafkaTemplate<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFactory);

BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver =
(cr, e) -> new TopicPartition(extendedConsumerProperties.getExtension().getDlqName(),
partitionFunction.apply(group, cr, e));
DeadLetterPublishingRecoverer kafkaStreamsBinderDlqRecoverer = !StringUtils
.isEmpty(extendedConsumerProperties.getExtension().getDlqName())
? new KafkaStreamsDlqDispatch(
extendedConsumerProperties.getExtension()
.getDlqName(),
binderConfigurationProperties,
extendedConsumerProperties.getExtension())
: null;
? new DeadLetterPublishingRecoverer(kafkaTemplate, destinationResolver)
: null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(
extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName,
binderConfigurationProperties,
extendedConsumerProperties.getExtension());
destinationResolver = (cr, e) -> new TopicPartition("error." + inputTopic + "." + group,
partitionFunction.apply(group, cr, e));
kafkaStreamsBinderDlqRecoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
destinationResolver);
}

SendToDlqAndContinue sendToDlqAndContinue = context
.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic,
kafkaStreamsDlqDispatch);

kafkaStreamsDlqDispatchers.put(inputTopic, kafkaStreamsDlqDispatch);
kafkaStreamsBinderDlqRecoverer);
}
}
}

private static DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);

return new DefaultKafkaProducerFactory<>(props);
}

static boolean supportsKStream(MethodParameter methodParameter, Class<?> targetBeanClass) {
return KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());

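As the lookup above shows, DLQ publishing now goes through Spring Kafka's `DeadLetterPublishingRecoverer`, and the target partition is resolved through a `DlqPartitionFunction` bean when exactly one is present (otherwise a fallback is derived from `dlqPartitions`). A hedged sketch of such a bean, assuming the `(group, record, exception) -> partition` contract used in the resolver above:

[source,java]
----
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqPartitionConfig {

	// With a single DlqPartitionFunction bean in the context, the binder uses it to
	// choose the DLQ partition for every failed record.
	@Bean
	public DlqPartitionFunction dlqPartitionFunction() {
		return (group, record, exception) -> 0; // e.g. send all DLQ records to partition 0
	}
}
----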
@@ -1,152 +0,0 @@
/*
 * Copyright 2018-2019 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.HashMap;
import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.ObjectUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

/**
 * Send records in error to a DLQ.
 *
 * @author Soby Chacko
 * @author Rafal Zukowski
 * @author Gary Russell
 */
class KafkaStreamsDlqDispatch {

private final Log logger = LogFactory.getLog(getClass());

private final KafkaTemplate<byte[], byte[]> kafkaTemplate;

private final String dlqName;

KafkaStreamsDlqDispatch(String dlqName,
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaConsumerProperties kafkaConsumerProperties) {
ProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(
kafkaConsumerProperties.getDlqProducerProperties()),
kafkaBinderConfigurationProperties);

this.kafkaTemplate = new KafkaTemplate<>(producerFactory);
this.dlqName = dlqName;
}

@SuppressWarnings("unchecked")
public void sendToDlq(byte[] key, byte[] value, int partittion) {
ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(this.dlqName,
partittion, key, value, null);

StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key))).append("'")
.append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(value))).append("'")
.append(" received from ").append(partittion);
ListenableFuture<SendResult<byte[], byte[]>> sentDlq = null;
try {
sentDlq = this.kafkaTemplate.send(producerRecord);
sentDlq.addCallback(
new ListenableFutureCallback<SendResult<byte[], byte[]>>() {

@Override
public void onFailure(Throwable ex) {
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}

@Override
public void onSuccess(SendResult<byte[], byte[]> result) {
if (KafkaStreamsDlqDispatch.this.logger.isDebugEnabled()) {
KafkaStreamsDlqDispatch.this.logger
.debug("Sent to DLQ " + sb.toString());
}
}
});
}
catch (Exception ex) {
if (sentDlq == null) {
KafkaStreamsDlqDispatch.this.logger
.error("Error sending to DLQ " + sb.toString(), ex);
}
}
}

private DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties
.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
// Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
ByteArraySerializer.class);

return new DefaultKafkaProducerFactory<>(props);
}

private String toDisplayString(String original) {
if (original.length() <= 50) {
return original;
}
return original.substring(0, 50) + "...";
}

}
@@ -35,6 +35,7 @@ import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

@@ -51,9 +52,11 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
@@ -77,6 +80,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
private BeanFactory beanFactory;
private StreamFunctionProperties streamFunctionProperties;
private KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties;
StreamsBuilderFactoryBeanCustomizer customizer;

public KafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@@ -85,7 +89,8 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
CleanupConfig cleanupConfig,
StreamFunctionProperties streamFunctionProperties,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties) {
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanCustomizer customizer) {
super(bindingServiceProperties, kafkaStreamsBindingInformationCatalogue, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
@@ -95,6 +100,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.streamFunctionProperties = streamFunctionProperties;
this.kafkaStreamsBinderConfigurationProperties = kafkaStreamsBinderConfigurationProperties;
this.customizer = customizer;
}

private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType,
@@ -259,7 +265,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
}

private List<String> getOutputBindings(String functionName, int outputs) {
List<String> outputBindings = this.streamFunctionProperties.getOutputBindings().get(functionName);
List<String> outputBindings = this.streamFunctionProperties.getOutputBindings(functionName);
List<String> outputBindingNames = new ArrayList<>();
if (!CollectionUtils.isEmpty(outputBindings)) {
outputBindingNames.addAll(outputBindings);
@@ -267,7 +273,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
}
else {
for (int i = 0; i < outputs; i++) {
outputBindingNames.add(String.format("%s_%s_%d", functionName, KafkaStreamsBindableProxyFactory.DEFAULT_OUTPUT_SUFFIX, i));
outputBindingNames.add(String.format("%s-%s-%d", functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX, i));
}
}
return outputBindingNames;
@@ -288,15 +294,18 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(functionName)) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext, input, kafkaStreamsBinderConfigurationProperties);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext,
input, kafkaStreamsBinderConfigurationProperties, customizer);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean =
this.methodStreamsBuilderFactoryBeanMap.get(functionName);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties =
this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(input);
extendedConsumerProperties.setApplicationId(applicationId);
//get state store spec

Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties, stringResolvableTypeMap.get(input));
@@ -307,13 +316,14 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
final Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(input, extendedConsumerProperties);

if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getKStream(input, bindingProperties, streamsBuilder, keySerde, valueSerde, autoOffsetReset);
KStream<?, ?> stream = getKStream(input, bindingProperties, extendedConsumerProperties,
streamsBuilder, keySerde, valueSerde, autoOffsetReset, i == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper =
(KStreamBoundElementFactory.KStreamWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);

this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);

if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
@@ -321,7 +331,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
(stringResolvableTypeMap.get(input).getGeneric(1).getRawClass() != null)
? (stringResolvableTypeMap.get(input).getGeneric(1).getRawClass()) : Object.class;
if (this.kafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(
(KStream) kStreamWrapper)) {
(KStream<?, ?>) kStreamWrapper)) {
arguments[i] = stream;
}
else {
@@ -352,13 +362,6 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
return arguments;
}

private KStream<?, ?> getkStream(String inboundName,
BindingProperties bindingProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
return getKStream(inboundName, bindingProperties, streamsBuilder, keySerde, valueSerde, autoOffsetReset);
}

@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;

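With `FunctionConstants.DEFAULT_OUTPUT_SUFFIX` and the `-` separator above, derived binding names now follow the `<function>-in-<index>` / `<function>-out-<index>` convention instead of the earlier underscore form. For an illustrative function bean named `process`, the bindings would be addressed as `process-in-0` and `process-out-0` in configuration:

[source,java]
----
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WordProcessingConfig {

	// Bindings derived for this bean: process-in-0 (input) and process-out-0 (output),
	// e.g. spring.cloud.stream.bindings.process-in-0.destination=words (illustrative).
	@Bean
	public Function<KStream<String, String>, KStream<String, String>> process() {
		return input -> input.mapValues(value -> value.toUpperCase());
	}
}
----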
@@ -22,6 +22,7 @@ import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
@@ -65,6 +66,8 @@ public class KafkaStreamsMessageConversionDelegate {

private final KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties;

Exception[] failedWithDeserException = new Exception[1];

KafkaStreamsMessageConversionDelegate(
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
@@ -88,18 +91,20 @@ public class KafkaStreamsMessageConversionDelegate {
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();

final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});
final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget
.filter((k, v) -> v != null)
.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});

kStreamWithEnrichedHeaders.process(() -> new Processor() {

@@ -200,6 +205,7 @@ public class KafkaStreamsMessageConversionDelegate {
"Deserialization has failed. This will be skipped from further processing.",
e);
// pass through
failedWithDeserException[0] = e;
}
return isValidRecord;
},
@@ -207,7 +213,7 @@ public class KafkaStreamsMessageConversionDelegate {
// in the first filter above.
(k, v) -> true);
// process errors from the second filter in the branch above.
processErrorFromDeserialization(bindingTarget, branch[1]);
processErrorFromDeserialization(bindingTarget, branch[1], failedWithDeserException);

// first branch above is the branch where the messages are converted, let it go
// through further processing.
@@ -264,7 +270,7 @@ public class KafkaStreamsMessageConversionDelegate {

@SuppressWarnings({ "unchecked", "rawtypes" })
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget,
KStream<?, ?> branch) {
KStream<?, ?> branch, Exception[] exception) {
branch.process(() -> new Processor() {
ProcessorContext context;

@@ -279,7 +285,6 @@ public class KafkaStreamsMessageConversionDelegate {
if (o2 != null) {
if (KafkaStreamsMessageConversionDelegate.this.kstreamBindingInformationCatalogue
.isDlqEnabled(bindingTarget)) {
String destination = this.context.topic();
if (o2 instanceof Message) {
Message message = (Message) o2;

@@ -288,15 +293,17 @@ public class KafkaStreamsMessageConversionDelegate {
Serializer keySerializer = keySerde.serializer();
byte[] keyBytes = keySerializer.serialize(null, o);

ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
keyBytes, message.getPayload());

KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(destination, keyBytes,
(byte[]) message.getPayload(),
this.context.partition());
.sendToDlq(consumerRecord, exception[0]);
}
else {
ConsumerRecord consumerRecord = new ConsumerRecord(this.context.topic(), this.context.partition(), this.context.offset(),
o, o2);
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(destination, (byte[]) o, (byte[]) o2,
this.context.partition());
.sendToDlq(consumerRecord, exception[0]);
}
}
else if (KafkaStreamsMessageConversionDelegate.this.kstreamBinderConfigurationProperties

@@ -16,11 +16,15 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.streams.KafkaStreams;

import org.springframework.kafka.config.StreamsBuilderFactoryBean;

/**
 * An internal registry for holding {@KafkaStreams} objects maintained through
 * {@link StreamsBuilderFactoryManager}.
@@ -29,6 +33,8 @@ import org.apache.kafka.streams.KafkaStreams;
 */
class KafkaStreamsRegistry {

private Map<KafkaStreams, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanMap = new HashMap<>();

private final Set<KafkaStreams> kafkaStreams = new HashSet<>();

Set<KafkaStreams> getKafkaStreams() {
@@ -37,10 +43,21 @@ class KafkaStreamsRegistry {

/**
 * Register the {@link KafkaStreams} object created in the application.
 * @param kafkaStreams {@link KafkaStreams} object created in the application
 * @param streamsBuilderFactoryBean {@link StreamsBuilderFactoryBean}
 */
void registerKafkaStreams(KafkaStreams kafkaStreams) {
void registerKafkaStreams(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
final KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
this.kafkaStreams.add(kafkaStreams);
this.streamsBuilderFactoryBeanMap.put(kafkaStreams, streamsBuilderFactoryBean);
}

/**
 *
 * @param kafkaStreams {@link KafkaStreams} object
 * @return Corresponding {@link StreamsBuilderFactoryBean}.
 */
StreamsBuilderFactoryBean streamBuilderFactoryBean(KafkaStreams kafkaStreams) {
return this.streamsBuilderFactoryBeanMap.get(kafkaStreams);
}

}

@@ -54,6 +54,7 @@ import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.util.Assert;
@@ -98,6 +99,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr

private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();

StreamsBuilderFactoryBeanCustomizer customizer;

KafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties extendedBindingProperties,
@@ -105,7 +108,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
KafkaStreamsBindingInformationCatalogue bindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> listenerResultAdapters,
CleanupConfig cleanupConfig) {
CleanupConfig cleanupConfig,
StreamsBuilderFactoryBeanCustomizer customizer) {
super(bindingServiceProperties, bindingInformationCatalogue, extendedBindingProperties, keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = extendedBindingProperties;
@@ -113,6 +117,7 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
this.kafkaStreamsBindingInformationCatalogue = bindingInformationCatalogue;
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = listenerResultAdapters;
this.customizer = customizer;
}

@Override
@@ -244,15 +249,17 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(method.getDeclaringClass().getSimpleName() + "-" + method.getName(),
applicationContext,
inboundName, null);
inboundName, null, customizer);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap
.get(method);
StreamsBuilder streamsBuilder = streamsBuilderFactoryBean.getObject();
final String applicationId = streamsBuilderFactoryBean.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
extendedConsumerProperties.setApplicationId(applicationId);
// get state store spec
KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);

@@ -268,8 +275,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr

if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(inboundName, spec,
bindingProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset);
bindingProperties, extendedConsumerProperties, streamsBuilder, keySerde, valueSerde,
autoOffsetReset, parameterIndex == 0);
KStreamBoundElementFactory.KStreamWrapper kStreamWrapper = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KStream)
@@ -359,9 +366,10 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr

private KStream<?, ?> getkStream(String inboundName,
KafkaStreamsStateStoreProperties storeSpec,
BindingProperties bindingProperties, StreamsBuilder streamsBuilder,
BindingProperties bindingProperties,
KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde,
Topology.AutoOffsetReset autoOffsetReset) {
Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (storeSpec != null) {
StoreBuilder storeBuilder = buildStateStore(storeSpec);
streamsBuilder.addStateStore(storeBuilder);
@@ -369,7 +377,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
}
return getKStream(inboundName, bindingProperties, streamsBuilder, keySerde, valueSerde, autoOffsetReset);
return getKStream(inboundName, bindingProperties, kafkaStreamsConsumerProperties, streamsBuilder,
keySerde, valueSerde, autoOffsetReset, firstBuild);
}

private void validateStreamListenerMethod(StreamListener streamListener,

@@ -16,109 +16,48 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.internals.ProcessorContextImpl;
import org.apache.kafka.streams.processor.internals.StreamTask;

import org.springframework.util.ReflectionUtils;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;

/**
 * Custom implementation for {@link DeserializationExceptionHandler} that sends the
 * records in error to a DLQ topic, then continue stream processing on new records.
 * Custom implementation for {@link ConsumerRecordRecoverer} that keeps a collection of
 * recoverer objects per input topics. These topics might be per input binding or multiplexed
 * topics in a single binding.
 *
 * @author Soby Chacko
 * @since 2.0.0
 */
public class SendToDlqAndContinue implements DeserializationExceptionHandler {

/**
 * Key used for DLQ dispatchers.
 */
public static final String KAFKA_STREAMS_DLQ_DISPATCHERS = "spring.cloud.stream.kafka.streams.dlq.dispatchers";
public class SendToDlqAndContinue implements ConsumerRecordRecoverer {

/**
 * DLQ dispatcher per topic in the application context. The key here is not the actual
 * DLQ topic but the incoming topic that caused the error.
 */
private Map<String, KafkaStreamsDlqDispatch> dlqDispatchers = new HashMap<>();
private Map<String, DeadLetterPublishingRecoverer> dlqDispatchers = new HashMap<>();

/**
 * For a given topic, send the key/value record to DLQ topic.
 * @param topic incoming topic that caused the error
 * @param key to send
 * @param value to send
 * @param partition for the topic where this record should be sent
 *
 * @param consumerRecord consumer record
 * @param exception exception
 */
public void sendToDlq(String topic, byte[] key, byte[] value, int partition) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers.get(topic);
kafkaStreamsDlqDispatch.sendToDlq(key, value, partition);
}

@Override
@SuppressWarnings("unchecked")
public DeserializationHandlerResponse handle(ProcessorContext context,
ConsumerRecord<byte[], byte[]> record, Exception exception) {
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = this.dlqDispatchers
.get(record.topic());
kafkaStreamsDlqDispatch.sendToDlq(record.key(), record.value(),
record.partition());
context.commit();

// The following conditional block should be reconsidered when we have a solution
// for this SO problem:
// https://stackoverflow.com/questions/48470899/kafka-streams-deserialization-handler
// Currently it seems like when deserialization error happens, there is no commits
// happening and the
// following code will use reflection to get access to the underlying
// KafkaConsumer.
// It works with Kafka 1.0.0, but there is no guarantee it will work in future
// versions of kafka as
// we access private fields by name using reflection, but it is a temporary fix.
if (context instanceof ProcessorContextImpl) {
ProcessorContextImpl processorContextImpl = (ProcessorContextImpl) context;
Field task = ReflectionUtils.findField(ProcessorContextImpl.class, "task");
ReflectionUtils.makeAccessible(task);
Object taskField = ReflectionUtils.getField(task, processorContextImpl);

if (taskField.getClass().isAssignableFrom(StreamTask.class)) {
StreamTask streamTask = (StreamTask) taskField;
Field consumer = ReflectionUtils.findField(StreamTask.class, "consumer");
ReflectionUtils.makeAccessible(consumer);
Object kafkaConsumerField = ReflectionUtils.getField(consumer,
streamTask);
if (kafkaConsumerField.getClass().isAssignableFrom(KafkaConsumer.class)) {
KafkaConsumer kafkaConsumer = (KafkaConsumer) kafkaConsumerField;
final Map<TopicPartition, OffsetAndMetadata> consumedOffsetsAndMetadata = new HashMap<>();
TopicPartition tp = new TopicPartition(record.topic(),
record.partition());
OffsetAndMetadata oam = new OffsetAndMetadata(record.offset() + 1);
consumedOffsetsAndMetadata.put(tp, oam);
kafkaConsumer.commitSync(consumedOffsetsAndMetadata);
}
}
}
return DeserializationHandlerResponse.CONTINUE;
}

@Override
@SuppressWarnings("unchecked")
public void configure(Map<String, ?> configs) {
this.dlqDispatchers = (Map<String, KafkaStreamsDlqDispatch>) configs
.get(KAFKA_STREAMS_DLQ_DISPATCHERS);
public void sendToDlq(ConsumerRecord<?, ?> consumerRecord, Exception exception) {
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch = this.dlqDispatchers.get(consumerRecord.topic());
kafkaStreamsDlqDispatch.accept(consumerRecord, exception);
}

void addKStreamDlqDispatch(String topic,
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch) {
DeadLetterPublishingRecoverer kafkaStreamsDlqDispatch) {
this.dlqDispatchers.put(topic, kafkaStreamsDlqDispatch);
}

@Override
public void accept(ConsumerRecord<?, ?> consumerRecord, Exception e) {
this.dlqDispatchers.get(consumerRecord.topic()).accept(consumerRecord, e);
}
}

@@ -40,14 +40,15 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;

private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics;

private volatile boolean running;

StreamsBuilderFactoryManager(
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry, KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics) {
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
}

@Override
@@ -71,8 +72,10 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
.getStreamsBuilderFactoryBeans();
for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
streamsBuilderFactoryBean.start();
this.kafkaStreamsRegistry.registerKafkaStreams(
streamsBuilderFactoryBean.getKafkaStreams());
this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
}
if (this.kafkaStreamsBinderMetrics != null) {
this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
}
this.running = true;
}

@@ -27,8 +27,6 @@ import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
@@ -41,8 +39,8 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binding.AbstractBindableProxyFactory;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.BoundTargetHolder;
import org.springframework.cloud.stream.function.FunctionConstants;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.Assert;
@@ -69,15 +67,6 @@ import org.springframework.util.CollectionUtils;
 */
public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory implements InitializingBean, BeanFactoryAware {

/**
 * Default output binding name. Output binding may occur later on in the function invoker (outside of this class),
 * thus making this field part of the API.
 */
public static final String DEFAULT_OUTPUT_SUFFIX = "out";
private static final String DEFAULT_INPUT_SUFFIX = "in";

private static Log log = LogFactory.getLog(BindableProxyFactory.class);

@Autowired
private StreamFunctionProperties streamFunctionProperties;

@@ -129,7 +118,7 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFacto
// if the type is array, we need to do a late binding as we don't know the number of
// output bindings at this point in the flow.

List<String> outputBindings = streamFunctionProperties.getOutputBindings().get(this.functionName);
List<String> outputBindings = streamFunctionProperties.getOutputBindings(this.functionName);
String outputBinding = null;

if (!CollectionUtils.isEmpty(outputBindings)) {
@@ -140,7 +129,7 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFacto

}
else {
outputBinding = String.format("%s_%s", this.functionName, DEFAULT_OUTPUT_SUFFIX);
outputBinding = String.format("%s-%s-0", this.functionName, FunctionConstants.DEFAULT_OUTPUT_SUFFIX);
}
Assert.isTrue(outputBinding != null, "output binding is not inferred.");
KafkaStreamsBindableProxyFactory.this.outputHolders.put(outputBinding,
@@ -168,7 +157,7 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFacto
 */
private List<String> buildInputBindings() {
List<String> inputs = new ArrayList<>();
List<String> inputBindings = streamFunctionProperties.getInputBindings().get(this.functionName);
List<String> inputBindings = streamFunctionProperties.getInputBindings(this.functionName);
if (!CollectionUtils.isEmpty(inputBindings)) {
inputs.addAll(inputBindings);
return inputs;
@@ -176,17 +165,12 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFacto
int numberOfInputs = this.type.getRawClass() != null &&
(this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
this.type.getRawClass().isAssignableFrom(BiConsumer.class)) ? 2 : getNumberOfInputs();
if (numberOfInputs == 1) {
inputs.add(String.format("%s_%s", this.functionName, DEFAULT_INPUT_SUFFIX));
return inputs;
}
else {
int i = 0;
while (i < numberOfInputs) {
inputs.add(String.format("%s_%s_%d", this.functionName, DEFAULT_INPUT_SUFFIX, i++));
}
return inputs;
int i = 0;
while (i < numberOfInputs) {
inputs.add(String.format("%s-%s-%d", this.functionName, FunctionConstants.DEFAULT_INPUT_SUFFIX, i++));
}
return inputs;

}

private int getNumberOfInputs() {

@@ -16,13 +16,8 @@

package org.springframework.cloud.stream.binder.kafka.streams.function;

import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.boot.autoconfigure.AutoConfigureBefore;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
import org.springframework.cloud.stream.config.BinderFactoryAutoConfiguration;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
@@ -34,7 +29,6 @@ import org.springframework.context.annotation.Configuration;
 */
@Configuration
@EnableConfigurationProperties(StreamFunctionProperties.class)
@AutoConfigureBefore(BinderFactoryAutoConfiguration.class)
public class KafkaStreamsFunctionAutoConfiguration {

@Bean
@@ -49,25 +43,7 @@ public class KafkaStreamsFunctionAutoConfiguration {

@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor() {
return new KafkaStreamsFunctionBeanPostProcessor();
}

@Bean
@Conditional(FunctionDetectorCondition.class)
public static BeanFactoryPostProcessor implicitFunctionKafkaStreamsBinder(KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor) {
return beanFactory -> {
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;

for (String s : kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes().keySet()) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes().get(s));
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(s);
registry.registerBeanDefinition("kafkaStreamsBindableProxyFactory-" + s, rootBeanDefinition);
}
};
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
return new KafkaStreamsFunctionBeanPostProcessor(streamFunctionProperties);
}
}

@@ -18,6 +18,7 @@ package org.springframework.cloud.stream.binder.kafka.streams.function;

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.TreeMap;
@@ -25,10 +26,14 @@ import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
@@ -36,6 +41,9 @@ import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.ClassUtils;

@@ -49,9 +57,18 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,

private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionBeanPostProcessor.class);

private static final String[] EXCLUDE_FUNCTIONS = new String[]{"functionRouter", "sendToDlqAndContinue"};

private ConfigurableListableBeanFactory beanFactory;
private boolean onlySingleFunction;
private Map<String, ResolvableType> resolvableTypeMap = new TreeMap<>();

private final StreamFunctionProperties streamFunctionProperties;

public KafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
this.streamFunctionProperties = streamFunctionProperties;
}

public Map<String, ResolvableType> getResolvableTypes() {
return this.resolvableTypeMap;
}
@@ -63,10 +80,26 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
String[] consumerNames = this.beanFactory.getBeanNamesForType(Consumer.class);
String[] biConsumerNames = this.beanFactory.getBeanNamesForType(BiConsumer.class);

Stream.concat(
final Stream<String> concat = Stream.concat(
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)),
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)))
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)));
final List<String> collect = concat.collect(Collectors.toList());
collect.removeIf(s -> Arrays.stream(EXCLUDE_FUNCTIONS).anyMatch(t -> t.equals(s)));
onlySingleFunction = collect.size() == 1;
collect.stream()
.forEach(this::extractResolvableTypes);

BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;

for (String s : getResolvableTypes().keySet()) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(getResolvableTypes().get(s));
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(s);
registry.registerBeanDefinition("kafkaStreamsBindableProxyFactory-" + s, rootBeanDefinition);
}
}

private void extractResolvableTypes(String key) {
@@ -80,11 +113,25 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean,
if (kafkaStreamMethod.isPresent()) {
Method method = kafkaStreamMethod.get();
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
resolvableTypeMap.put(key, resolvableType);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
if (onlySingleFunction) {
resolvableTypeMap.put(key, resolvableType);
}
else {
final String definition = streamFunctionProperties.getDefinition();
if (definition == null) {
throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
}
else if (definition.contains(key)) {
resolvableTypeMap.put(key, resolvableType);
}
}
}
}
}
catch (Exception e) {
LOG.error("Function not found: " + key, e);
LOG.error("Function activation issues while mapping the function: " + key, e);
}
}

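The checks above imply that when more than one candidate function bean is present, the function definition property must name the ones to bind; otherwise the `IllegalStateException` above is thrown. A sketch with two candidate beans (illustrative names; the exact property value, e.g. `spring.cloud.stream.function.definition=uppercase;audit`, is an assumption about the separator):

[source,java]
----
import java.util.function.Consumer;
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MultipleFunctionsConfig {

	// Two candidates: without a function definition naming them, the post processor
	// above cannot decide which beans to bind.
	@Bean
	public Function<KStream<String, String>, KStream<String, String>> uppercase() {
		return input -> input.mapValues(String::toUpperCase);
	}

	@Bean
	public Consumer<KStream<String, String>> audit() {
		return input -> input.foreach((key, value) -> System.out.println(key + " -> " + value));
	}
}
----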
@@ -1,72 +0,0 @@
/*
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.springframework.cloud.stream.binder.kafka.streams.properties;

import org.springframework.boot.context.properties.ConfigurationProperties;

/**
* {@link ConfigurationProperties} that can be used by end user Kafka Stream applications.
* This class provides convenient ways to access the commonly used kafka stream properties
* from the user application. For example, windowing operations are common use cases in
* stream processing and one can provide window specific properties at runtime and use
* those properties in the applications using this class.
*
* @deprecated The properties exposed by this class can be used directly on Kafka Streams API in the application.
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
@Deprecated
public class KafkaStreamsApplicationSupportProperties {

private TimeWindow timeWindow;

public TimeWindow getTimeWindow() {
return this.timeWindow;
}

public void setTimeWindow(TimeWindow timeWindow) {
this.timeWindow = timeWindow;
}

/**
* Properties required by time windows.
*/
public static class TimeWindow {

private int length;

private int advanceBy;

public int getLength() {
return this.length;
}

public void setLength(int length) {
this.length = length;
}

public int getAdvanceBy() {
return this.advanceBy;
}

public void setAdvanceBy(int advanceBy) {
this.advanceBy = advanceBy;
}

}

}
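The removed class above exposed spring.cloud.stream.kafka.streams.timeWindow.* to applications. As its @deprecated note says, the same settings can be applied directly through the Kafka Streams API; a minimal sketch of the replacement, assuming a 5-second window that mirrors the timeWindow.length=5000 value used in the tests further down:

// Sketch only: configure the window in the topology instead of via binder properties.
// Requires org.apache.kafka.streams.KeyValue, org.apache.kafka.streams.kstream.{KStream, TimeWindows, Materialized}
// and java.time.Duration.
public KStream<String, Long> windowedCounts(KStream<String, String> words) {
	return words
			.groupByKey()
			.windowedBy(TimeWindows.of(Duration.ofSeconds(5)))   // was timeWindow.length=5000
			.count(Materialized.as("WordCounts"))
			.toStream()
			.map((windowedKey, count) -> new KeyValue<>(windowedKey.key(), count));
}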
@@ -57,10 +57,18 @@ public class KafkaStreamsBinderConfigurationProperties

private String applicationId;

private Map<String, String> functions = new HashMap<>();

private StateStoreRetry stateStoreRetry = new StateStoreRetry();

private Map<String, Functions> functions = new HashMap<>();

public Map<String, Functions> getFunctions() {
return functions;
}

public void setFunctions(Map<String, Functions> functions) {
this.functions = functions;
}

public StateStoreRetry getStateStoreRetry() {
return stateStoreRetry;
}
@@ -69,14 +77,6 @@ public class KafkaStreamsBinderConfigurationProperties
this.stateStoreRetry = stateStoreRetry;
}

public Map<String, String> getFunctions() {
return functions;
}

public void setFunctions(Map<String, String> functions) {
this.functions = functions;
}

public String getApplicationId() {
return this.applicationId;
}
@@ -125,4 +125,33 @@ public class KafkaStreamsBinderConfigurationProperties
}
}

public static class Functions {

/**
* Function specific application id.
*/
private String applicationId;

/**
* Function specific configuration to use.
*/
private Map<String, String> configuration;

public String getApplicationId() {
return applicationId;
}

public void setApplicationId(String applicationId) {
this.applicationId = applicationId;
}

public Map<String, String> getConfiguration() {
return configuration;
}

public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
}

}
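The new Functions holder above is bound from spring.cloud.stream.kafka.streams.binder.functions.&lt;function-name&gt;.*; MultipleFunctionsInSameAppTests further down exercises it. A sketch of the options as they might be passed to an application, with the property values copied from that test and everything else illustrative:

// Per-function application id and Kafka Streams configuration, one set per function bean.
new SpringApplication(MultipleFunctionsInSameApp.class).run(
		"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
		"--spring.cloud.stream.kafka.streams.binder.functions.analyze.applicationId=analyze-id-0",
		"--spring.cloud.stream.kafka.streams.binder.functions.process.configuration.client.id=process-client",
		"--spring.cloud.stream.kafka.streams.binder.functions.analyze.configuration.client.id=analyze-client");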
@@ -43,6 +43,11 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
*/
private String materializedAs;

/**
* {@link org.apache.kafka.streams.processor.TimestampExtractor} bean name to use for this consumer.
*/
private String timestampExtractorBeanName;

public String getApplicationId() {
return this.applicationId;
}
@@ -75,4 +80,11 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
this.materializedAs = materializedAs;
}

public String getTimestampExtractorBeanName() {
return timestampExtractorBeanName;
}

public void setTimestampExtractorBeanName(String timestampExtractorBeanName) {
this.timestampExtractorBeanName = timestampExtractorBeanName;
}
}
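The new timestampExtractorBeanName consumer property refers to a TimestampExtractor bean by name; the testTimeExtractor case added further down relies on it. A minimal sketch, where the binding name process-in-0 is illustrative:

// A TimestampExtractor exposed as a bean, then referenced by name from the binding:
// --spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.timestampExtractorBeanName=timestampExtractor
@Bean
public TimestampExtractor timestampExtractor() {
	return new WallclockTimestampExtractor();
}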
@@ -36,6 +36,11 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
*/
private String valueSerde;

/**
* {@link org.apache.kafka.streams.processor.StreamPartitioner} to be used on Kafka Streams producer.
*/
private String streamPartitionerBeanName;

public String getKeySerde() {
return this.keySerde;
}
@@ -52,4 +57,11 @@ public class KafkaStreamsProducerProperties extends KafkaProducerProperties {
this.valueSerde = valueSerde;
}

public String getStreamPartitionerBeanName() {
return this.streamPartitionerBeanName;
}

public void setStreamPartitionerBeanName(String streamPartitionerBeanName) {
this.streamPartitionerBeanName = streamPartitionerBeanName;
}
}
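Likewise, streamPartitionerBeanName points the outbound binding at a StreamPartitioner bean; the word-count partitioner test added below wires exactly this. A short sketch:

// A StreamPartitioner exposed as a bean, then referenced by name from the producer binding:
// --spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName=streamPartitioner
@Bean
public StreamPartitioner<String, WordCount> streamPartitioner() {
	return (topic, key, value, numPartitions) -> "foo".equals(key) ? 0 : 1;
}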
@@ -107,11 +107,12 @@ public class CollectionSerde<E> implements Serde<Collection<E>> {
*/
public CollectionSerde(Class<?> targetTypeForJsonSerde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
JsonSerde<E> jsonSerde = new JsonSerde(targetTypeForJsonSerde);
try (JsonSerde<E> jsonSerde = new JsonSerde(targetTypeForJsonSerde)) {

this.inner = Serdes.serdeFrom(
new CollectionSerializer<>(jsonSerde.serializer()),
new CollectionDeserializer<>(jsonSerde.deserializer(), collectionsClass));
this.inner = Serdes.serdeFrom(
new CollectionSerializer<>(jsonSerde.serializer()),
new CollectionDeserializer<>(jsonSerde.deserializer(), collectionsClass));
}
}

@Override
@@ -204,8 +205,10 @@ public class CollectionSerde<E> implements Serde<Collection<E>> {
final int records = dataInputStream.readInt();
for (int i = 0; i < records; i++) {
final byte[] valueBytes = new byte[dataInputStream.readInt()];
dataInputStream.read(valueBytes);
collection.add(valueDeserializer.deserialize(topic, valueBytes));
final int read = dataInputStream.read(valueBytes);
if (read != -1) {
collection.add(valueDeserializer.deserialize(topic, valueBytes));
}
}
}
catch (IOException e) {
@@ -1,5 +1,4 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsApplicationSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration
@@ -48,6 +48,7 @@ import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaSt
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -94,7 +95,9 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
@Test
public void testStateStoreRetrievalRetry() {

KafkaStreams mock = Mockito.mock(KafkaStreams.class);
StreamsBuilderFactoryBean mock = Mockito.mock(StreamsBuilderFactoryBean.class);
KafkaStreams mockKafkaStreams = Mockito.mock(KafkaStreams.class);
Mockito.when(mock.getKafkaStreams()).thenReturn(mockKafkaStreams);
KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
kafkaStreamsRegistry.registerKafkaStreams(mock);
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
@@ -111,7 +114,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {

}

Mockito.verify(mock, times(3)).store("foo", storeType);
Mockito.verify(mockKafkaStreams, times(3)).store("foo", storeType);
}

@Test
@@ -140,8 +143,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
}
}

private void receiveAndValidateFoo(ConfigurableApplicationContext context)
throws Exception {
private void receiveAndValidateFoo(ConfigurableApplicationContext context) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
@@ -79,21 +79,21 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);

ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.inputBindings.process=input",
"--spring.cloud.stream.function.outputBindings.process=output1,output2,output3",
"--spring.cloud.stream.function.bindings.process-in-0=input",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.function.bindings.process-out-0=output1",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.function.bindings.process-out-1=output2",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.function.bindings.process-out-2=output3",
"--spring.cloud.stream.bindings.output3.destination=bar",

"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId" +
"--spring.cloud.stream.kafka.streams.binder.applicationId" +
"=KafkaStreamsBinderWordCountBranchesFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString());
try {
@@ -21,6 +21,7 @@ import java.util.Date;
import java.util.Map;
import java.util.function.Function;

import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
@@ -30,14 +31,17 @@ import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.processor.StreamPartitioner;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -53,7 +57,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {

@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts", "counts-1");
"counts", "counts-1", "counts-2");

private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();

@@ -64,9 +68,10 @@ public class KafkaStreamsBinderWordCountFunctionTests {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1");
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts", "counts-1", "counts-2");
}
@AfterClass
@@ -80,12 +85,10 @@ public class KafkaStreamsBinderWordCountFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);

try (ConfigurableApplicationContext context = app.run(
"--spring.cloud.stream.function.inputBindings.process=input",
"--spring.cloud.stream.function.outputBindings.process=output",
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.bindings.process-out-0.destination=counts",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKstreamWordCountFunction",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
@@ -94,6 +97,9 @@ public class KafkaStreamsBinderWordCountFunctionTests {
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("words", "counts");
final MeterRegistry meterRegistry = context.getBean(MeterRegistry.class);
Thread.sleep(100);
assertThat(meterRegistry.get("stream.metrics.commit.total").gauge().value()).isEqualTo(1.0);
}
}

@@ -103,12 +109,10 @@ public class KafkaStreamsBinderWordCountFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);

try (ConfigurableApplicationContext context = app.run(
"--spring.cloud.stream.function.inputBindings.process=input",
"--spring.cloud.stream.function.outputBindings.process=output",
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words-1",
"--spring.cloud.stream.bindings.output.destination=counts-1",
"--spring.cloud.stream.bindings.process-in-0.destination=words-1",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -119,6 +123,45 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
}

@Test
public void testKstreamWordCountFunctionWithCustomProducerStreamPartitioner() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);

try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process-in-0.destination=words-2",
"--spring.cloud.stream.bindings.process-out-0.destination=counts-2",
"--spring.cloud.stream.bindings.process-out-0.producer.partitionCount=2",
"--spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.streamPartitionerBeanName" +
"=streamPartitioner",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words-2");
template.sendDefault("foo");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"foo\",\"count\":1")).isTrue();
assertThat(cr.partition() == 0) .isTrue();
template.sendDefault("bar");
cr = KafkaTestUtils.getSingleRecord(consumer, "counts-2");
assertThat(cr.value().contains("\"word\":\"bar\",\"count\":1")).isTrue();
assertThat(cr.partition() == 1) .isTrue();
}
finally {
pf.destroy();
}
}
}

private void receiveAndValidate(String in, String out) {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
@@ -187,8 +230,11 @@ public class KafkaStreamsBinderWordCountFunctionTests {
@EnableAutoConfiguration
public static class WordCountProcessorApplication {

@Autowired
InteractiveQueryService interactiveQueryService;

@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
public Function<KStream<Object, String>, KStream<String, WordCount>> process() {

return input -> input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
@@ -197,8 +243,13 @@ public class KafkaStreamsBinderWordCountFunctionTests {
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("foo-WordCounts"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value,
.map((key, value) -> new KeyValue<>(key.key(), new WordCount(key.key(), value,
new Date(key.window().start()), new Date(key.window().end()))));
}

@Bean
public StreamPartitioner<String, WordCount> streamPartitioner() {
return (t, k, v, n) -> k.equals("foo") ? 0 : 1;
}
}
}
@@ -16,6 +16,7 @@

package org.springframework.cloud.stream.binder.kafka.streams.function;

import java.time.Duration;
import java.util.Map;

import org.apache.kafka.common.serialization.Serdes;
@@ -58,8 +59,8 @@ public class KafkaStreamsFunctionStateStoreTests {

try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process_in.destination=words",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testKafkaStreamsFuncionWithMultipleStateStores",
"--spring.cloud.stream.bindings.process-in-0.destination=words",
"--spring.cloud.stream.kafka.streams.binder.application-id=testKafkaStreamsFuncionWithMultipleStateStores",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -104,9 +105,9 @@ public class KafkaStreamsFunctionStateStoreTests {
boolean processed;

@Bean
public java.util.function.Consumer<KStream<Object, String>> process() {
return input ->
input.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
public java.util.function.BiConsumer<KStream<Object, String>, KStream<Object, String>> process() {
return (input0, input1) ->
input0.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
@@ -142,7 +143,7 @@ public class KafkaStreamsFunctionStateStoreTests {
public StoreBuilder otherStore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("other-store",
3L, 3, 3L, false), Serdes.Long(),
Duration.ofSeconds(3), Duration.ofSeconds(3), false), Serdes.Long(),
Serdes.Long());
}
}
@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
@@ -36,6 +37,7 @@ import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -78,23 +80,34 @@ public class MultipleFunctionsInSameAppTests {
SpringApplication app = new SpringApplication(MultipleFunctionsInSameApp.class);
app.setWebApplicationType(WebApplicationType.NONE);

try (ConfigurableApplicationContext ignored = app.run(
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process_in.destination=purchases",
"--spring.cloud.stream.bindings.process_out_0.destination=coffee",
"--spring.cloud.stream.bindings.process_out_1.destination=electronics",
"--spring.cloud.stream.bindings.analyze_in_0.destination=coffee",
"--spring.cloud.stream.bindings.analyze_in_1.destination=electronics",
"--spring.cloud.stream.function.definition=process;analyze",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.bindings.process-out-1.destination=electronics",
"--spring.cloud.stream.bindings.analyze-in-0.destination=coffee",
"--spring.cloud.stream.bindings.analyze-in-1.destination=electronics",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.applicationId=analyze-id-0",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.functions.process.configuration.client.id=process-client",
"--spring.cloud.stream.kafka.streams.binder.functions.analyze.configuration.client.id=analyze-client",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate("purchases", "coffee", "electronics");

StreamsBuilderFactoryBean processStreamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);

StreamsBuilderFactoryBean analyzeStreamsBuilderFactoryBean = context
.getBean("&stream-builder-analyze", StreamsBuilderFactoryBean.class);

final Properties processStreamsConfiguration = processStreamsBuilderFactoryBean.getStreamsConfiguration();
final Properties analyzeStreamsConfiguration = analyzeStreamsBuilderFactoryBean.getStreamsConfiguration();

assertThat(processStreamsConfiguration.getProperty("client.id")).isEqualTo("process-client");
assertThat(analyzeStreamsConfiguration.getProperty("client.id")).isEqualTo("analyze-client");
}
}
@@ -59,8 +59,8 @@ public class SerdesProvidedAsBeansTests {
try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process_in.destination=purchases",
"--spring.cloud.stream.bindings.process_out.destination=coffee",
"--spring.cloud.stream.bindings.process-in-0.destination=purchases",
"--spring.cloud.stream.bindings.process-out-0.destination=coffee",
"--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-id-0",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
@@ -77,16 +77,16 @@ public class SerdesProvidedAsBeansTests {
final BindingServiceProperties bindingServiceProperties = context.getBean(BindingServiceProperties.class);
final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = context.getBean(KafkaStreamsExtendedBindingProperties.class);

final ConsumerProperties consumerProperties = bindingServiceProperties.getBindingProperties("process_in").getConsumer();
final KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties = kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties("process_in");
kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties("process_in");
final ConsumerProperties consumerProperties = bindingServiceProperties.getBindingProperties("process-in-0").getConsumer();
final KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties = kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties("input");
kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties("input");
final Serde<?> inboundValueSerde = keyValueSerdeResolver.getInboundValueSerde(consumerProperties, kafkaStreamsConsumerProperties, resolvableType.getGeneric(0));

Assert.isTrue(inboundValueSerde instanceof FooSerde, "Inbound Value Serde is not matched");

final ProducerProperties producerProperties = bindingServiceProperties.getBindingProperties("process_out").getProducer();
final KafkaStreamsProducerProperties kafkaStreamsProducerProperties = kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties("process_out");
kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties("process_out");
final ProducerProperties producerProperties = bindingServiceProperties.getBindingProperties("process-out-0").getProducer();
final KafkaStreamsProducerProperties kafkaStreamsProducerProperties = kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties("output");
kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties("output");
final Serde<?> outboundValueSerde = keyValueSerdeResolver.getOutboundValueSerde(producerProperties, kafkaStreamsProducerProperties, resolvableType.getGeneric(1));

Assert.isTrue(outboundValueSerde instanceof FooSerde, "Outbound Value Serde is not matched");
@@ -32,14 +32,17 @@ import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
@@ -69,12 +72,16 @@ public class StreamToGlobalKTableFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.inputBindings.process=order,customer,product",
"--spring.cloud.stream.function.outputBindings.process=enriched-order",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.function.bindings.process-in-0=order",
"--spring.cloud.stream.function.bindings.process-in-1=customer",
"--spring.cloud.stream.function.bindings.process-in-2=product",
"--spring.cloud.stream.function.bindings.process-out-0=enriched-order",
"--spring.cloud.stream.bindings.order.destination=orders",
"--spring.cloud.stream.bindings.customer.destination=customers",
"--spring.cloud.stream.bindings.product.destination=products",
"--spring.cloud.stream.bindings.enriched-order.destination=enriched-order",

"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
@@ -174,8 +181,57 @@ public class StreamToGlobalKTableFunctionTests {
}
}

@Test
public void testTimeExtractor() throws Exception {
SpringApplication app = new SpringApplication(OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);

try (ConfigurableApplicationContext context = app.run(
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=forTimeExtractorTest",
"--spring.cloud.stream.bindings.forTimeExtractorTest-in-0.destination=orders",
"--spring.cloud.stream.bindings.forTimeExtractorTest-in-1.destination=customers",
"--spring.cloud.stream.bindings.forTimeExtractorTest-in-2.destination=products",
"--spring.cloud.stream.bindings.forTimeExtractorTest-out-0.destination=enriched-order",
"--spring.cloud.stream.kafka.streams.bindings.forTimeExtractorTest-in-0.consumer.timestampExtractorBeanName" +
"=timestampExtractor",
"--spring.cloud.stream.kafka.streams.bindings.forTimeExtractorTest-in-1.consumer.timestampExtractorBeanName" +
"=timestampExtractor",
"--spring.cloud.stream.kafka.streams.bindings.forTimeExtractorTest-in-2.consumer.timestampExtractorBeanName" +
"=timestampExtractor",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"testTimeExtractor-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {

final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties =
context.getBean(KafkaStreamsExtendedBindingProperties.class);

final Map<String, KafkaStreamsBindingProperties> bindings = kafkaStreamsExtendedBindingProperties.getBindings();

final KafkaStreamsBindingProperties kafkaStreamsBindingProperties0 = bindings.get("forTimeExtractorTest-in-0");
final String timestampExtractorBeanName0 = kafkaStreamsBindingProperties0.getConsumer().getTimestampExtractorBeanName();
final TimestampExtractor timestampExtractor0 = context.getBean(timestampExtractorBeanName0, TimestampExtractor.class);
assertThat(timestampExtractor0).isNotNull();

final KafkaStreamsBindingProperties kafkaStreamsBindingProperties1 = bindings.get("forTimeExtractorTest-in-1");
final String timestampExtractorBeanName1 = kafkaStreamsBindingProperties1.getConsumer().getTimestampExtractorBeanName();
final TimestampExtractor timestampExtractor1 = context.getBean(timestampExtractorBeanName1, TimestampExtractor.class);
assertThat(timestampExtractor1).isNotNull();

final KafkaStreamsBindingProperties kafkaStreamsBindingProperties2 = bindings.get("forTimeExtractorTest-in-2");
final String timestampExtractorBeanName2 = kafkaStreamsBindingProperties2.getConsumer().getTimestampExtractorBeanName();
final TimestampExtractor timestampExtractor2 = context.getBean(timestampExtractorBeanName2, TimestampExtractor.class);
assertThat(timestampExtractor2).isNotNull();
}
}

@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {

@Bean
@@ -203,6 +259,20 @@ public class StreamToGlobalKTableFunctionTests {
)
);
}

@Bean
public Function<KStream<Long, Order>,
Function<KTable<Long, Customer>,
Function<GlobalKTable<Long, Product>, KStream<Long, Order>>>> forTimeExtractorTest() {
return orderStream ->
customers ->
products -> orderStream;
}

@Bean
public TimestampExtractor timestampExtractor() {
return new WallclockTimestampExtractor();
}
}

static class Order {
@@ -37,10 +37,10 @@ import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;
import org.junit.ClassRule;
import org.junit.Test;

@@ -120,14 +120,14 @@ public class StreamToTableJoinFunctionTests {

try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process_in_0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process_in_1.destination=user-regions-1",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process_in_0.consumer.applicationId" +
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=testStreamToTableBiConsumer",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {

@@ -177,15 +177,15 @@ public class StreamToTableJoinFunctionTests {
private void runTest(SpringApplication app, Consumer<String, Long> consumer) {
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process_in_0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process_in_1.destination=user-regions-1",
"--spring.cloud.stream.bindings.process_out.destination=output-topic-1",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-1",
"--spring.cloud.stream.bindings.process-out-0.destination=output-topic-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process_in_0.consumer.applicationId" +
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
@@ -307,21 +307,20 @@ public class StreamToTableJoinFunctionTests {

try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.inputBindings.process=input-1,input-2",
"--spring.cloud.stream.bindings.input-1.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-2.destination=user-regions-2",
"--spring.cloud.stream.bindings.process_out.destination=output-topic-2",
"--spring.cloud.stream.bindings.input-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-2.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.bindings.process-in-0.destination=user-clicks-2",
"--spring.cloud.stream.bindings.process-in-1.destination=user-regions-2",
"--spring.cloud.stream.bindings.process-out-0.destination=output-topic-2",
"--spring.cloud.stream.bindings.process-in-0.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.process-in-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.process-out-0.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id" +
"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.application-id" +
"=StreamToTableJoinFunctionTests-foobar",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
@@ -445,7 +444,7 @@ public class StreamToTableJoinFunctionTests {
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream()));
}
@@ -462,7 +461,7 @@ public class StreamToTableJoinFunctionTests {
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream());
}
@@ -34,15 +34,14 @@ import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -70,7 +69,11 @@ public abstract class DeserializationErrorHandlerByKafkaTests {

@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"DeserializationErrorHandlerByKafkaTests-out", "error.DeserializationErrorHandlerByKafkaTests-In.group", "error.word1.groupx", "error.word2.groupx");
"DeserializationErrorHandlerByKafkaTests-In",
"DeserializationErrorHandlerByKafkaTests-out",
"error.DeserializationErrorHandlerByKafkaTests-In.group",
"error.word1.groupx",
"error.word2.groupx");

private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@@ -81,7 +84,7 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
private static Consumer<String, String> consumer;

@BeforeClass
public static void setUp() throws Exception {
public static void setUp() {
System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers",
embeddedKafka.getBrokersAsString());
@@ -115,14 +118,13 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
extends DeserializationErrorHandlerByKafkaTests {

@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("DeserializationErrorHandlerByKafkaTests-In");
template.sendDefault("foobar");
template.sendDefault(1, null, "foobar");

Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
@@ -134,7 +136,8 @@ public abstract class DeserializationErrorHandlerByKafkaTests {

ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.DeserializationErrorHandlerByKafkaTests-In.group");
assertThat(cr.value().equals("foobar")).isTrue();
assertThat(cr.value()).isEqualTo("foobar");
assertThat(cr.partition()).isEqualTo(0); // custom partition function

// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
@@ -156,7 +159,6 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
extends DeserializationErrorHandlerByKafkaTests {

@Test
@SuppressWarnings("unchecked")
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
@@ -177,14 +179,12 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
embeddedKafka.consumeFromEmbeddedTopics(consumer1, "error.word1.groupx",
"error.word2.groupx");

// TODO: Investigate why the ordering matters below: i.e.
// if we consume from error.word1.groupx first, an exception is thrown.
ConsumerRecord<String, String> cr1 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word2.groupx");
assertThat(cr1.value().equals("foobar")).isTrue();
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word1.groupx");
assertThat(cr2.value().equals("foobar")).isTrue();
assertThat(cr1.value()).isEqualTo("foobar");
ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
"error.word2.groupx");
assertThat(cr2.value()).isEqualTo("foobar");

// Ensuring that the deserialization was indeed done by Kafka natively
verify(conversionDelegate, never()).deserializeOnInbound(any(Class.class),
@@ -197,12 +197,8 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@PropertySource("classpath:/org/springframework/cloud/stream/binder/kstream/integTest-1.properties")
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class WordCountProcessorApplication {

@Autowired
private TimeWindows timeWindows;

@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<Object, String> input) {
@@ -212,11 +208,16 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
.windowedBy(timeWindows).count(Materialized.as("foo-WordCounts-x"))
.windowedBy(TimeWindows.of(5000)).count(Materialized.as("foo-WordCounts-x"))
.toStream().map((key, value) -> new KeyValue<>(null,
"Count for " + key.key() + " : " + value));
}

@Bean
public DlqPartitionFunction partitionFunction() {
return (group, rec, ex) -> 0;
}

}

}
@@ -64,6 +64,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {

@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"foos",
"counts-id", "error.foos.foobar-group", "error.foos1.fooz-group",
"error.foos2.fooz-group");

@@ -112,19 +113,19 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTests",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqPartitions=1",
"spring.cloud.stream.bindings.input.group=foobar-group" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class DeserializationByBinderAndDlqTests
extends DeserializtionErrorHandlerByBinderTests {

@Test
@SuppressWarnings("unchecked")
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault(7, "hello");
template.sendDefault(1, 7, "hello");

Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
@@ -137,7 +138,8 @@ public abstract class DeserializtionErrorHandlerByBinderTests {

ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1,
"error.foos.foobar-group");
assertThat(cr.value().equals("hello")).isTrue();
assertThat(cr.value()).isEqualTo("hello");
assertThat(cr.partition()).isEqualTo(0);

// Ensuring that the deserialization was indeed done by the binder
verify(conversionDelegate).deserializeOnInbound(any(Class.class),
@@ -77,7 +77,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {

@Test
public void healthIndicatorUpTest() throws Exception {
try (ConfigurableApplicationContext context = singleStream()) {
try (ConfigurableApplicationContext context = singleStream("ApplicationHealthTest-xyz")) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in", "{\"id\":\"123\"}")),
@@ -87,7 +87,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {

@Test
public void healthIndicatorDownTest() throws Exception {
try (ConfigurableApplicationContext context = singleStream()) {
try (ConfigurableApplicationContext context = singleStream("ApplicationHealthTest-xyzabc")) {
receive(context,
Lists.newArrayList(new ProducerRecord<>("in", "{\"id\":\"123\"}"),
new ProducerRecord<>("in", "{\"id\":\"124\"}")),
@@ -186,7 +186,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
assertThat(health.getStatus()).isEqualTo(expected);
}

private ConfigurableApplicationContext singleStream() {
private ConfigurableApplicationContext singleStream(String applicationId) {
SpringApplication app = new SpringApplication(KStreamApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
return app.run("--server.port=0", "--spring.jmx.enabled=false",
@@ -198,7 +198,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "ApplicationHealthTest-xyz",
+ applicationId,
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString());
}
@@ -34,7 +34,7 @@ import org.springframework.stereotype.Component;

import static org.assertj.core.api.Assertions.assertThat;

public class MultiProcessorsWithSameNameTests {
public class MultiProcessorsWithSameNameAndBindingTests {

@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
@@ -44,19 +44,17 @@ public class MultiProcessorsWithSameNameTests {
.getEmbeddedKafka();

@Test
public void testBinderStartsSuccessfullyWhenTwoProcessorsWithSameNamesArePresent() {
public void testBinderStartsSuccessfullyWhenTwoProcessorsWithSameNamesAndBindingsPresent() {
SpringApplication app = new SpringApplication(
MultiProcessorsWithSameNameTests.WordCountProcessorApplication.class);
MultiProcessorsWithSameNameAndBindingTests.WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);

try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.input-2.destination=words",
"--spring.cloud.stream.bindings.input-1.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean1 = context
@@ -83,7 +81,7 @@ public class MultiProcessorsWithSameNameTests {
@Component
static class Bar {
@StreamListener
public void process(@Input("input-2") KStream<Object, String> input) {
public void process(@Input("input-1") KStream<Object, String> input) {
}
}
}
@@ -93,8 +91,5 @@ public class MultiProcessorsWithSameNameTests {
@Input("input-1")
KStream<?, ?> input1();

@Input("input-2")
KStream<?, ?> input2();

}
}
@@ -0,0 +1,146 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.springframework.cloud.stream.binder.kafka.streams.integration;

import java.time.Duration;
import java.util.Arrays;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;

import static org.assertj.core.api.Assertions.assertThat;
/**
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class OutboundValueNullSkippedConversionTest {
|
||||
|
||||
@ClassRule
|
||||
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
|
||||
"counts");
|
||||
|
||||
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
|
||||
.getEmbeddedKafka();
|
||||
|
||||
private static Consumer<String, String> consumer;
|
||||
|
||||
@BeforeClass
|
||||
public static void setUp() {
|
||||
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
|
||||
embeddedKafka);
|
||||
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
|
||||
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(
|
||||
consumerProps);
|
||||
consumer = cf.createConsumer();
|
||||
embeddedKafka.consumeFromEmbeddedTopics(consumer, "counts");
|
||||
}
|
||||
|
||||
@AfterClass
|
||||
public static void tearDown() {
|
||||
consumer.close();
|
||||
}
|
||||
|
||||
// The following test verifies the fixes made for this issue:
|
||||
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/774
|
||||
@Test
|
||||
public void testOutboundNullValueIsHandledGracefully()
|
||||
throws Exception {
|
||||
SpringApplication app = new SpringApplication(
|
||||
OutboundNullApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
try (ConfigurableApplicationContext context = app.run("--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.input.destination=words",
|
||||
"--spring.cloud.stream.bindings.output.destination=counts",
|
||||
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
|
||||
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=testOutboundNullValueIsHandledGracefully",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
|
||||
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
|
||||
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
|
||||
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
|
||||
"--spring.cloud.stream.kafka.binder.brokers="
|
||||
+ embeddedKafka.getBrokersAsString())) {
|
||||
|
||||
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
|
||||
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
|
||||
senderProps);
|
||||
try {
|
||||
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
|
||||
template.setDefaultTopic("words");
|
||||
template.sendDefault("foobar");
|
||||
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer,
|
||||
"counts");
|
||||
assertThat(cr.value() == null).isTrue();
|
||||
}
|
||||
finally {
|
||||
pf.destroy();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@EnableBinding(KafkaStreamsProcessor.class)
|
||||
@EnableAutoConfiguration
|
||||
static class OutboundNullApplication {
|
||||
|
||||
@StreamListener
|
||||
@SendTo("output")
|
||||
public KStream<?, KafkaStreamsBinderWordCountIntegrationTests.WordCount> process(
|
||||
@Input("input") KStream<Object, String> input) {
|
||||
|
||||
return input
|
||||
.flatMapValues(
|
||||
value -> Arrays.asList(value.toLowerCase().split("\\W+")))
|
||||
.map((key, value) -> new KeyValue<>(value, value))
|
||||
.groupByKey(Serialized.with(Serdes.String(), Serdes.String()))
|
||||
.windowedBy(TimeWindows.of(Duration.ofSeconds(5))).count(Materialized.as("foo-WordCounts"))
|
||||
.toStream()
|
||||
.map((key, value) -> new KeyValue<>(null, null));
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -14,173 +14,173 @@
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
//package org.springframework.cloud.stream.binder.kafka.streams.integration;
|
||||
//
|
||||
//import java.io.IOException;
|
||||
//import java.util.Map;
|
||||
//import java.util.Random;
|
||||
//import java.util.UUID;
|
||||
//
|
||||
//import com.example.Sensor;
|
||||
//import org.apache.kafka.clients.consumer.Consumer;
|
||||
//import org.apache.kafka.clients.consumer.ConsumerConfig;
|
||||
//import org.apache.kafka.clients.consumer.ConsumerRecord;
|
||||
//import org.apache.kafka.clients.producer.ProducerConfig;
|
||||
//import org.apache.kafka.common.serialization.ByteArrayDeserializer;
|
||||
//import org.apache.kafka.streams.KeyValue;
|
||||
//import org.apache.kafka.streams.kstream.KStream;
|
||||
//import org.junit.AfterClass;
|
||||
//import org.junit.BeforeClass;
|
||||
//import org.junit.ClassRule;
|
||||
//import org.junit.Ignore;
|
||||
//import org.junit.Test;
|
||||
//
|
||||
//import org.springframework.boot.SpringApplication;
|
||||
//import org.springframework.boot.WebApplicationType;
|
||||
//import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
|
||||
//import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
|
||||
//import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
|
||||
//import org.springframework.cloud.stream.annotation.EnableBinding;
|
||||
//import org.springframework.cloud.stream.annotation.Input;
|
||||
//import org.springframework.cloud.stream.annotation.StreamListener;
|
||||
//import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
|
||||
//import org.springframework.cloud.stream.binder.kafka.streams.integration.utils.TestAvroSerializer;
|
||||
//import org.springframework.context.ConfigurableApplicationContext;
|
||||
//import org.springframework.context.annotation.Bean;
|
||||
//import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
|
||||
//import org.springframework.kafka.core.DefaultKafkaProducerFactory;
|
||||
//import org.springframework.kafka.core.KafkaTemplate;
|
||||
//import org.springframework.kafka.test.EmbeddedKafkaBroker;
|
||||
//import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
|
||||
//import org.springframework.kafka.test.utils.KafkaTestUtils;
|
||||
//import org.springframework.messaging.Message;
|
||||
//import org.springframework.messaging.converter.MessageConverter;
|
||||
//import org.springframework.messaging.handler.annotation.SendTo;
|
||||
//import org.springframework.messaging.support.MessageBuilder;
|
||||
//import org.springframework.util.MimeTypeUtils;
|
||||
//
|
||||
//import static org.assertj.core.api.Assertions.assertThat;
|
||||
//
|
||||
//
|
||||
///**
|
||||
// * @author Soby Chacko
|
||||
// */
|
||||
//public class PerRecordAvroContentTypeTests {
|
||||
//
|
||||
// @ClassRule
|
||||
// public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
|
||||
// "received-sensors");
|
||||
//
|
||||
// private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
|
||||
// .getEmbeddedKafka();
|
||||
//
|
||||
// private static Consumer<String, byte[]> consumer;
|
||||
//
|
||||
// @BeforeClass
|
||||
// public static void setUp() throws Exception {
|
||||
// Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("avro-ct-test",
|
||||
// "false", embeddedKafka);
|
||||
//
|
||||
// // Receive the data as byte[]
|
||||
// consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
|
||||
// ByteArrayDeserializer.class);
|
||||
//
|
||||
// consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
|
||||
// DefaultKafkaConsumerFactory<String, byte[]> cf = new DefaultKafkaConsumerFactory<>(
|
||||
// consumerProps);
|
||||
// consumer = cf.createConsumer();
|
||||
// embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "received-sensors");
|
||||
// }
|
||||
//
|
||||
// @AfterClass
|
||||
// public static void tearDown() {
|
||||
// consumer.close();
|
||||
// }
|
||||
//
|
||||
// @Test
|
||||
// @Ignore
|
||||
// public void testPerRecordAvroConentTypeAndVerifySerialization() throws Exception {
|
||||
// SpringApplication app = new SpringApplication(SensorCountAvroApplication.class);
|
||||
// app.setWebApplicationType(WebApplicationType.NONE);
|
||||
//
|
||||
// try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
|
||||
// "--spring.jmx.enabled=false",
|
||||
// "--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
|
||||
// "--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
|
||||
// "--spring.cloud.stream.bindings.input.destination=sensors",
|
||||
// "--spring.cloud.stream.bindings.output.destination=received-sensors",
|
||||
// "--spring.cloud.stream.bindings.output.contentType=application/avro",
|
||||
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=per-record-avro-contentType-test",
|
||||
// "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
|
||||
// "--spring.cloud.stream.kafka.streams.binder.brokers="
|
||||
// + embeddedKafka.getBrokersAsString())) {
|
||||
//
|
||||
// Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
|
||||
// // Use a custom avro test serializer
|
||||
// senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
|
||||
// TestAvroSerializer.class);
|
||||
// DefaultKafkaProducerFactory<Integer, Sensor> pf = new DefaultKafkaProducerFactory<>(
|
||||
// senderProps);
|
||||
// try {
|
||||
// KafkaTemplate<Integer, Sensor> template = new KafkaTemplate<>(pf, true);
|
||||
//
|
||||
// Random random = new Random();
|
||||
// Sensor sensor = new Sensor();
|
||||
// sensor.setId(UUID.randomUUID().toString() + "-v1");
|
||||
// sensor.setAcceleration(random.nextFloat() * 10);
|
||||
// sensor.setVelocity(random.nextFloat() * 100);
|
||||
// sensor.setTemperature(random.nextFloat() * 50);
|
||||
// // Send with avro content type set.
|
||||
// Message<?> message = MessageBuilder.withPayload(sensor)
|
||||
// .setHeader("contentType", "application/avro").build();
|
||||
// template.setDefaultTopic("sensors");
|
||||
// template.send(message);
|
||||
//
|
||||
// // Serialized byte[] ^^ is received by the binding process and deserialzed
|
||||
// // it using avro converter.
|
||||
// // Then finally, the data will be output to a return topic as byte[]
|
||||
// // (using the same avro converter).
|
||||
//
|
||||
// // Receive the byte[] from return topic
|
||||
// ConsumerRecord<String, byte[]> cr = KafkaTestUtils
|
||||
// .getSingleRecord(consumer, "received-sensors");
|
||||
// final byte[] value = cr.value();
|
||||
//
|
||||
// // Convert the byte[] received back to avro object and verify that it is
|
||||
// // the same as the one we sent ^^.
|
||||
// AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter();
|
||||
//
|
||||
// Message<?> receivedMessage = MessageBuilder.withPayload(value)
|
||||
// .setHeader("contentType",
|
||||
// MimeTypeUtils.parseMimeType("application/avro"))
|
||||
// .build();
|
||||
// Sensor messageConverted = (Sensor) avroSchemaMessageConverter
|
||||
// .fromMessage(receivedMessage, Sensor.class);
|
||||
// assertThat(messageConverted).isEqualTo(sensor);
|
||||
// }
|
||||
// finally {
|
||||
// pf.destroy();
|
||||
// }
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// @EnableBinding(KafkaStreamsProcessor.class)
|
||||
// @EnableAutoConfiguration
|
||||
// static class SensorCountAvroApplication {
|
||||
//
|
||||
// @StreamListener
|
||||
// @SendTo("output")
|
||||
// public KStream<?, Sensor> process(@Input("input") KStream<Object, Sensor> input) {
|
||||
// // return the same Sensor object unchanged so that we can do test
|
||||
// // verifications
|
||||
// return input.map(KeyValue::new);
|
||||
// }
|
||||
//
|
||||
// @Bean
|
||||
// public MessageConverter sensorMessageConverter() throws IOException {
|
||||
// return new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
|
||||
// }
|
||||
//
|
||||
// }
|
||||
//
|
||||
//}
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.integration;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Map;
|
||||
import java.util.Random;
|
||||
import java.util.UUID;
|
||||
|
||||
import com.example.Sensor;
|
||||
import org.apache.kafka.clients.consumer.Consumer;
|
||||
import org.apache.kafka.clients.consumer.ConsumerConfig;
|
||||
import org.apache.kafka.clients.consumer.ConsumerRecord;
|
||||
import org.apache.kafka.clients.producer.ProducerConfig;
|
||||
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
|
||||
import org.apache.kafka.streams.KeyValue;
|
||||
import org.apache.kafka.streams.kstream.KStream;
|
||||
import org.junit.AfterClass;
|
||||
import org.junit.BeforeClass;
|
||||
import org.junit.ClassRule;
|
||||
import org.junit.Ignore;
|
||||
import org.junit.Test;
|
||||
|
||||
import org.springframework.boot.SpringApplication;
|
||||
import org.springframework.boot.WebApplicationType;
|
||||
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
|
||||
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
|
||||
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
|
||||
import org.springframework.cloud.stream.annotation.EnableBinding;
|
||||
import org.springframework.cloud.stream.annotation.Input;
|
||||
import org.springframework.cloud.stream.annotation.StreamListener;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.integration.utils.TestAvroSerializer;
|
||||
import org.springframework.context.ConfigurableApplicationContext;
|
||||
import org.springframework.context.annotation.Bean;
|
||||
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
|
||||
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
|
||||
import org.springframework.kafka.core.KafkaTemplate;
|
||||
import org.springframework.kafka.test.EmbeddedKafkaBroker;
|
||||
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
|
||||
import org.springframework.kafka.test.utils.KafkaTestUtils;
|
||||
import org.springframework.messaging.Message;
|
||||
import org.springframework.messaging.converter.MessageConverter;
|
||||
import org.springframework.messaging.handler.annotation.SendTo;
|
||||
import org.springframework.messaging.support.MessageBuilder;
|
||||
import org.springframework.util.MimeTypeUtils;
|
||||
|
||||
import static org.assertj.core.api.Assertions.assertThat;
|
||||
|
||||
|
||||
/**
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class PerRecordAvroContentTypeTests {
|
||||
|
||||
@ClassRule
|
||||
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
|
||||
"received-sensors");
|
||||
|
||||
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
|
||||
.getEmbeddedKafka();
|
||||
|
||||
private static Consumer<String, byte[]> consumer;
|
||||
|
||||
@BeforeClass
|
||||
public static void setUp() throws Exception {
|
||||
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("avro-ct-test",
|
||||
"false", embeddedKafka);
|
||||
|
||||
// Receive the data as byte[]
|
||||
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
|
||||
ByteArrayDeserializer.class);
|
||||
|
||||
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
|
||||
DefaultKafkaConsumerFactory<String, byte[]> cf = new DefaultKafkaConsumerFactory<>(
|
||||
consumerProps);
|
||||
consumer = cf.createConsumer();
|
||||
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "received-sensors");
|
||||
}
|
||||
|
||||
@AfterClass
|
||||
public static void tearDown() {
|
||||
consumer.close();
|
||||
}
|
||||
|
||||
@Test
|
||||
@Ignore
|
||||
public void testPerRecordAvroConentTypeAndVerifySerialization() throws Exception {
|
||||
SpringApplication app = new SpringApplication(SensorCountAvroApplication.class);
|
||||
app.setWebApplicationType(WebApplicationType.NONE);
|
||||
|
||||
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
|
||||
"--spring.jmx.enabled=false",
|
||||
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
|
||||
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
|
||||
"--spring.cloud.stream.bindings.input.destination=sensors",
|
||||
"--spring.cloud.stream.bindings.output.destination=received-sensors",
|
||||
"--spring.cloud.stream.bindings.output.contentType=application/avro",
|
||||
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=per-record-avro-contentType-test",
|
||||
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
|
||||
"--spring.cloud.stream.kafka.streams.binder.brokers="
|
||||
+ embeddedKafka.getBrokersAsString())) {
|
||||
|
||||
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
|
||||
// Use a custom avro test serializer
|
||||
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
|
||||
TestAvroSerializer.class);
|
||||
DefaultKafkaProducerFactory<Integer, Sensor> pf = new DefaultKafkaProducerFactory<>(
|
||||
senderProps);
|
||||
try {
|
||||
KafkaTemplate<Integer, Sensor> template = new KafkaTemplate<>(pf, true);
|
||||
|
||||
Random random = new Random();
|
||||
Sensor sensor = new Sensor();
|
||||
sensor.setId(UUID.randomUUID().toString() + "-v1");
|
||||
sensor.setAcceleration(random.nextFloat() * 10);
|
||||
sensor.setVelocity(random.nextFloat() * 100);
|
||||
sensor.setTemperature(random.nextFloat() * 50);
|
||||
// Send with avro content type set.
|
||||
Message<?> message = MessageBuilder.withPayload(sensor)
|
||||
.setHeader("contentType", "application/avro").build();
|
||||
template.setDefaultTopic("sensors");
|
||||
template.send(message);
|
||||
|
||||
// Serialized byte[] ^^ is received by the binding process and deserialized
// using the avro converter.
|
||||
// Then finally, the data will be output to a return topic as byte[]
|
||||
// (using the same avro converter).
|
||||
|
||||
// Receive the byte[] from return topic
|
||||
ConsumerRecord<String, byte[]> cr = KafkaTestUtils
|
||||
.getSingleRecord(consumer, "received-sensors");
|
||||
final byte[] value = cr.value();
|
||||
|
||||
// Convert the byte[] received back to avro object and verify that it is
|
||||
// the same as the one we sent ^^.
|
||||
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter();
|
||||
|
||||
Message<?> receivedMessage = MessageBuilder.withPayload(value)
|
||||
.setHeader("contentType",
|
||||
MimeTypeUtils.parseMimeType("application/avro"))
|
||||
.build();
|
||||
Sensor messageConverted = (Sensor) avroSchemaMessageConverter
|
||||
.fromMessage(receivedMessage, Sensor.class);
|
||||
assertThat(messageConverted).isEqualTo(sensor);
|
||||
}
|
||||
finally {
|
||||
pf.destroy();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@EnableBinding(KafkaStreamsProcessor.class)
|
||||
@EnableAutoConfiguration
|
||||
static class SensorCountAvroApplication {
|
||||
|
||||
@StreamListener
|
||||
@SendTo("output")
|
||||
public KStream<?, Sensor> process(@Input("input") KStream<Object, Sensor> input) {
|
||||
// return the same Sensor object unchanged so that we can do test
|
||||
// verifications
|
||||
return input.map(KeyValue::new);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public MessageConverter sensorMessageConverter() throws IOException {
|
||||
return new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -14,50 +14,50 @@
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
//package org.springframework.cloud.stream.binder.kafka.streams.integration.utils;
|
||||
//
|
||||
//import java.util.HashMap;
|
||||
//import java.util.Map;
|
||||
//
|
||||
//import org.apache.kafka.common.serialization.Serializer;
|
||||
//
|
||||
//import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
|
||||
//import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
|
||||
//import org.springframework.messaging.Message;
|
||||
//import org.springframework.messaging.MessageHeaders;
|
||||
//import org.springframework.messaging.support.MessageBuilder;
|
||||
//
|
||||
///**
|
||||
// * Custom avro serializer intended to be used for testing only.
|
||||
// *
|
||||
// * @param <S> Target type to serialize
|
||||
// * @author Soby Chacko
|
||||
// */
|
||||
//public class TestAvroSerializer<S> implements Serializer<S> {
|
||||
//
|
||||
// public TestAvroSerializer() {
|
||||
// }
|
||||
//
|
||||
// @Override
|
||||
// public void configure(Map<String, ?> configs, boolean isKey) {
|
||||
//
|
||||
// }
|
||||
//
|
||||
// @Override
|
||||
// public byte[] serialize(String topic, S data) {
|
||||
// AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
|
||||
// Message<?> message = MessageBuilder.withPayload(data).build();
|
||||
// Map<String, Object> headers = new HashMap<>(message.getHeaders());
|
||||
// headers.put(MessageHeaders.CONTENT_TYPE, "application/avro");
|
||||
// MessageHeaders messageHeaders = new MessageHeaders(headers);
|
||||
// final Object payload = avroSchemaMessageConverter
|
||||
// .toMessage(message.getPayload(), messageHeaders).getPayload();
|
||||
// return (byte[]) payload;
|
||||
// }
|
||||
//
|
||||
// @Override
|
||||
// public void close() {
|
||||
//
|
||||
// }
|
||||
//
|
||||
//}
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.integration.utils;
|
||||
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.kafka.common.serialization.Serializer;
|
||||
|
||||
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
|
||||
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
|
||||
import org.springframework.messaging.Message;
|
||||
import org.springframework.messaging.MessageHeaders;
|
||||
import org.springframework.messaging.support.MessageBuilder;
|
||||
|
||||
/**
|
||||
* Custom avro serializer intended to be used for testing only.
|
||||
*
|
||||
* @param <S> Target type to serialize
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class TestAvroSerializer<S> implements Serializer<S> {
|
||||
|
||||
public TestAvroSerializer() {
|
||||
}
|
||||
|
||||
@Override
|
||||
public void configure(Map<String, ?> configs, boolean isKey) {
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
public byte[] serialize(String topic, S data) {
|
||||
AvroSchemaMessageConverter avroSchemaMessageConverter = new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl());
|
||||
Message<?> message = MessageBuilder.withPayload(data).build();
|
||||
Map<String, Object> headers = new HashMap<>(message.getHeaders());
|
||||
headers.put(MessageHeaders.CONTENT_TYPE, "application/avro");
|
||||
MessageHeaders messageHeaders = new MessageHeaders(headers);
|
||||
final Object payload = avroSchemaMessageConverter
|
||||
.toMessage(message.getPayload(), messageHeaders).getPayload();
|
||||
return (byte[]) payload;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void close() {
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -14,63 +14,63 @@
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
//package org.springframework.cloud.stream.binder.kafka.streams.serde;
|
||||
//
|
||||
//import java.util.ArrayList;
|
||||
//import java.util.HashMap;
|
||||
//import java.util.List;
|
||||
//import java.util.Map;
|
||||
//import java.util.Random;
|
||||
//import java.util.UUID;
|
||||
//
|
||||
//import com.example.Sensor;
|
||||
//import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
//import org.junit.Ignore;
|
||||
//import org.junit.Test;
|
||||
//
|
||||
//import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
|
||||
//import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
|
||||
//import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
|
||||
//import org.springframework.messaging.converter.MessageConverter;
|
||||
//
|
||||
//import static org.assertj.core.api.Assertions.assertThat;
|
||||
//
|
||||
///**
|
||||
// * Refer {@link MessageConverterDelegateSerde} for motivations.
|
||||
// *
|
||||
// * @author Soby Chacko
|
||||
// */
|
||||
//public class MessageConverterDelegateSerdeTest {
|
||||
//
|
||||
// @Test
|
||||
// @SuppressWarnings("unchecked")
|
||||
// @Ignore
|
||||
// public void testCompositeNonNativeSerdeUsingAvroContentType() {
|
||||
// Random random = new Random();
|
||||
// Sensor sensor = new Sensor();
|
||||
// sensor.setId(UUID.randomUUID().toString() + "-v1");
|
||||
// sensor.setAcceleration(random.nextFloat() * 10);
|
||||
// sensor.setVelocity(random.nextFloat() * 100);
|
||||
// sensor.setTemperature(random.nextFloat() * 50);
|
||||
//
|
||||
// List<MessageConverter> messageConverters = new ArrayList<>();
|
||||
// messageConverters.add(new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl()));
|
||||
// CompositeMessageConverterFactory compositeMessageConverterFactory = new CompositeMessageConverterFactory(
|
||||
// messageConverters, new ObjectMapper());
|
||||
// MessageConverterDelegateSerde messageConverterDelegateSerde = new MessageConverterDelegateSerde(
|
||||
// compositeMessageConverterFactory.getMessageConverterForAllRegistered());
|
||||
//
|
||||
// Map<String, Object> configs = new HashMap<>();
|
||||
// configs.put("valueClass", Sensor.class);
|
||||
// configs.put("contentType", "application/avro");
|
||||
// messageConverterDelegateSerde.configure(configs, false);
|
||||
// final byte[] serialized = messageConverterDelegateSerde.serializer().serialize(null,
|
||||
// sensor);
|
||||
//
|
||||
// final Object deserialized = messageConverterDelegateSerde.deserializer()
|
||||
// .deserialize(null, serialized);
|
||||
//
|
||||
// assertThat(deserialized).isEqualTo(sensor);
|
||||
// }
|
||||
//
|
||||
//}
|
||||
package org.springframework.cloud.stream.binder.kafka.streams.serde;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Random;
|
||||
import java.util.UUID;
|
||||
|
||||
import com.example.Sensor;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import org.junit.Ignore;
|
||||
import org.junit.Test;
|
||||
|
||||
import org.springframework.cloud.schema.registry.avro.AvroSchemaMessageConverter;
|
||||
import org.springframework.cloud.schema.registry.avro.AvroSchemaServiceManagerImpl;
|
||||
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
|
||||
import org.springframework.messaging.converter.MessageConverter;
|
||||
|
||||
import static org.assertj.core.api.Assertions.assertThat;
|
||||
|
||||
/**
|
||||
* Refer {@link MessageConverterDelegateSerde} for motivations.
|
||||
*
|
||||
* @author Soby Chacko
|
||||
*/
|
||||
public class MessageConverterDelegateSerdeTest {
|
||||
|
||||
@Test
|
||||
@SuppressWarnings("unchecked")
|
||||
@Ignore
|
||||
public void testCompositeNonNativeSerdeUsingAvroContentType() {
|
||||
Random random = new Random();
|
||||
Sensor sensor = new Sensor();
|
||||
sensor.setId(UUID.randomUUID().toString() + "-v1");
|
||||
sensor.setAcceleration(random.nextFloat() * 10);
|
||||
sensor.setVelocity(random.nextFloat() * 100);
|
||||
sensor.setTemperature(random.nextFloat() * 50);
|
||||
|
||||
List<MessageConverter> messageConverters = new ArrayList<>();
|
||||
messageConverters.add(new AvroSchemaMessageConverter(new AvroSchemaServiceManagerImpl()));
|
||||
CompositeMessageConverterFactory compositeMessageConverterFactory = new CompositeMessageConverterFactory(
|
||||
messageConverters, new ObjectMapper());
|
||||
MessageConverterDelegateSerde messageConverterDelegateSerde = new MessageConverterDelegateSerde(
|
||||
compositeMessageConverterFactory.getMessageConverterForAllRegistered());
|
||||
|
||||
Map<String, Object> configs = new HashMap<>();
|
||||
configs.put("valueClass", Sensor.class);
|
||||
configs.put("contentType", "application/avro");
|
||||
messageConverterDelegateSerde.configure(configs, false);
|
||||
final byte[] serialized = messageConverterDelegateSerde.serializer().serialize(null,
|
||||
sensor);
|
||||
|
||||
final Object deserialized = messageConverterDelegateSerde.deserializer()
|
||||
.deserialize(null, serialized);
|
||||
|
||||
assertThat(deserialized).isEqualTo(sensor);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -4,5 +4,3 @@ spring.cloud.stream.bindings.output.contentType=application/json
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.timeWindow.length=5000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.0.0.M4</version>
<version>3.0.0.RC2</version>
</parent>

<dependencies>

@@ -0,0 +1,467 @@
|
||||
/*
|
||||
* Copyright 2017-2019 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* https://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.nio.charset.StandardCharsets;
|
||||
import java.util.Arrays;
|
||||
import java.util.HashMap;
|
||||
import java.util.LinkedHashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.DeserializationContext;
|
||||
import com.fasterxml.jackson.databind.JsonNode;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import com.fasterxml.jackson.databind.deser.std.StdNodeBasedDeserializer;
|
||||
import com.fasterxml.jackson.databind.module.SimpleModule;
|
||||
import com.fasterxml.jackson.databind.node.TextNode;
|
||||
import com.fasterxml.jackson.databind.type.TypeFactory;
|
||||
import org.apache.kafka.common.header.Header;
|
||||
import org.apache.kafka.common.header.Headers;
|
||||
import org.apache.kafka.common.header.internals.RecordHeader;
|
||||
|
||||
import org.springframework.kafka.support.AbstractKafkaHeaderMapper;
|
||||
import org.springframework.kafka.support.JacksonUtils;
|
||||
import org.springframework.lang.Nullable;
|
||||
import org.springframework.messaging.MessageHeaders;
|
||||
import org.springframework.util.Assert;
|
||||
import org.springframework.util.ClassUtils;
|
||||
import org.springframework.util.MimeType;
|
||||
|
||||
/**
|
||||
* Custom header mapper for Apache Kafka. This is identical to the {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}
|
||||
* from Spring Kafka. It is provided to address some interoperability issues between Spring Cloud Stream 3.0.x
* and 2.x apps, where mime types passed as regular {@link MimeType} values in the header are not deserialized properly.
|
||||
* Once those concerns are addressed in Spring Kafka, we will deprecate this class and remove it in a future binder release.
|
||||
*
|
||||
* Most headers in {@link org.springframework.kafka.support.KafkaHeaders} are not mapped onto outbound messages.
|
||||
* The exceptions are correlation and reply headers for request/reply
|
||||
* messaging.
|
||||
* Header types are added to a special header {@link #JSON_TYPES}.
|
||||
*
|
||||
* @author Gary Russell
|
||||
* @author Artem Bilan
|
||||
* @author Soby Chacko
|
||||
*
|
||||
* @since 3.0.0
|
||||
*
|
||||
*/
|
||||
public class BinderHeaderMapper extends AbstractKafkaHeaderMapper {
|
||||
|
||||
private static final String JAVA_LANG_STRING = "java.lang.String";
|
||||
|
||||
private static final List<String> DEFAULT_TRUSTED_PACKAGES =
|
||||
Arrays.asList(
|
||||
"java.lang",
|
||||
"java.net",
|
||||
"java.util",
|
||||
"org.springframework.util"
|
||||
);
|
||||
|
||||
private static final List<String> DEFAULT_TO_STRING_CLASSES =
|
||||
Arrays.asList(
|
||||
"org.springframework.util.MimeType",
|
||||
"org.springframework.http.MediaType"
|
||||
);
|
||||
|
||||
/**
|
||||
* Header name for java types of other headers.
|
||||
*/
|
||||
public static final String JSON_TYPES = "spring_json_header_types";
|
||||
|
||||
private final ObjectMapper objectMapper;
|
||||
|
||||
private final Set<String> trustedPackages = new LinkedHashSet<>(DEFAULT_TRUSTED_PACKAGES);
|
||||
|
||||
private final Set<String> toStringClasses = new LinkedHashSet<>(DEFAULT_TO_STRING_CLASSES);
|
||||
|
||||
private boolean encodeStrings;
|
||||
|
||||
/**
|
||||
* Construct an instance with the default object mapper and default header patterns
|
||||
* for outbound headers; all inbound headers are mapped. The default pattern list is
|
||||
* {@code "!id", "!timestamp" and "*"}. In addition, most of the headers in
|
||||
* {@link org.springframework.kafka.support.KafkaHeaders} are never mapped as headers since they represent data in
|
||||
* consumer/producer records.
|
||||
* @see #BinderHeaderMapper(ObjectMapper)
|
||||
*/
|
||||
public BinderHeaderMapper() {
|
||||
this(JacksonUtils.enhancedObjectMapper());
|
||||
}
|
||||
|
||||
/**
|
||||
* Construct an instance with the provided object mapper and default header patterns
|
||||
* for outbound headers; all inbound headers are mapped. The patterns are applied in
|
||||
* order, stopping on the first match (positive or negative). Patterns are negated by
|
||||
* preceding them with "!". The default pattern list is
|
||||
* {@code "!id", "!timestamp" and "*"}. In addition, most of the headers in
|
||||
* {@link org.springframework.kafka.support.KafkaHeaders} are never mapped as headers since they represent data in
|
||||
* consumer/producer records.
|
||||
* @param objectMapper the object mapper.
|
||||
* @see org.springframework.util.PatternMatchUtils#simpleMatch(String, String)
|
||||
*/
|
||||
public BinderHeaderMapper(ObjectMapper objectMapper) {
|
||||
this(objectMapper,
|
||||
"!" + MessageHeaders.ID,
|
||||
"!" + MessageHeaders.TIMESTAMP,
|
||||
"*");
|
||||
}
|
||||
|
||||
/**
|
||||
* Construct an instance with a default object mapper and the provided header patterns
|
||||
* for outbound headers; all inbound headers are mapped. The patterns are applied in
|
||||
* order, stopping on the first match (positive or negative). Patterns are negated by
|
||||
* preceding them with "!". The patterns will replace the default patterns; you
|
||||
* generally should not map the {@code "id" and "timestamp"} headers. Note:
|
||||
* most of the headers in {@link org.springframework.kafka.support.KafkaHeaders} are never mapped as headers since they
|
||||
* represent data in consumer/producer records.
|
||||
* @param patterns the patterns.
|
||||
* @see org.springframework.util.PatternMatchUtils#simpleMatch(String, String)
|
||||
*/
|
||||
public BinderHeaderMapper(String... patterns) {
|
||||
this(new ObjectMapper(), patterns);
|
||||
}
|
||||
|
||||
/**
|
||||
* Construct an instance with the provided object mapper and the provided header
|
||||
* patterns for outbound headers; all inbound headers are mapped. The patterns are
|
||||
* applied in order, stopping on the first match (positive or negative). Patterns are
|
||||
* negated by preceding them with "!". The patterns will replace the default patterns;
|
||||
* you generally should not map the {@code "id" and "timestamp"} headers. Note: most
|
||||
* of the headers in {@link org.springframework.kafka.support.KafkaHeaders} are never mapped as headers since they
|
||||
* represent data in consumer/producer records.
|
||||
* @param objectMapper the object mapper.
|
||||
* @param patterns the patterns.
|
||||
* @see org.springframework.util.PatternMatchUtils#simpleMatch(String, String)
|
||||
*/
|
||||
public BinderHeaderMapper(ObjectMapper objectMapper, String... patterns) {
|
||||
super(patterns);
|
||||
Assert.notNull(objectMapper, "'objectMapper' must not be null");
|
||||
Assert.noNullElements(patterns, "'patterns' must not have null elements");
|
||||
this.objectMapper = objectMapper;
|
||||
this.objectMapper
|
||||
.registerModule(new SimpleModule().addDeserializer(MimeType.class, new MimeTypeJsonDeserializer()));
|
||||
}
|
||||
|
||||
/**
|
||||
* Return the object mapper.
|
||||
* @return the mapper.
|
||||
*/
|
||||
protected ObjectMapper getObjectMapper() {
|
||||
return this.objectMapper;
|
||||
}
|
||||
|
||||
/**
|
||||
* Provide direct access to the trusted packages set for subclasses.
|
||||
* @return the trusted packages.
|
||||
* @since 2.2
|
||||
*/
|
||||
protected Set<String> getTrustedPackages() {
|
||||
return this.trustedPackages;
|
||||
}
|
||||
|
||||
/**
|
||||
* Provide direct access to the toString() classes for subclasses.
|
||||
* @return the toString() classes.
|
||||
* @since 2.2
|
||||
*/
|
||||
protected Set<String> getToStringClasses() {
|
||||
return this.toStringClasses;
|
||||
}
|
||||
|
||||
protected boolean isEncodeStrings() {
|
||||
return this.encodeStrings;
|
||||
}
|
||||
|
||||
/**
|
||||
* Set to true to encode String-valued headers as JSON ("..."); by default, just the
* raw String value is converted to a byte array using the configured charset. Set to
* true if a consumer of the outbound record is using Spring for Apache Kafka version
* less than 2.3.
|
||||
* @param encodeStrings true to encode (default false).
|
||||
* @since 2.3
|
||||
*/
|
||||
public void setEncodeStrings(boolean encodeStrings) {
|
||||
this.encodeStrings = encodeStrings;
|
||||
}
|
||||
|
||||
/**
|
||||
* Add packages to the trusted packages list (default {@code java.util, java.lang}) used
|
||||
* when constructing objects from JSON.
|
||||
* If any of the supplied packages is {@code "*"}, all packages are trusted.
|
||||
* If a class for a non-trusted package is encountered, the header is returned to the
|
||||
* application with a value of type {@link NonTrustedHeaderType}.
|
||||
* @param packagesToTrust the packages to trust.
|
||||
*/
|
||||
public void addTrustedPackages(String... packagesToTrust) {
|
||||
if (packagesToTrust != null) {
|
||||
for (String whiteList : packagesToTrust) {
|
||||
if ("*".equals(whiteList)) {
|
||||
this.trustedPackages.clear();
|
||||
break;
|
||||
}
|
||||
else {
|
||||
this.trustedPackages.add(whiteList);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Add class names that the outbound mapper should perform toString() operations on
|
||||
* before mapping.
|
||||
* @param classNames the class names.
|
||||
* @since 2.2
|
||||
*/
|
||||
public void addToStringClasses(String... classNames) {
|
||||
this.toStringClasses.addAll(Arrays.asList(classNames));
|
||||
}
|
||||
|
||||
@Override
|
||||
public void fromHeaders(MessageHeaders headers, Headers target) {
|
||||
final Map<String, String> jsonHeaders = new HashMap<>();
|
||||
final ObjectMapper headerObjectMapper = getObjectMapper();
|
||||
headers.forEach((key, rawValue) -> {
|
||||
if (matches(key, rawValue)) {
|
||||
Object valueToAdd = headerValueToAddOut(key, rawValue);
|
||||
if (valueToAdd instanceof byte[]) {
|
||||
target.add(new RecordHeader(key, (byte[]) valueToAdd));
|
||||
}
|
||||
else {
|
||||
try {
|
||||
String className = valueToAdd.getClass().getName();
|
||||
if (this.toStringClasses.contains(className)) {
|
||||
valueToAdd = valueToAdd.toString();
|
||||
className = JAVA_LANG_STRING;
|
||||
}
|
||||
if (!this.encodeStrings
|
||||
&& !MimeType.class.isAssignableFrom(rawValue.getClass())
|
||||
&& valueToAdd instanceof String) {
|
||||
target.add(new RecordHeader(key, ((String) valueToAdd).getBytes(getCharset())));
|
||||
className = JAVA_LANG_STRING;
|
||||
}
|
||||
else {
|
||||
target.add(new RecordHeader(key, headerObjectMapper.writeValueAsBytes(valueToAdd)));
|
||||
}
|
||||
jsonHeaders.put(key, className);
|
||||
}
|
||||
catch (Exception e) {
|
||||
logger.debug(e, () -> "Could not map " + key + " with type " + rawValue.getClass().getName());
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
if (jsonHeaders.size() > 0) {
|
||||
try {
|
||||
target.add(new RecordHeader(JSON_TYPES, headerObjectMapper.writeValueAsBytes(jsonHeaders)));
|
||||
}
|
||||
catch (IllegalStateException | JsonProcessingException e) {
|
||||
logger.error(e, "Could not add json types header");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void toHeaders(Headers source, final Map<String, Object> headers) {
|
||||
final Map<String, String> jsonTypes = decodeJsonTypes(source);
|
||||
source.forEach(header -> {
|
||||
if (!(header.key().equals(JSON_TYPES))) {
|
||||
if (jsonTypes != null && jsonTypes.containsKey(header.key())) {
|
||||
String requestedType = jsonTypes.get(header.key());
|
||||
populateJsonValueHeader(header, requestedType, headers);
|
||||
}
|
||||
else {
|
||||
headers.put(header.key(), headerValueToAddIn(header));
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
private void populateJsonValueHeader(Header header, String requestedType, Map<String, Object> headers) {
|
||||
Class<?> type = Object.class;
|
||||
boolean trusted = false;
|
||||
try {
|
||||
trusted = trusted(requestedType);
|
||||
if (trusted) {
|
||||
type = ClassUtils.forName(requestedType, null);
|
||||
}
|
||||
}
|
||||
catch (Exception e) {
|
||||
logger.error(e, () -> "Could not load class for header: " + header.key());
|
||||
}
|
||||
if (String.class.equals(type) && (header.value().length == 0 || header.value()[0] != '"')) {
|
||||
headers.put(header.key(), new String(header.value(), getCharset()));
|
||||
}
|
||||
else {
|
||||
if (trusted) {
|
||||
try {
|
||||
Object value = decodeValue(header, type);
|
||||
headers.put(header.key(), value);
|
||||
}
|
||||
catch (IOException e) {
|
||||
logger.error(e, () ->
|
||||
"Could not decode json type: " + new String(header.value()) + " for key: "
|
||||
+ header.key());
|
||||
headers.put(header.key(), header.value());
|
||||
}
|
||||
}
|
||||
else {
|
||||
headers.put(header.key(), new NonTrustedHeaderType(header.value(), requestedType));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private Object decodeValue(Header h, Class<?> type) throws IOException, LinkageError {
|
||||
ObjectMapper headerObjectMapper = getObjectMapper();
|
||||
Object value = headerObjectMapper.readValue(h.value(), type);
|
||||
if (type.equals(NonTrustedHeaderType.class)) {
|
||||
// Upstream NTHT propagated; may be trusted here...
|
||||
NonTrustedHeaderType nth = (NonTrustedHeaderType) value;
|
||||
if (trusted(nth.getUntrustedType())) {
|
||||
try {
|
||||
value = headerObjectMapper.readValue(nth.getHeaderValue(),
|
||||
ClassUtils.forName(nth.getUntrustedType(), null));
|
||||
}
|
||||
catch (Exception e) {
|
||||
logger.error(e, () -> "Could not decode header: " + nth);
|
||||
}
|
||||
}
|
||||
}
|
||||
return value;
|
||||
}
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
@Nullable
|
||||
private Map<String, String> decodeJsonTypes(Headers source) {
|
||||
Map<String, String> types = null;
|
||||
Header jsonTypes = source.lastHeader(JSON_TYPES);
|
||||
if (jsonTypes != null) {
|
||||
ObjectMapper headerObjectMapper = getObjectMapper();
|
||||
try {
|
||||
types = headerObjectMapper.readValue(jsonTypes.value(), Map.class);
|
||||
}
|
||||
catch (IOException e) {
|
||||
logger.error(e, () -> "Could not decode json types: " + new String(jsonTypes.value()));
|
||||
}
|
||||
}
|
||||
return types;
|
||||
}
|
||||
|
||||
protected boolean trusted(String requestedType) {
|
||||
if (requestedType.equals(NonTrustedHeaderType.class.getName())) {
|
||||
return true;
|
||||
}
|
||||
if (!this.trustedPackages.isEmpty()) {
|
||||
int lastDot = requestedType.lastIndexOf('.');
|
||||
if (lastDot < 0) {
|
||||
return false;
|
||||
}
|
||||
String packageName = requestedType.substring(0, lastDot);
|
||||
for (String trustedPackage : this.trustedPackages) {
|
||||
if (packageName.equals(trustedPackage) || packageName.startsWith(trustedPackage + ".")) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* The {@link StdNodeBasedDeserializer} extension for {@link MimeType} deserialization.
|
||||
* It is presented here for backward compatibility when older producers send {@link MimeType}
|
||||
* headers as serialization version.
|
||||
*/
|
||||
private class MimeTypeJsonDeserializer extends StdNodeBasedDeserializer<MimeType> {
|
||||
|
||||
private static final long serialVersionUID = 1L;
|
||||
|
||||
MimeTypeJsonDeserializer() {
|
||||
super(MimeType.class);
|
||||
}
|
||||
|
||||
@Override
|
||||
public MimeType convert(JsonNode root, DeserializationContext ctxt) throws IOException {
|
||||
if (root instanceof TextNode) {
|
||||
return MimeType.valueOf(root.asText());
|
||||
}
|
||||
else {
|
||||
JsonNode type = root.get("type");
|
||||
JsonNode subType = root.get("subtype");
|
||||
JsonNode parameters = root.get("parameters");
|
||||
Map<String, String> params =
|
||||
BinderHeaderMapper.this.objectMapper.readValue(parameters.traverse(),
|
||||
TypeFactory.defaultInstance()
|
||||
.constructMapType(HashMap.class, String.class, String.class));
|
||||
return new MimeType(type.asText(), subType.asText(), params);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Represents a header that could not be decoded due to an untrusted type.
|
||||
*/
|
||||
public static class NonTrustedHeaderType {
|
||||
|
||||
private byte[] headerValue;
|
||||
|
||||
private String untrustedType;
|
||||
|
||||
public NonTrustedHeaderType() {
|
||||
super();
|
||||
}
|
||||
|
||||
NonTrustedHeaderType(byte[] headerValue, String untrustedType) { // NOSONAR
|
||||
this.headerValue = headerValue; // NOSONAR
|
||||
this.untrustedType = untrustedType;
|
||||
}
|
||||
|
||||
|
||||
public void setHeaderValue(byte[] headerValue) { // NOSONAR
|
||||
this.headerValue = headerValue; // NOSONAR array reference
|
||||
}
|
||||
|
||||
public byte[] getHeaderValue() {
|
||||
return this.headerValue; // NOSONAR
|
||||
}
|
||||
|
||||
public void setUntrustedType(String untrustedType) {
|
||||
this.untrustedType = untrustedType;
|
||||
}
|
||||
|
||||
public String getUntrustedType() {
|
||||
return this.untrustedType;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
try {
|
||||
return "NonTrustedHeaderType [headerValue=" + new String(this.headerValue, StandardCharsets.UTF_8)
|
||||
+ ", untrustedType=" + this.untrustedType + "]";
|
||||
}
|
||||
catch (@SuppressWarnings("unused") Exception e) {
|
||||
return "NonTrustedHeaderType [headerValue=" + Arrays.toString(this.headerValue) + ", untrustedType="
|
||||
+ this.untrustedType + "]";
|
||||
}
|
||||
}
|
||||
}
|
||||
}
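
As the `KafkaMessageChannelBinder` changes below show, the binder first checks the configured `headerMapperBeanName`, then falls back to a bean named `kafkaBinderHeaderMapper`, and finally creates a `BinderHeaderMapper` itself. The following is a minimal sketch of registering a custom mapper bean; the configuration class and the trusted package are illustrative assumptions, not part of this change set:

[source,java]
----
// A minimal sketch (not part of this change set): registering a custom header mapper
// that the binder will pick up. Package and trusted-package names are illustrative.
package com.example.demo;

import org.springframework.cloud.stream.binder.kafka.BinderHeaderMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.KafkaHeaderMapper;

@Configuration
public class CustomHeaderMapperConfig {

	// Named "kafkaBinderHeaderMapper" so the binder finds it without further
	// configuration; any other bean name can be referenced through the
	// spring.cloud.stream.kafka.binder.headerMapperBeanName property instead.
	@Bean("kafkaBinderHeaderMapper")
	public KafkaHeaderMapper kafkaBinderHeaderMapper() {
		BinderHeaderMapper mapper = new BinderHeaderMapper();
		mapper.addTrustedPackages("com.example.headers"); // hypothetical package to trust
		return mapper;
	}
}
----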
|
||||
@@ -16,6 +16,7 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka;
|
||||
|
||||
import java.time.Duration;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
@@ -98,22 +99,30 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBe
|
||||
}
|
||||
}
|
||||
|
||||
private synchronized Consumer<?, ?> initMetadataConsumer() {
|
||||
if (this.metadataConsumer == null) {
|
||||
this.metadataConsumer = this.consumerFactory.createConsumer();
|
||||
}
|
||||
return this.metadataConsumer;
|
||||
}
|
||||
|
||||
private Health buildHealthStatus() {
|
||||
try {
|
||||
if (this.metadataConsumer == null) {
|
||||
synchronized (KafkaBinderHealthIndicator.this) {
|
||||
if (this.metadataConsumer == null) {
|
||||
this.metadataConsumer = this.consumerFactory.createConsumer();
|
||||
}
|
||||
}
|
||||
}
|
||||
initMetadataConsumer();
|
||||
synchronized (this.metadataConsumer) {
|
||||
Set<String> downMessages = new HashSet<>();
|
||||
final Map<String, KafkaMessageChannelBinder.TopicInformation> topicsInUse = KafkaBinderHealthIndicator.this.binder
|
||||
.getTopicsInUse();
|
||||
if (topicsInUse.isEmpty()) {
|
||||
return Health.down().withDetail("No topic information available",
|
||||
"Kafka broker is not reachable").build();
|
||||
try {
|
||||
this.metadataConsumer.listTopics(Duration.ofSeconds(this.timeout));
|
||||
}
|
||||
catch (Exception e) {
|
||||
return Health.down().withDetail("No topic information available",
|
||||
"Kafka broker is not reachable").build();
|
||||
}
|
||||
return Health.unknown().withDetail("No bindings found",
|
||||
"Kafka binder may not be bound to destinations on the broker").build();
|
||||
}
|
||||
else {
|
||||
for (String topic : topicsInUse.keySet()) {
|
||||
|
||||
@@ -174,31 +174,26 @@ public class KafkaBinderMetrics
|
||||
}
|
||||
}
|
||||
|
||||
private ConsumerFactory<?, ?> createConsumerFactory() {
|
||||
private synchronized ConsumerFactory<?, ?> createConsumerFactory() {
|
||||
if (this.defaultConsumerFactory == null) {
|
||||
synchronized (this) {
|
||||
if (this.defaultConsumerFactory == null) {
|
||||
Map<String, Object> props = new HashMap<>();
|
||||
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
|
||||
ByteArrayDeserializer.class);
|
||||
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
|
||||
ByteArrayDeserializer.class);
|
||||
Map<String, Object> mergedConfig = this.binderConfigurationProperties
|
||||
.mergedConsumerConfiguration();
|
||||
if (!ObjectUtils.isEmpty(mergedConfig)) {
|
||||
props.putAll(mergedConfig);
|
||||
}
|
||||
if (!props.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
|
||||
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
|
||||
this.binderConfigurationProperties
|
||||
.getKafkaConnectionString());
|
||||
}
|
||||
this.defaultConsumerFactory = new DefaultKafkaConsumerFactory<>(
|
||||
props);
|
||||
}
|
||||
Map<String, Object> props = new HashMap<>();
|
||||
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
|
||||
ByteArrayDeserializer.class);
|
||||
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
|
||||
ByteArrayDeserializer.class);
|
||||
Map<String, Object> mergedConfig = this.binderConfigurationProperties
|
||||
.mergedConsumerConfiguration();
|
||||
if (!ObjectUtils.isEmpty(mergedConfig)) {
|
||||
props.putAll(mergedConfig);
|
||||
}
|
||||
if (!props.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
|
||||
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
|
||||
this.binderConfigurationProperties
|
||||
.getKafkaConnectionString());
|
||||
}
|
||||
this.defaultConsumerFactory = new DefaultKafkaConsumerFactory<>(
|
||||
props);
|
||||
}
|
||||
|
||||
return this.defaultConsumerFactory;
|
||||
}
|
||||
|
||||
|
||||
@@ -51,6 +51,7 @@ import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.NoSuchBeanDefinitionException;
import org.springframework.cloud.stream.binder.AbstractMessageChannelBinder;
@@ -69,6 +70,7 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerPro
import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.cloud.stream.binding.MessageConverterConfigurer.PartitioningInterceptor;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.config.MessageSourceCustomizer;
@@ -99,7 +101,6 @@ import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;
import org.springframework.kafka.listener.ConsumerProperties;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.DefaultKafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.ProducerListener;
@@ -198,6 +199,8 @@ public class KafkaMessageChannelBinder extends

private final KafkaBindingRebalanceListener rebalanceListener;

private final DlqPartitionFunction dlqPartitionFunction;

private ProducerListener<byte[], byte[]> producerListener;

private KafkaExtendedBindingProperties extendedBindingProperties = new KafkaExtendedBindingProperties();
@@ -206,7 +209,7 @@ public class KafkaMessageChannelBinder extends
KafkaBinderConfigurationProperties configurationProperties,
KafkaTopicProvisioner provisioningProvider) {

this(configurationProperties, provisioningProvider, null, null, null);
this(configurationProperties, provisioningProvider, null, null, null, null);
}

public KafkaMessageChannelBinder(
@@ -215,7 +218,7 @@ public class KafkaMessageChannelBinder extends
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> containerCustomizer,
KafkaBindingRebalanceListener rebalanceListener) {

this(configurationProperties, provisioningProvider, containerCustomizer, null, rebalanceListener);
this(configurationProperties, provisioningProvider, containerCustomizer, null, rebalanceListener, null);
}

public KafkaMessageChannelBinder(
@@ -223,7 +226,8 @@ public class KafkaMessageChannelBinder extends
KafkaTopicProvisioner provisioningProvider,
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> containerCustomizer,
MessageSourceCustomizer<KafkaMessageSource<?, ?>> sourceCustomizer,
KafkaBindingRebalanceListener rebalanceListener) {
KafkaBindingRebalanceListener rebalanceListener,
DlqPartitionFunction dlqPartitionFunction) {

super(headersToMap(configurationProperties), provisioningProvider,
containerCustomizer, sourceCustomizer);
@@ -239,6 +243,9 @@ public class KafkaMessageChannelBinder extends
this.transactionManager = null;
}
this.rebalanceListener = rebalanceListener;
this.dlqPartitionFunction = dlqPartitionFunction != null
? dlqPartitionFunction
: null;
}
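
The new `dlqPartitionFunction` collaborator lets an application decide which partition of the dead-letter topic a failed record is published to. Below is a minimal sketch of supplying such a bean; the functional signature assumed here (consumer group, failed record, exception, returning the target partition) should be verified against the released `DlqPartitionFunction` API.

[source,java]
----
// A minimal sketch (not part of this change set): route all dead-lettered records
// to partition 0. Assumes DlqPartitionFunction is a functional interface taking the
// consumer group, the failed ConsumerRecord and the Throwable, returning the partition.
package com.example.demo;

import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqPartitionConfig {

	@Bean
	public DlqPartitionFunction dlqPartitionFunction() {
		return (group, record, throwable) -> 0; // every DLQ record goes to partition 0
	}
}
----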
|
||||
|
||||
private static String[] headersToMap(
|
||||
@@ -377,12 +384,23 @@ public class KafkaMessageChannelBinder extends
|
||||
if (StringUtils.hasText(producerProperties.getExtension().getRecordMetadataChannel())) {
|
||||
handler.setSendSuccessChannelName(producerProperties.getExtension().getRecordMetadataChannel());
|
||||
}
|
||||
|
||||
KafkaHeaderMapper mapper = null;
|
||||
if (this.configurationProperties.getHeaderMapperBeanName() != null) {
|
||||
mapper = getApplicationContext().getBean(
|
||||
this.configurationProperties.getHeaderMapperBeanName(),
|
||||
KafkaHeaderMapper.class);
|
||||
}
|
||||
if (mapper == null) {
|
||||
//First, try to see if there is a bean named headerMapper registered by other frameworks using the binder (for e.g. spring cloud sleuth)
|
||||
try {
|
||||
mapper = getApplicationContext().getBean("kafkaBinderHeaderMapper", KafkaHeaderMapper.class);
|
||||
}
|
||||
catch (BeansException be) {
|
||||
// Pass through
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Even if the user configures a bean, we must not use it if the header mode is
|
||||
* not the default (headers); setting the mapper to null disables populating
|
||||
@@ -403,11 +421,11 @@ public class KafkaMessageChannelBinder extends
|
||||
if (!patterns.contains("!" + MessageHeaders.ID)) {
|
||||
patterns.add(0, "!" + MessageHeaders.ID);
|
||||
}
|
||||
mapper = new DefaultKafkaHeaderMapper(
|
||||
mapper = new BinderHeaderMapper(
|
||||
patterns.toArray(new String[patterns.size()]));
|
||||
}
|
||||
else {
|
||||
mapper = new DefaultKafkaHeaderMapper();
|
||||
mapper = new BinderHeaderMapper();
|
||||
}
|
||||
}
|
||||
handler.setHeaderMapper(mapper);
|
||||
@@ -899,23 +917,29 @@ public class KafkaMessageChannelBinder extends
KafkaHeaderMapper.class);
}
if (mapper == null) {
DefaultKafkaHeaderMapper headerMapper = new DefaultKafkaHeaderMapper() {

@Override
public void toHeaders(Headers source, Map<String, Object> headers) {
super.toHeaders(source, headers);
if (headers.size() > 0) {
headers.put(BinderHeaders.NATIVE_HEADERS_PRESENT, Boolean.TRUE);
}
}

};
String[] trustedPackages = extendedConsumerProperties.getExtension()
.getTrustedPackages();
if (!StringUtils.isEmpty(trustedPackages)) {
headerMapper.addTrustedPackages(trustedPackages);
//First, try to see if there is a bean named headerMapper registered by other frameworks using the binder (for e.g. spring cloud sleuth)
try {
mapper = getApplicationContext().getBean("kafkaBinderHeaderMapper", KafkaHeaderMapper.class);
}
catch (BeansException be) {
BinderHeaderMapper headerMapper = new BinderHeaderMapper() {

@Override
public void toHeaders(Headers source, Map<String, Object> headers) {
super.toHeaders(source, headers);
if (headers.size() > 0) {
headers.put(BinderHeaders.NATIVE_HEADERS_PRESENT, Boolean.TRUE);
}
}

};
String[] trustedPackages = extendedConsumerProperties.getExtension()
.getTrustedPackages();
if (!StringUtils.isEmpty(trustedPackages)) {
headerMapper.addTrustedPackages(trustedPackages);
}
mapper = headerMapper;
}
mapper = headerMapper;
}
return mapper;
}
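Note on the change above: when no `headerMapperBeanName` is configured, the binder now first looks for a bean named `kafkaBinderHeaderMapper` of type `KafkaHeaderMapper` before creating its own `BinderHeaderMapper`. A minimal sketch of supplying such a bean from an application follows; the configuration class name and the `com.acme.events` trusted package are illustrative only and not part of this changeset, and `BinderHeaderMapper` is assumed to live in the binder's root package.

[source,java]
----
import org.springframework.cloud.stream.binder.kafka.BinderHeaderMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.KafkaHeaderMapper;

@Configuration
public class CustomHeaderMapperConfiguration {

	// "kafkaBinderHeaderMapper" is the bean name the binder looks up as a fallback.
	@Bean
	public KafkaHeaderMapper kafkaBinderHeaderMapper() {
		BinderHeaderMapper mapper = new BinderHeaderMapper();
		// Illustrative trusted package for JSON-deserialized headers.
		mapper.addTrustedPackages("com.acme.events");
		return mapper;
	}

}
----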
@@ -942,6 +966,7 @@ public class KafkaMessageChannelBinder extends
protected MessageHandler getErrorMessageHandler(final ConsumerDestination destination,
final String group,
final ExtendedConsumerProperties<KafkaConsumerProperties> properties) {

KafkaConsumerProperties kafkaConsumerProperties = properties.getExtension();
if (kafkaConsumerProperties.isEnableDlq()) {
KafkaProducerProperties dlqProducerProperties = kafkaConsumerProperties

@@ -990,9 +1015,10 @@ public class KafkaMessageChannelBinder extends
Headers kafkaHeaders = new RecordHeaders(record.headers().toArray());
AtomicReference<ConsumerRecord<?, ?>> recordToSend = new AtomicReference<>(
record);
Throwable throwable = null;
if (message.getPayload() instanceof Throwable) {

Throwable throwable = (Throwable) message.getPayload();
throwable = (Throwable) message.getPayload();

HeaderMode headerMode = properties.getHeaderMode();

@@ -1056,12 +1082,22 @@ public class KafkaMessageChannelBinder extends
String dlqName = StringUtils.hasText(kafkaConsumerProperties.getDlqName())
? kafkaConsumerProperties.getDlqName()
: "error." + record.topic() + "." + group;
dlqSender.sendToDlq(recordToSend.get(), kafkaHeaders, dlqName);
dlqSender.sendToDlq(recordToSend.get(), kafkaHeaders, dlqName, group, throwable,
determinDlqPartitionFunction(properties.getExtension().getDlqPartitions()));
};
}
return null;
}

private DlqPartitionFunction determinDlqPartitionFunction(Integer dlqPartitions) {
if (this.dlqPartitionFunction != null) {
return this.dlqPartitionFunction;
}
else {
return DlqPartitionFunction.determineFallbackFunction(dlqPartitions, this.logger);
}
}

@Override
protected MessageHandler getPolledConsumerErrorMessageHandler(
ConsumerDestination destination, String group,

@@ -1309,11 +1345,12 @@ public class KafkaMessageChannelBinder extends

@SuppressWarnings("unchecked")
void sendToDlq(ConsumerRecord<?, ?> consumerRecord, Headers headers,
String dlqName) {
String dlqName, String group, Throwable throwable, DlqPartitionFunction partitionFunction) {
K key = (K) consumerRecord.key();
V value = (V) consumerRecord.value();
ProducerRecord<K, V> producerRecord = new ProducerRecord<>(dlqName,
consumerRecord.partition(), key, value, headers);
partitionFunction.apply(group, consumerRecord, throwable),
key, value, headers);

StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key), 50))
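The `DlqPartitionFunction` introduced above decides which partition of the DLQ topic a failed record is published to; when the binder has no such function, it falls back to `DlqPartitionFunction.determineFallbackFunction(...)`. A minimal sketch of providing one as a bean follows (the binder configuration shown next resolves it via `ObjectProvider#getIfUnique()`); routing every dead-lettered record to partition 0 is only an illustration, and the configuration class name is hypothetical.

[source,java]
----
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqPartitionConfiguration {

	// Single DlqPartitionFunction bean; picked up by KafkaBinderConfiguration
	// and passed into KafkaMessageChannelBinder (see the diff that follows).
	@Bean
	public DlqPartitionFunction dlqPartitionFunction() {
		// Arguments mirror the sendToDlq() call above: consumer group, failed record, cause.
		return (group, record, throwable) -> 0;
	}

}
----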
@@ -39,14 +39,17 @@ import org.springframework.cloud.stream.binder.kafka.properties.JaasLoginModuleC
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.config.MessageSourceCustomizer;
import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.integration.kafka.inbound.KafkaMessageSource;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.security.jaas.KafkaJaasLoginModuleInitializer;
import org.springframework.kafka.support.LoggingProducerListener;

@@ -103,14 +106,18 @@ public class KafkaBinderConfiguration {
KafkaTopicProvisioner provisioningProvider,
@Nullable ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> listenerContainerCustomizer,
@Nullable MessageSourceCustomizer<KafkaMessageSource<?, ?>> sourceCustomizer,
ObjectProvider<KafkaBindingRebalanceListener> rebalanceListener) {
@Nullable ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> messageHandlerCustomizer,
ObjectProvider<KafkaBindingRebalanceListener> rebalanceListener,
ObjectProvider<DlqPartitionFunction> dlqPartitionFunction) {

KafkaMessageChannelBinder kafkaMessageChannelBinder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider,
listenerContainerCustomizer, sourceCustomizer, rebalanceListener.getIfUnique());
listenerContainerCustomizer, sourceCustomizer, rebalanceListener.getIfUnique(),
dlqPartitionFunction.getIfUnique());
kafkaMessageChannelBinder.setProducerListener(this.producerListener);
kafkaMessageChannelBinder
.setExtendedBindingProperties(this.kafkaExtendedBindingProperties);
kafkaMessageChannelBinder.setProducerMessageHandlerCustomizer(messageHandlerCustomizer);
return kafkaMessageChannelBinder;
}
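With the change above, the binder factory method also accepts an optional `ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>>` and applies it via `setProducerMessageHandlerCustomizer(...)`. A minimal sketch of such a bean follows; the `setSync(true)` call is just an illustrative, assumed customization of spring-integration-kafka's producer handler rather than anything this changeset configures, and the class name is hypothetical (the actuator test further down registers a similar customizer).

[source,java]
----
import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;

@Configuration
public class ProducerHandlerCustomizerConfiguration {

	// Invoked once per producer binding with the handler and the destination name.
	@Bean
	public ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> producerHandlerCustomizer() {
		return (handler, destinationName) -> handler.setSync(true);
	}

}
----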
@@ -44,9 +44,9 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(classes = { KafkaBinderConfiguration.class,
BindingServiceConfiguration.class })
@TestPropertySource(properties = {
"spring.cloud.stream.kafka.bindings.input.consumer.admin.replication-factor=2",
"spring.cloud.stream.kafka.bindings.input.consumer.admin.replicas-assignments.0=0,1",
"spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0",
"spring.cloud.stream.kafka.bindings.input.consumer.topic.replication-factor=2",
"spring.cloud.stream.kafka.bindings.input.consumer.topic.replicas-assignments.0=0,1",
"spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0",
"spring.cloud.stream.kafka.bindings.secondInput.consumer.topic.replication-factor=3",
"spring.cloud.stream.kafka.bindings.secondInput.consumer.topic.replicas-assignments.0=0,1",
"spring.cloud.stream.kafka.bindings.secondInput.consumer.topic.properties.message.format.version=0.9.1.0",

@@ -60,19 +60,6 @@ public class AdminConfigTests {
@Autowired
private KafkaMessageChannelBinder binder;

@Test
public void testDeprecatedAdminConfigurationToMapTopicProperties() {
final KafkaConsumerProperties consumerProps = this.binder
.getExtendedConsumerProperties("input");
final KafkaTopicProperties kafkaTopicProperties = consumerProps.getTopic();

assertThat(kafkaTopicProperties.getReplicationFactor()).isEqualTo((short) 2);
assertThat(kafkaTopicProperties.getReplicasAssignments().get(0))
.isEqualTo(Arrays.asList(0, 1));
assertThat(kafkaTopicProperties.getProperties().get("message.format.version"))
.isEqualTo("0.9.0.0");
}

@Test
public void testConsumerTopicProperties() {
final KafkaConsumerProperties consumerProperties = this.binder

@@ -159,7 +159,7 @@ public class KafkaBinderHealthIndicatorTest {
@Test
public void testIfNoTopicsRegisteredByTheBinderProvidesDownStatus() {
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.DOWN);
assertThat(health.getStatus()).isEqualTo(Status.UNKNOWN);
}

private List<PartitionInfo> partitions(Node leader) {
@@ -52,6 +52,8 @@ import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.record.TimestampType;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

@@ -63,6 +65,7 @@ import org.assertj.core.api.Assertions;
import org.assertj.core.api.Condition;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

@@ -86,6 +89,7 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfi
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.binding.MessageConverterConfigurer.PartitioningInterceptor;
import org.springframework.cloud.stream.config.BindingProperties;

@@ -114,6 +118,7 @@ import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionOffset;

@@ -133,7 +138,6 @@ import org.springframework.messaging.support.ErrorMessage;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;
@@ -209,8 +213,16 @@ public class KafkaBinderTests extends
return binder;
}

private Binder getBinder(
private KafkaTestBinder getBinder(
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {

return getBinder(kafkaBinderConfigurationProperties, null);
}

private KafkaTestBinder getBinder(
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
DlqPartitionFunction dlqPartitionFunction) {

KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(
kafkaBinderConfigurationProperties, new TestKafkaProperties());
try {

@@ -220,7 +232,7 @@ public class KafkaBinderTests extends
throw new RuntimeException(e);
}
return new KafkaTestBinder(kafkaBinderConfigurationProperties,
provisioningProvider);
provisioningProvider, dlqPartitionFunction);
}

private KafkaBinderConfigurationProperties createConfigurationProperties() {

@@ -316,7 +328,7 @@ public class KafkaBinderTests extends

@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testTrustedPackages() throws Exception {
public void testDefaultHeaderMapper() throws Exception {
Binder binder = getBinder();

BindingProperties producerBindingProperties = createProducerBindingProperties(

@@ -326,7 +338,7 @@ public class KafkaBinderTests extends

ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.getExtension()
.setTrustedPackages(new String[] { "org.springframework.util" });
.setTrustedPackages(new String[] { "org.springframework.cloud.stream.binder.kafka" });

DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));

@@ -339,9 +351,92 @@ public class KafkaBinderTests extends
consumerProperties);
binderBindUnbindLatency();

final Pojo pojoHeader = new Pojo("testing");
Message<?> message = org.springframework.integration.support.MessageBuilder
.withPayload("foo")
.setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.TEXT_PLAIN)
.setHeader("foo", pojoHeader).build();

moduleOutputChannel.send(message);
CountDownLatch latch = new CountDownLatch(1);
AtomicReference<Message<byte[]>> inboundMessageRef = new AtomicReference<>();
moduleInputChannel.subscribe(message1 -> {
try {
inboundMessageRef.set((Message<byte[]>) message1);
}
finally {
latch.countDown();
}
});
Assert.isTrue(latch.await(5, TimeUnit.SECONDS), "Failed to receive message");

Assertions.assertThat(inboundMessageRef.get()).isNotNull();
Assertions.assertThat(inboundMessageRef.get().getPayload())
.isEqualTo("foo".getBytes());
Assertions
.assertThat(inboundMessageRef.get().getHeaders()
.get(MessageHeaders.CONTENT_TYPE))
.isEqualTo(MimeTypeUtils.TEXT_PLAIN);
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("foo"))
.isInstanceOf(Pojo.class);
Pojo actual = (Pojo) inboundMessageRef.get().getHeaders().get("foo");
Assertions.assertThat(actual.field).isEqualTo(pojoHeader.field);
producerBinding.unbind();
consumerBinding.unbind();
}

@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testCustomHeaderMapper() throws Exception {

KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();
binderConfiguration.setHeaderMapperBeanName("headerMapper");

KafkaTopicProvisioner kafkaTopicProvisioner = new KafkaTopicProvisioner(
binderConfiguration, new TestKafkaProperties());
try {
kafkaTopicProvisioner.afterPropertiesSet();
}
catch (Exception e) {
throw new RuntimeException(e);
}
KafkaTestBinder binder = new KafkaTestBinder(binderConfiguration, kafkaTopicProvisioner);
((GenericApplicationContext) binder.getApplicationContext()).registerBean("headerMapper",
KafkaHeaderMapper.class, () -> new KafkaHeaderMapper() {
@Override
public void fromHeaders(MessageHeaders headers, Headers target) {
target.add(new RecordHeader("custom-header", "foobar".getBytes()));
}

@Override
public void toHeaders(Headers source, Map<String, Object> target) {
if (source.headers("custom-header").iterator().hasNext()) {
target.put("custom-header", source.headers("custom-header").iterator().next().value());
}

}
});

ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();

DirectChannel moduleOutputChannel = createBindableChannel("output",
createProducerBindingProperties(producerProperties));

ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();

DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));

Binding<MessageChannel> producerBinding = binder.bindProducer("bar.0",
moduleOutputChannel, producerProperties);

Binding<MessageChannel> consumerBinding = binder.bindConsumer("bar.0",
"testSendAndReceiveNoOriginalContentType", moduleInputChannel,
consumerProperties);
binderBindUnbindLatency();

Message<?> message = org.springframework.integration.support.MessageBuilder
.withPayload("foo")
.setHeader("foo", MimeTypeUtils.TEXT_PLAIN).build();

moduleOutputChannel.send(message);

@@ -360,16 +455,88 @@ public class KafkaBinderTests extends
Assertions.assertThat(inboundMessageRef.get()).isNotNull();
Assertions.assertThat(inboundMessageRef.get().getPayload())
.isEqualTo("foo".getBytes());
Assertions.assertThat(inboundMessageRef.get().getHeaders()
.get(BinderHeaders.BINDER_ORIGINAL_CONTENT_TYPE)).isNull();
Assertions
.assertThat(inboundMessageRef.get().getHeaders()
.get(MessageHeaders.CONTENT_TYPE))
.isEqualTo(MimeTypeUtils.TEXT_PLAIN);
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("foo"))
.isInstanceOf(MimeType.class);
MimeType actual = (MimeType) inboundMessageRef.get().getHeaders().get("foo");
Assertions.assertThat(actual).isEqualTo(MimeTypeUtils.TEXT_PLAIN);
.isNull();
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("custom-header"))
.isEqualTo("foobar".getBytes());
producerBinding.unbind();
consumerBinding.unbind();
}

@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testWellKnownHeaderMapperWithBeanNameKafkaHeaderMapper() throws Exception {

KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();

KafkaTopicProvisioner kafkaTopicProvisioner = new KafkaTopicProvisioner(
binderConfiguration, new TestKafkaProperties());
try {
kafkaTopicProvisioner.afterPropertiesSet();
}
catch (Exception e) {
throw new RuntimeException(e);
}
KafkaTestBinder binder = new KafkaTestBinder(binderConfiguration, kafkaTopicProvisioner);
((GenericApplicationContext) binder.getApplicationContext()).registerBean("kafkaBinderHeaderMapper",
KafkaHeaderMapper.class, () -> new BinderHeaderMapper() {
@Override
public void fromHeaders(MessageHeaders headers, Headers target) {
target.add(new RecordHeader("custom-header", "foobar".getBytes()));
super.fromHeaders(headers, target);
}

@Override
public void toHeaders(Headers source, Map<String, Object> target) {
if (source.headers("custom-header").iterator().hasNext()) {
target.put("custom-header", source.headers("custom-header").iterator().next().value());
}

}
});

ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();

DirectChannel moduleOutputChannel = createBindableChannel("output",
createProducerBindingProperties(producerProperties));

ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();

DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));

Binding<MessageChannel> producerBinding = binder.bindProducer("bar.0",
moduleOutputChannel, producerProperties);

Binding<MessageChannel> consumerBinding = binder.bindConsumer("bar.0",
"testSendAndReceiveNoOriginalContentType", moduleInputChannel,
consumerProperties);
binderBindUnbindLatency();

Message<?> message = org.springframework.integration.support.MessageBuilder
.withPayload("foo")
.setHeader("foo", MimeTypeUtils.TEXT_PLAIN).build();

moduleOutputChannel.send(message);
CountDownLatch latch = new CountDownLatch(1);
AtomicReference<Message<byte[]>> inboundMessageRef = new AtomicReference<>();
moduleInputChannel.subscribe(message1 -> {
try {
inboundMessageRef.set((Message<byte[]>) message1);
}
finally {
latch.countDown();
}
});
Assert.isTrue(latch.await(5, TimeUnit.SECONDS), "Failed to receive message");

Assertions.assertThat(inboundMessageRef.get()).isNotNull();
Assertions.assertThat(inboundMessageRef.get().getPayload())
.isEqualTo("foo".getBytes());
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("foo"))
.isNull();
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("custom-header"))
.isEqualTo("foobar".getBytes());
producerBinding.unbind();
consumerBinding.unbind();
}
@@ -538,6 +705,7 @@ public class KafkaBinderTests extends

@Test
@SuppressWarnings("unchecked")
@Ignore
public void testDlqWithNativeSerializationEnabledOnDlqProducer() throws Exception {
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();

@@ -694,31 +862,44 @@ public class KafkaBinderTests extends

@Test
public void testDlqAndRetry() throws Exception {
testDlqGuts(true, null);
testDlqGuts(true, null, null);
}

@Test
public void testDlq() throws Exception {
testDlqGuts(false, null);
testDlqGuts(false, null, 3);
}

@Test
public void testDlqNone() throws Exception {
testDlqGuts(false, HeaderMode.none);
testDlqGuts(false, HeaderMode.none, 1);
}

@Test
public void testDlqEmbedded() throws Exception {
testDlqGuts(false, HeaderMode.embeddedHeaders);
testDlqGuts(false, HeaderMode.embeddedHeaders, 3);
}

private void testDlqGuts(boolean withRetry, HeaderMode headerMode) throws Exception {
AbstractKafkaTestBinder binder = getBinder();
private void testDlqGuts(boolean withRetry, HeaderMode headerMode, Integer dlqPartitions) throws Exception {
int expectedDlqPartition = dlqPartitions == null ? 0 : dlqPartitions - 1;
KafkaBinderConfigurationProperties binderConfig = createConfigurationProperties();
DlqPartitionFunction dlqPartitionFunction;
if (Integer.valueOf(1).equals(dlqPartitions)) {
dlqPartitionFunction = null; // test that ZERO_PARTITION is used
}
else if (dlqPartitions == null) {
dlqPartitionFunction = (group, rec, ex) -> 0;
}
else {
dlqPartitionFunction = (group, rec, ex) -> dlqPartitions - 1;
}
AbstractKafkaTestBinder binder = getBinder(binderConfig, dlqPartitionFunction);

ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.getExtension()
.setHeaderPatterns(new String[] { MessageHeaders.CONTENT_TYPE });
producerProperties.setHeaderMode(headerMode);
producerProperties.setPartitionCount(2);

DirectChannel moduleOutputChannel = createBindableChannel("output",
createProducerBindingProperties(producerProperties));

@@ -731,6 +912,8 @@ public class KafkaBinderTests extends
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
consumerProperties.setHeaderMode(headerMode);
consumerProperties.setMultiplex(true);
consumerProperties.getExtension().setDlqPartitions(dlqPartitions);
consumerProperties.setConcurrency(2);

DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));

@@ -750,8 +933,19 @@ public class KafkaBinderTests extends

MessageListenerContainer container = TestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer", MessageListenerContainer.class);
assertThat(container.getContainerProperties().getTopicPartitions().length)
.isEqualTo(2);
assertThat(container.getContainerProperties().getTopicPartitionsToAssign().length)
.isEqualTo(4); // 2 topics 2 partitions each

try (AdminClient admin = AdminClient.create(Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {

Map<String, TopicDescription> topicDescriptions = admin.describeTopics(Collections.singletonList("error.dlqTest." + uniqueBindingId + ".0.testGroup"))
.all()
.get(10, TimeUnit.SECONDS);
assertThat(topicDescriptions).hasSize(1);
assertThat(topicDescriptions.values().iterator().next().partitions())
.hasSize(dlqPartitions == null ? 2 : dlqPartitions);
}

ExtendedConsumerProperties<KafkaConsumerProperties> dlqConsumerProperties = createConsumerProperties();
dlqConsumerProperties.setMaxAttempts(1);

@@ -780,7 +974,9 @@ public class KafkaBinderTests extends
binderBindUnbindLatency();
String testMessagePayload = "test." + UUID.randomUUID().toString();
Message<byte[]> testMessage = MessageBuilder
.withPayload(testMessagePayload.getBytes()).build();
.withPayload(testMessagePayload.getBytes())
.setHeader(KafkaHeaders.PARTITION_ID, 1)
.build();
moduleOutputChannel.send(testMessage);

Message<?> receivedMessage = receive(dlqChannel, 3);

@@ -795,7 +991,7 @@ public class KafkaBinderTests extends
.isEqualTo(producerName);

assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_PARTITION)).isEqualTo(0);
.get(KafkaMessageChannelBinder.X_ORIGINAL_PARTITION)).isEqualTo(1);

assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_OFFSET)).isEqualTo(0);

@@ -815,6 +1011,8 @@ public class KafkaBinderTests extends
.get(KafkaMessageChannelBinder.X_EXCEPTION_STACKTRACE)).isNotNull();
assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_EXCEPTION_FQCN)).isNotNull();
assertThat(receivedMessage.getHeaders()
.get(KafkaHeaders.RECEIVED_PARTITION_ID)).isEqualTo(expectedDlqPartition);
}
else if (!HeaderMode.none.equals(headerMode)) {
assertThat(handler.getInvocationCount())

@@ -826,7 +1024,7 @@ public class KafkaBinderTests extends

assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_PARTITION)).isEqualTo(
ByteBuffer.allocate(Integer.BYTES).putInt(0).array());
ByteBuffer.allocate(Integer.BYTES).putInt(1).array());

assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_ORIGINAL_OFFSET)).isEqualTo(

@@ -849,6 +1047,9 @@ public class KafkaBinderTests extends

assertThat(receivedMessage.getHeaders()
.get(KafkaMessageChannelBinder.X_EXCEPTION_FQCN)).isNotNull();

assertThat(receivedMessage.getHeaders()
.get(KafkaHeaders.RECEIVED_PARTITION_ID)).isEqualTo(expectedDlqPartition);
}
else {
assertThat(receivedMessage.getHeaders()

@@ -3273,7 +3474,7 @@ public class KafkaBinderTests extends
private final class FailingInvocationCountingMessageHandler
implements MessageHandler {

private int invocationCount;
private volatile int invocationCount;

private final LinkedHashMap<Long, Message<?>> receivedMessages = new LinkedHashMap<>();
@@ -20,6 +20,7 @@ import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Configuration;

@@ -38,12 +39,19 @@ import org.springframework.kafka.support.ProducerListener;
*/
public class KafkaTestBinder extends AbstractKafkaTestBinder {

@SuppressWarnings({ "rawtypes", "unchecked" })
KafkaTestBinder(KafkaBinderConfigurationProperties binderConfiguration,
KafkaTopicProvisioner kafkaTopicProvisioner) {

this(binderConfiguration, kafkaTopicProvisioner, null);
}

@SuppressWarnings({ "rawtypes", "unchecked" })
KafkaTestBinder(KafkaBinderConfigurationProperties binderConfiguration,
KafkaTopicProvisioner kafkaTopicProvisioner, DlqPartitionFunction dlqPartitionFunction) {

try {
KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(
binderConfiguration, kafkaTopicProvisioner) {
binderConfiguration, kafkaTopicProvisioner, null, null, null, dlqPartitionFunction) {

/*
* Some tests use multiple instance indexes for the same topic; we need to
@@ -41,9 +41,12 @@ import org.springframework.cloud.stream.binder.PollableMessageSource;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.config.MessageSourceCustomizer;
import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.kafka.inbound.KafkaMessageSource;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;

@@ -58,6 +61,7 @@ import static org.assertj.core.api.Assertions.assertThat;
* @author Oleg Zhurakousky
* @author Jon Schneider
* @author Gary Russell
*
* @since 2.0
*/
@RunWith(SpringRunner.class)

@@ -126,10 +130,18 @@ public class KafkaBinderActuatorTests {
consumerBindings.get("source").get(0)).getPropertyValue(
"lifecycle.beanName"))
.isEqualTo("setByCustomizer:source");

Map<String, Binding<MessageChannel>> producerBindings = (Map<String, Binding<MessageChannel>>) channelBindingServiceAccessor
.getPropertyValue("producerBindings");

assertThat(new DirectFieldAccessor(
producerBindings.get("output")).getPropertyValue(
"lifecycle.beanName"))
.isEqualTo("setByCustomizer:output");
});
}

@EnableBinding({ Sink.class, PMS.class })
@EnableBinding({ Processor.class, PMS.class })
@EnableAutoConfiguration
public static class KafkaMetricsTestConfig {

@@ -143,6 +155,11 @@ public class KafkaBinderActuatorTests {
return (s, q, g) -> s.setBeanName("setByCustomizer:" + q);
}

@Bean
public ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> handlerCustomizer() {
return (handler, destinationName) -> handler.setBeanName("setByCustomizer:" + destinationName);
}

@StreamListener(Sink.INPUT)
public void process(@SuppressWarnings("unused") String payload) throws InterruptedException {
// Artificial slow listener to emulate consumer lag