Compare commits


33 Commits

Author SHA1 Message Date
Soby Chacko
74a044b0c0 2.1.0.M3 Release 2018-09-21 11:00:17 -04:00
Soby Chacko
70f385553a Application ID resolution for Kafka streams binder (#450)
* Application ID resolution for kafka streams binder

Avoid the need to rely on the group property for the application id.
First, check the binding-specific application id; if not set, look at the defaults.
If nothing works, fall back to the default application id set by the
Boot auto-configuration.
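For illustration, the resolution order in property form — a hedged sketch; the binding name `input` and the values are assumptions:

[source]
----
# 1. Binding-specific application id wins when present
spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=per-binding-app-id
# 2. Otherwise the binder-level default applies
spring.cloud.stream.kafka.streams.binder.applicationId=binder-level-app-id
# 3. Failing both, the id from Boot auto-configuration is used
spring.kafka.streams.application-id=boot-default-app-id
----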

Modify tests.
Update docs.

Resolves #448

* Addressing PR review comments

* Minor polishing

* Addressing PR review comments
2018-09-21 10:17:34 -04:00
Soby Chacko
f5739a9c7f Missing beans registration in Kafka Streams binder
There is duplicate code in various binder configurations where we register missing beans.
Consolidate it into a common class that implements ImportBeanDefinitionRegistrar and then
import this class in the binder configurations.
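A minimal sketch of the pattern (the class and bean names here are hypothetical, not the binder's actual registrar):

[source,java]
----
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.context.annotation.ImportBeanDefinitionRegistrar;
import org.springframework.core.type.AnnotationMetadata;

class SomeSharedBean { } // hypothetical placeholder for a bean every binder configuration needs

public class MissingBeansRegistrar implements ImportBeanDefinitionRegistrar {

	@Override
	public void registerBeanDefinitions(AnnotationMetadata metadata, BeanDefinitionRegistry registry) {
		// Register the shared bean only when no other configuration has provided it.
		if (!registry.containsBeanDefinition("someSharedBean")) {
			registry.registerBeanDefinition("someSharedBean",
					BeanDefinitionBuilder.genericBeanDefinition(SomeSharedBean.class).getBeanDefinition());
		}
	}
}
----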

Resolves #445
2018-09-20 11:28:32 -04:00
Soby Chacko
39f5490488 Refactor bootstrap server configuration
There is a common piece of code repeated in both the producer and consumer
configuration where it populates the bootstrap server configuration.
Refactor it into a common method.

Resolves #208
2018-09-19 16:15:19 -04:00
Soby Chacko
d445a2bff9 Docs for GlobalKTable binding
Resolves #443
2018-09-19 15:05:48 -04:00
Oleg Zhurakousky
5446485cbf Updating with changes related to core GH-1484
Resolves #447
2018-09-19 10:45:04 +02:00
Soby Chacko
cc61d916b8 Extended default properties
Allow applications to configure default values for extended producer and
consumer properties across multiple bindings in order to avoid repetition.
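In property form, the idea looks roughly like this (the binding name `input` and the chosen property are assumptions):

[source]
----
# default applied to every Kafka binding unless overridden
spring.cloud.stream.kafka.default.consumer.autoCommitOffset=false
# binding-specific value that overrides the default for the "input" binding
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=true
----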

Address the changes in both Kafka and Kafka Streams binders.

Add integration test to verify both binding specific and default extended properties.

Resolves #444
Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1477
2018-09-17 20:09:44 -04:00
Soby Chacko
41d9136812 GH-384: Binding support for GlobalKTable
Resolves spring-cloud/spring-cloud-stream-binder-kafka#384

* New binding targets and binder implementation for GlobalKTable within kafka-streams binder
* Refactoring existing structure to accommodate the new binder
* Adding integration test to verify the GlobalKTable behavior

Resolves #384

* Addressing PR review comments

* Update spring-kafka to 2.2.0.M3
Addressing PR review comments
Polishing

* Addressing PR review comments

* Addressing PR review comments
2018-09-14 10:51:15 -04:00
Soby Chacko
71a6a6cf28 Package structure changes for StreamsBuilderFactoryBean 2018-09-12 12:05:52 -04:00
Gary Russell
36361d19bf GH-1459: Pollable Consumer and Requeue
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1459

Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1467
2018-09-10 10:22:04 -04:00
Soby Chacko
e869516e3d Handling tombstones in Kafka Streams binder
Handle tombstones gracefully in the Kafka Streams binder.
Modify tests to verify.
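A tombstone is a record whose value is `null`, conventionally used to signal deletion of a key. A minimal, assumed sketch of handling them in a `KStream` (not the binder's actual code):

[source,java]
----
import org.apache.kafka.streams.kstream.KStream;

class TombstoneHandling {

	// Drop tombstones (records with null values) before logic that cannot
	// handle nulls; keep them instead if deletions must be propagated.
	static KStream<String, String> withoutTombstones(KStream<String, String> input) {
		return input.filter((key, value) -> value != null);
	}
}
----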

Resolves spring-cloud/spring-cloud-stream-binder-kafka#294

* Addressing PR review comments

* Addressing PR review comments
2018-09-05 12:17:54 -04:00
Gary Russell
ea288c62eb GH-435: useNativeEncoding and Transactions
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/435

All common properties should be settable on the transactional producer.
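With the `CombinedProducerProperties` introduced below, both kinds of properties can be set under the transaction prefix — a hedged example; the values are assumptions:

[source]
----
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
# common producer property, previously not settable for transactions
spring.cloud.stream.kafka.binder.transaction.producer.useNativeEncoding=true
# Kafka-specific producer property
spring.cloud.stream.kafka.binder.transaction.producer.compressionType=snappy
----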
2018-09-04 15:56:26 -04:00
Soby Chacko
afa337f718 Remove deprecations in tests
Resolves #433
2018-09-04 15:28:36 -04:00
Soby Chacko
e4b08e888e Next update version: 2.1.0.BUILD-SNAPSHOT 2018-08-28 10:15:08 -04:00
Soby Chacko
de4b5a9443 2.1.0.M2 Release 2018-08-28 10:04:25 -04:00
Soby Chacko
a090614709 Kafka Streams binder configuration fix
(to address core changes made with regard to Boot 2.1)

Ensure that KafkaStreamsBinderSupportAutoConfiguration runs
after BindingServiceConfiguration from core.
2018-08-14 10:33:51 -04:00
Gary Russell
a3f7ca9756 Fix mock to use poll(Duration) 2018-08-07 14:45:33 -04:00
Soby Chacko
82b07a0120 Fixing checkstyle issues 2018-08-07 14:23:47 -04:00
Soby Chacko
0d0cf8dcb7 Update to spring-cloud-build 2.1.0 snapshot
Spring Boot 2.1.0 snapshot
Spring Kafka 2.2.0/3.1.0 snapshots
Apache Kafka client 2.0.0
Fixing tests
Removing deprecations and removals
Polishing

Resolves #424
2018-08-07 14:02:52 -04:00
Soby Chacko
cbfe03be2f Kafka Streams binder test disabling
Temporarily disabling KafkaBinderBootstrapTest in the Kafka Streams binder
because it makes builds take much longer.
2018-08-03 08:11:52 -04:00
Soby Chacko
1f0c6cabc6 Revert "Kafka Streams binder test disabling"
This reverts commit 0c61cedc85.
2018-08-03 08:10:59 -04:00
Soby Chacko
0c61cedc85 Kafka Streams binder test disabling
Temporarily disabling KafkaBinderBootstrapTest in the Kafka Streams binder
because it makes builds take much longer.
2018-08-02 12:59:02 -04:00
Soby Chacko
44210f1b72 Update Kafka binder metrics docs
Fix the wrong metric name used in the Kafka binder metrics for consumer offset lag.
Update the description.

Resolves #422
2018-08-02 09:57:27 -04:00
Soby Chacko
7007f9494a JAAS initializer regression
Fix the JAAS initializer by setting the missing properties.

Resolves #419

Polishing
2018-07-27 14:34:17 -04:00
Gary Russell
da268bb6dd GH-309: Use actual partition count
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/309

If more partitions exist than those configured, use the actual count.

Resolves #416
2018-07-25 20:31:27 +02:00
Soby Chacko
e03baefbaf Kafka Streams binder issues in custom environment
When the Kafka Streams binder is used in the context of a custom environment, possibly
for a multi-binder use case, it is unable to query the outer context because the normal
parent context is absent. This change ensures that the binder context has access
to the outer context so that it can use any beans it needs from it in the KStream or KTable
binder configuration.
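For context, a custom-environment, multi-binder setup is configured roughly like this (the binder name `kstreamBinder` and the values are assumptions):

[source]
----
spring.cloud.stream.binders.kstreamBinder.type=kstream
spring.cloud.stream.binders.kstreamBinder.environment.spring.cloud.stream.kafka.streams.binder.brokers=localhost:9092
----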

Resolves #411
2018-07-25 13:14:47 -04:00
Gary Russell
01396b6573 GH-413: Configure Kafka Streams Cleanup Behavior
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/413
2018-07-24 16:53:57 -04:00
Soby Chacko
3f009a8267 Autoconfigure optimization
Add spring-boot-autoconfigure-processor to the kafka-streams binder for
auto configuration optimization.

Resolves #406
2018-07-24 15:56:48 -04:00
Gary Russell
31bb86b002 GH-404: Synchronize shared consumer
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/404

The fix for issue https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/231
added a shared consumer but the consumer is not thread safe. Add synchronization.

Also, a timeout was added to the `KafkaBinderHealthIndicator` but not to the
`KafkaBinderMetrics` which has a similar shared consumer; add a timeout there.
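As a sketch of the pattern (not the actual `KafkaBinderMetrics` code; names are assumptions), all access to the shared consumer is funneled through one lock, and the offset lookup carries a timeout:

[source,java]
----
import java.time.Duration;
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class SharedConsumerHolder {

	private final Object consumerLock = new Object();

	private Consumer<byte[], byte[]> sharedConsumer; // created lazily elsewhere

	Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions) {
		// KafkaConsumer is not thread-safe: serialize every call on one lock
		// and bound the broker round trip with a timeout.
		synchronized (this.consumerLock) {
			return this.sharedConsumer.endOffsets(partitions, Duration.ofSeconds(5));
		}
	}
}
----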
2018-07-11 13:30:24 -04:00
Soby Chacko
cc2dfd1d08 Upgrade kafka client to 1.1.0 (#405)
* Upgrade kafka client to 1.1.0

Upgrade kafka client to 1.1.0 for both kafka and kafka-streams binders

Resolves #370

* Address review comments
2018-07-09 17:37:58 -04:00
jmaxwell
fc768ba695 GH-402 Add additional data to DLQ message headers 2018-07-02 11:34:46 -04:00
Soby Chacko
38d6deb4d5 Provide programmatic access to KafkaStreams object
Providing access to the underlying StreamsBuilderFactoryBean by making the bean name
deterministic. Earlier, the binder was using a UUID to make the stream builder factory
bean names unique in the event of multiple StreamListeners. Switching to use the
method name instead keeps the StreamsBuilderFactoryBean names unique while providing
a deterministic way of giving programmatic access to them.

Polishing docs

Fixes #396
2018-06-29 16:45:55 -04:00
Soby Chacko
2a0b9015de Next update version: 2.1.0.BUILD-SNAPSHOT 2018-06-27 14:11:40 -04:00
59 changed files with 2210 additions and 541 deletions

pom.xml

@@ -2,20 +2,20 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M1</version>
<version>2.1.0.M3</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>2.0.2.RELEASE</version>
<version>2.1.0.M2</version>
<relativePath />
</parent>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.1.5.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.0.3.RELEASE</spring-integration-kafka.version>
<kafka.version>1.0.1</kafka.version>
<spring-cloud-stream.version>2.1.0.M1</spring-cloud-stream.version>
<spring-kafka.version>2.2.0.M3</spring-kafka.version>
<spring-integration-kafka.version>3.1.0.M1</spring-integration-kafka.version>
<kafka.version>2.0.0</kafka.version>
<spring-cloud-stream.version>2.1.0.M3</spring-cloud-stream.version>
</properties>
<modules>
<module>spring-cloud-stream-binder-kafka</module>
@@ -47,6 +47,13 @@
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
@@ -101,6 +108,27 @@
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
<scope>test</scope>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>jline</groupId>
<artifactId>jline</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</dependencyManagement>

spring-cloud-starter-stream-kafka/pom.xml

@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M1</version>
<version>2.1.0.M3</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>

spring-cloud-stream-binder-kafka-core/pom.xml

@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M1</version>
<version>2.1.0.M3</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>

KafkaBinderConfigurationProperties.java

@@ -21,12 +21,22 @@ import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.beans.BeansException;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;
import org.springframework.cloud.stream.config.MergableProperties;
import org.springframework.expression.Expression;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
@@ -125,6 +135,10 @@ public class KafkaBinderConfigurationProperties {
this.kafkaProperties = kafkaProperties;
}
public KafkaProperties getKafkaProperties() {
return kafkaProperties;
}
public Transaction getTransaction() {
return this.transaction;
}
@@ -515,21 +529,7 @@ public class KafkaBinderConfigurationProperties {
consumerConfiguration.putAll(this.consumerProperties);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
if (ObjectUtils.isEmpty(consumerConfiguration.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
consumerConfiguration.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
}
else {
Object boostrapServersConfig = consumerConfiguration.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
if (boostrapServersConfig instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) consumerConfiguration
.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.size() == 1 && bootStrapServers.get(0).equals("localhost:9092")) {
consumerConfiguration.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
}
}
}
return Collections.unmodifiableMap(consumerConfiguration);
return getConfigurationWithBootstrapServer(consumerConfiguration, ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
/**
@@ -550,21 +550,25 @@ public class KafkaBinderConfigurationProperties {
producerConfiguration.putAll(this.producerProperties);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
if (ObjectUtils.isEmpty(producerConfiguration.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
producerConfiguration.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
return getConfigurationWithBootstrapServer(producerConfiguration, ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
private Map<String, Object> getConfigurationWithBootstrapServer(Map<String, Object> configuration, String bootstrapServersConfig) {
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig))) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
else {
Object boostrapServersConfig = producerConfiguration.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
Object boostrapServersConfig = configuration.get(bootstrapServersConfig);
if (boostrapServersConfig instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) producerConfiguration
.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
List<String> bootStrapServers = (List<String>) configuration
.get(bootstrapServersConfig);
if (bootStrapServers.size() == 1 && bootStrapServers.get(0).equals("localhost:9092")) {
producerConfiguration.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
}
}
return Collections.unmodifiableMap(producerConfiguration);
return Collections.unmodifiableMap(configuration);
}
public JaasLoginModuleConfiguration getJaas() {
@@ -585,7 +589,7 @@ public class KafkaBinderConfigurationProperties {
public static class Transaction {
private final KafkaProducerProperties producer = new KafkaProducerProperties();
private final CombinedProducerProperties producer = new CombinedProducerProperties();
private String transactionIdPrefix;
@@ -597,10 +601,183 @@ public class KafkaBinderConfigurationProperties {
this.transactionIdPrefix = transactionIdPrefix;
}
public KafkaProducerProperties getProducer() {
public CombinedProducerProperties getProducer() {
return this.producer;
}
}
/**
A combination of {@link ProducerProperties} and {@link KafkaProducerProperties}
* so that common and kafka-specific properties can be set for the transactional
* producer.
* @since 2.1
*/
public static class CombinedProducerProperties {
private final ProducerProperties producerProperties = new ProducerProperties();
private final KafkaProducerProperties kafkaProducerProperties = new KafkaProducerProperties();
public void merge(MergableProperties mergable) {
this.producerProperties.merge(mergable);
}
public Expression getPartitionKeyExpression() {
return this.producerProperties.getPartitionKeyExpression();
}
public void setPartitionKeyExpression(Expression partitionKeyExpression) {
this.producerProperties.setPartitionKeyExpression(partitionKeyExpression);
}
public boolean isPartitioned() {
return this.producerProperties.isPartitioned();
}
public void copyProperties(Object source, Object target) throws BeansException {
this.producerProperties.copyProperties(source, target);
}
public Expression getPartitionSelectorExpression() {
return this.producerProperties.getPartitionSelectorExpression();
}
public void setPartitionSelectorExpression(Expression partitionSelectorExpression) {
this.producerProperties.setPartitionSelectorExpression(partitionSelectorExpression);
}
public @Min(value = 1, message = "Partition count should be greater than zero.") int getPartitionCount() {
return this.producerProperties.getPartitionCount();
}
public void setPartitionCount(int partitionCount) {
this.producerProperties.setPartitionCount(partitionCount);
}
public String[] getRequiredGroups() {
return this.producerProperties.getRequiredGroups();
}
public void setRequiredGroups(String... requiredGroups) {
this.producerProperties.setRequiredGroups(requiredGroups);
}
public @AssertTrue(message = "Partition key expression and partition key extractor class properties are mutually exclusive.") boolean isValidPartitionKeyProperty() {
return this.producerProperties.isValidPartitionKeyProperty();
}
public @AssertTrue(message = "Partition selector class and partition selector expression properties are mutually exclusive.") boolean isValidPartitionSelectorProperty() {
return this.producerProperties.isValidPartitionSelectorProperty();
}
public HeaderMode getHeaderMode() {
return this.producerProperties.getHeaderMode();
}
public void setHeaderMode(HeaderMode headerMode) {
this.producerProperties.setHeaderMode(headerMode);
}
public boolean isUseNativeEncoding() {
return this.producerProperties.isUseNativeEncoding();
}
public void setUseNativeEncoding(boolean useNativeEncoding) {
this.producerProperties.setUseNativeEncoding(useNativeEncoding);
}
public boolean isErrorChannelEnabled() {
return this.producerProperties.isErrorChannelEnabled();
}
public void setErrorChannelEnabled(boolean errorChannelEnabled) {
this.producerProperties.setErrorChannelEnabled(errorChannelEnabled);
}
public String getPartitionKeyExtractorName() {
return this.producerProperties.getPartitionKeyExtractorName();
}
public void setPartitionKeyExtractorName(String partitionKeyExtractorName) {
this.producerProperties.setPartitionKeyExtractorName(partitionKeyExtractorName);
}
public String getPartitionSelectorName() {
return this.producerProperties.getPartitionSelectorName();
}
public void setPartitionSelectorName(String partitionSelectorName) {
this.producerProperties.setPartitionSelectorName(partitionSelectorName);
}
public int getBufferSize() {
return this.kafkaProducerProperties.getBufferSize();
}
public void setBufferSize(int bufferSize) {
this.kafkaProducerProperties.setBufferSize(bufferSize);
}
public @NotNull CompressionType getCompressionType() {
return this.kafkaProducerProperties.getCompressionType();
}
public void setCompressionType(CompressionType compressionType) {
this.kafkaProducerProperties.setCompressionType(compressionType);
}
public boolean isSync() {
return this.kafkaProducerProperties.isSync();
}
public void setSync(boolean sync) {
this.kafkaProducerProperties.setSync(sync);
}
public int getBatchTimeout() {
return this.kafkaProducerProperties.getBatchTimeout();
}
public void setBatchTimeout(int batchTimeout) {
this.kafkaProducerProperties.setBatchTimeout(batchTimeout);
}
public Expression getMessageKeyExpression() {
return this.kafkaProducerProperties.getMessageKeyExpression();
}
public void setMessageKeyExpression(Expression messageKeyExpression) {
this.kafkaProducerProperties.setMessageKeyExpression(messageKeyExpression);
}
public String[] getHeaderPatterns() {
return this.kafkaProducerProperties.getHeaderPatterns();
}
public void setHeaderPatterns(String[] headerPatterns) {
this.kafkaProducerProperties.setHeaderPatterns(headerPatterns);
}
public Map<String, String> getConfiguration() {
return this.kafkaProducerProperties.getConfiguration();
}
public void setConfiguration(Map<String, String> configuration) {
this.kafkaProducerProperties.setConfiguration(configuration);
}
public KafkaAdminProperties getAdmin() {
return this.kafkaProducerProperties.getAdmin();
}
public void setAdmin(KafkaAdminProperties admin) {
this.kafkaProducerProperties.setAdmin(admin);
}
public KafkaProducerProperties getExtension() {
return this.kafkaProducerProperties;
}
}
}

KafkaBindingProperties.java

@@ -1,5 +1,5 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,10 +16,13 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* @author Marius Bogoevici
* @author Oleg Zhurakousky
*/
public class KafkaBindingProperties {
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider {
private KafkaConsumerProperties consumer = new KafkaConsumerProperties();

KafkaConsumerProperties.java

@@ -19,6 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.cloud.stream.config.MergableProperties;
/**
* @author Marius Bogoevici
* @author Ilayaperumal Gopinathan
@@ -29,7 +31,7 @@ import java.util.Map;
* Thanks to Laszlo Szabo for providing the initial patch for generic property support.
* </p>
*/
public class KafkaConsumerProperties {
public class KafkaConsumerProperties implements MergableProperties {
public enum StartOffset {
earliest(-2L),

KafkaExtendedBindingProperties.java

@@ -20,16 +20,20 @@ import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.ExtendedBindingProperties;
/**
* @author Marius Bogoevici
* @author Gary Russell
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka")
public class KafkaExtendedBindingProperties
implements ExtendedBindingProperties<KafkaConsumerProperties, KafkaProducerProperties> {
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.default";
private Map<String, KafkaBindingProperties> bindings = new HashMap<>();
public Map<String, KafkaBindingProperties> getBindings() {
@@ -82,4 +86,14 @@ public class KafkaExtendedBindingProperties
}
}
@Override
public String getDefaultsPrefix() {
return DEFAULTS_PREFIX;
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaBindingProperties.class;
}
}

KafkaProducerProperties.java

@@ -21,6 +21,7 @@ import java.util.Map;
import javax.validation.constraints.NotNull;
import org.springframework.cloud.stream.config.MergableProperties;
import org.springframework.expression.Expression;
/**
@@ -28,7 +29,7 @@ import org.springframework.expression.Expression;
* @author Henryk Konsek
* @author Gary Russell
*/
public class KafkaProducerProperties {
public class KafkaProducerProperties implements MergableProperties {
private int bufferSize = 16384;

spring-cloud-stream-binder-kafka-docs/pom.xml

@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M1</version>
<version>2.1.0.M3</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-docs</artifactId>

Kafka Streams binder docs (AsciiDoc)

@@ -1,6 +1,6 @@
== Usage
For using the Kafka Streams binder, you just need to add it to your Spring Cloud Stream application, using the following
Maven coordinates:
[source,xml]
@@ -13,26 +13,28 @@ Maven coordinates:
== Kafka Streams Binder Overview
Spring Cloud Stream's Apache Kafka support also includes a binder implementation designed explicitly for Apache Kafka
Streams binding. With this native integration, a Spring Cloud Stream "processor" application can directly use the
https://kafka.apache.org/documentation/streams/developer-guide[Apache Kafka Streams] APIs in the core business logic.
Kafka Streams binder implementation builds on the foundation provided by the http://docs.spring.io/spring-kafka/reference/html/_reference.html#kafka-streams[Kafka Streams in Spring Kafka]
project.
As part of this native integration, the high-level https://docs.confluent.io/current/streams/developer-guide/dsl-api.html[Streams DSL]
provided by the Kafka Streams API is available for use in the business logic, too.
Kafka Streams binder provides binding capabilities for the three major types in Kafka Streams - KStream, KTable and GlobalKTable.
An early version of the https://docs.confluent.io/current/streams/developer-guide/processor-api.html[Processor API]
As part of this native integration, the high-level https://docs.confluent.io/current/streams/developer-guide/dsl-api.html[Streams DSL]
provided by the Kafka Streams API is available for use in the business logic.
An early version of the https://docs.confluent.io/current/streams/developer-guide/processor-api.html[Processor API]
support is available as well.
As noted early-on, Kafka Streams support in Spring Cloud Stream strictly only available for use in the Processor model.
A model in which the messages read from an inbound topic, business processing can be applied, and the transformed messages
As noted early-on, Kafka Streams support in Spring Cloud Stream is strictly only available for use in the Processor model.
A model in which the messages read from an inbound topic, business processing can be applied, and the transformed messages
can be written to an outbound topic. It can also be used in Processor applications with a no-outbound destination.
=== Streams DSL
This application consumes data from a Kafka topic (e.g., `words`), computes word count for each unique word in a 5 seconds
time window, and the computed results are sent to a downstream topic (e.g., `counts`) for further processing.
[source]
@@ -65,12 +67,12 @@ Once built as a uber-jar (e.g., `wordcount-processor.jar`), you can run the abov
java -jar wordcount-processor.jar --spring.cloud.stream.bindings.input.destination=words --spring.cloud.stream.bindings.output.destination=counts
----
This application will consume messages from the Kafka topic `words` and the computed results are published to an output
topic `counts`.
Spring Cloud Stream will ensure that the messages from both the incoming and outgoing topics are automatically bound as
KStream objects. As a developer, you can exclusively focus on the business aspects of the code, i.e. writing the logic
required in the processor. Setting up the Streams DSL specific configuration required by the Kafka Streams infrastructure
is automatically handled by the framework.
== Configuration Options
@@ -81,7 +83,7 @@ For common configuration options and properties pertaining to binder, refer to t
=== Kafka Streams Properties
The following properties are available at the binder level and must be prefixed with `spring.cloud.stream.kafka.streams.binder.`
literal.
configuration::
@@ -96,7 +98,7 @@ spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.a
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
----
For more information about all the properties that may go into streams configuration, see StreamsConfig JavaDocs in
Apache Kafka Streams docs.
brokers::
@@ -113,14 +115,13 @@ serdeError::
+
Default: `logAndFail`
applicationId::
Application ID for all the stream configurations in the current application context.
You can override the application id for an individual `StreamListener` method using the `group` property on the binding.
You have to ensure that you are using the same group name for all input bindings in the case of multiple inputs on the same methods.
Convenient way to set the application.id for the Kafka Streams application globally at the binder level.
If the application contains multiple `StreamListener` methods, then application.id should be set at the binding level per input binding.
+
Default: `default`
Default: `none`
The following properties are _only_ available for Kafka Streams producers and must be prefixed with `spring.cloud.stream.kafka.streams.bindings.<binding name>.producer.` literal.
For convenience, if there are multiple output bindings and they all require a common value, that can be configured by using the prefix `spring.cloud.stream.kafka.streams.default.producer.`.
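For example (a hedged sketch; the chosen property is an assumption):

[source]
----
# applied to every Kafka Streams output binding unless overridden per binding
spring.cloud.stream.kafka.streams.default.producer.useNativeEncoding=true
----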
keySerde::
key serde to use
@@ -135,9 +136,13 @@ useNativeEncoding::
+
Default: `false`.
The following properties are _only_ available for Kafka Streams consumers and must be prefixed with `spring.cloud.stream.kafka.streams.bindings.<binding name>.consumer.` literal.
For convenience, if there are multiple input bindings and they all require a common value, that can be configured by using the prefix `spring.cloud.stream.kafka.streams.default.consumer.`.
applicationId::
Setting application.id per input binding.
+
Default: `none`
keySerde::
key serde to use
+
@@ -176,8 +181,8 @@ Default: `none`.
== Multiple Input Bindings
For use cases that require multiple incoming KStream objects or a combination of KStream and KTable objects, the Kafka
Streams binder provides multiple bindings support.
Let's see it in action.
@@ -206,11 +211,11 @@ interface KStreamKTableBinding {
----
In the above example, the application is written as a sink, i.e. there are no output bindings and the application has to
decide concerning downstream processing. When you write applications in this style, you might want to send the information
downstream or store them in a state store (See below for Queryable State Stores).
In the case of incoming KTable, if you want to materialize the computations to a state store, you have to express it
through the following property.
[source]
@@ -218,6 +223,12 @@ through the following property.
spring.cloud.stream.kafka.streams.bindings.inputTable.consumer.materializedAs: all-songs
----
The above example shows the use of KTable as an input binding.
The binder also supports input bindings for GlobalKTable.
GlobalKTable binding is useful when you have to ensure that all instances of your application have access to the data updates from the topic.
KTable and GlobalKTable bindings are only available on the input.
The binder supports both input and output bindings for KStream.
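A hedged sketch of a binding interface with a GlobalKTable input (the binding names are assumptions):

[source,java]
----
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.annotation.Input;

interface KStreamGlobalKTableBinding {

	@Input("inputStream")
	KStream<?, ?> inputStream();

	// GlobalKTable can only be an input; every application instance sees the full topic
	@Input("inputTable")
	GlobalKTable<?, ?> inputTable();
}
----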
=== Multiple Input Bindings as a Processor
[source]
@@ -244,13 +255,13 @@ interface KStreamKTableBinding extends KafkaStreamsProcessor {
== Multiple Output Bindings (aka Branching)
Kafka Streams allow outbound data to be split into multiple topics based on some predicates. The Kafka Streams binder provides
support for this feature without compromising the programming model exposed through `StreamListener` in the end user application.
You can write the application in the usual way as demonstrated above in the word count example. However, when using the
branching feature, you are required to do a few things. First, you need to make sure that your return type is `KStream[]`
instead of a regular `KStream`. Second, you need to use the `SendTo` annotation containing the output bindings in the order
(see example below). For each of these output bindings, you need to configure destination, content-type etc., complying with
the standard Spring Cloud Stream expectations.
Here is an example:
@@ -330,21 +341,21 @@ spring.cloud.stream.bindings.input:
== Message Conversion
Similar to message-channel based binder applications, the Kafka Streams binder adapts to the out-of-the-box content-type
conversions without any compromise.
It is typical for Kafka Streams operations to know the type of SerDes used to transform the key and value correctly.
Therefore, it may be more natural to rely on the SerDe facilities provided by the Apache Kafka Streams library itself at
the inbound and outbound conversions rather than using the content-type conversions offered by the framework.
On the other hand, you might already be familiar with the content-type conversion patterns provided by the framework, and
you'd like to continue using them for inbound and outbound conversions.
Both the options are supported in the Kafka Streams binder implementation.
==== Outbound serialization
If native encoding is disabled (which is the default), then the framework will convert the message using the contentType
set by the user (otherwise, the default `application/json` will be applied). It will ignore any SerDe set on the outbound
in this case for outbound serialization.
Here is the property to set the contentType on the outbound.
@@ -361,7 +372,7 @@ Here is the property to enable native encoding.
spring.cloud.stream.bindings.output.nativeEncoding: true
----
If native encoding is enabled on the output binding (user has to enable it as above explicitly), then the framework will
skip any form of automatic message conversion on the outbound. In that case, it will switch to the Serde set by the user.
The `valueSerde` property set on the actual output binding will be used. Here is an example.
@@ -372,7 +383,7 @@ spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde: org.apach
If this property is not set, then it will use the "default" SerDe: `spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde`.
It is worth mentioning that Kafka Streams binder does not serialize the keys on outbound - it simply relies on Kafka itself.
Therefore, you either have to specify the `keySerde` property on the binding or it will default to the application-wide common
`keySerde`.
Binding level key serde:
@@ -418,9 +429,9 @@ spring.cloud.stream.kafka.streams.bindings.output2.producer.valueSerde=StringSer
spring.cloud.stream.kafka.streams.bindings.output3.producer.valueSerde=JsonSerde
----
Then if you have `SendTo` like this, @SendTo({"output1", "output2", "output3"}), the `KStream[]` from the branches are
applied with proper SerDe objects as defined above. If you are not enabling `nativeEncoding`, you can then set different
contentType values on the output bindings as below. In that case, the framework will use the appropriate message converter
to convert the messages before sending to Kafka.
[source]
@@ -434,8 +445,8 @@ spring.cloud.stream.bindings.output3.contentType: application/octet-stream
Similar rules apply to data deserialization on the inbound.
If native decoding is disabled (which is the default), then the framework will convert the message using the contentType
set by the user (otherwise, the default `application/json` will be applied). It will ignore any SerDe set on the inbound
in this case for inbound deserialization.
Here is the property to set the contentType on the inbound.
@@ -452,8 +463,8 @@ Here is the property to enable native decoding.
spring.cloud.stream.bindings.input.nativeDecoding: true
----
If native decoding is enabled on the input binding (user has to enable it as above explicitly), then the framework will
skip doing any message conversion on the inbound. In that case, it will switch to the SerDe set by the user. The `valueSerde`
property set on the actual output binding will be used. Here is an example.
[source]
@@ -464,7 +475,7 @@ spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde: org.apache
If this property is not set, it will use the default SerDe: `spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde`.
It is worth mentioning that Kafka Streams binder does not deserialize the keys on inbound - it simply relies on Kafka itself.
Therefore, you either have to specify the `keySerde` property on the binding or it will default to the application-wide common
`keySerde`.
Binding level key serde:
@@ -481,8 +492,8 @@ Common Key serde:
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde
----
As in the case of KStream branching on the outbound, the benefit of setting value SerDe per binding is that if you have
multiple input bindings (multiple KStream objects) and they all require separate value SerDe's, then you can configure
them individually. If you use the common configuration approach, then this feature won't be applicable.
== Error Handling
@@ -490,7 +501,7 @@ them individually. If you use the common configuration approach, then this featu
Apache Kafka Streams provide the capability for natively handling exceptions from deserialization errors.
For details on this support, please see https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers[this]
Out of the box, Apache Kafka Streams provide two kinds of deserialization exception handlers - `logAndContinue` and `logAndFail`.
As the name indicates, the former will log the error and continue processing the next records and the latter will log the
error and fail. `LogAndFail` is the default deserialization exception handler.
=== Handling Deserialization Exceptions
@@ -502,7 +513,7 @@ Kafka Streams binder supports a selection of exception handlers through the foll
spring.cloud.stream.kafka.streams.binder.serdeError: logAndContinue
----
In addition to the above two deserialization exception handlers, the binder also provides a third one for sending the erroneous
records (poison pills) to a DLQ topic. Here is how you enable this DLQ exception handler.
[source]
@@ -516,27 +527,27 @@ When the above property is set, all the deserialization error records are automa
spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName: foo-dlq
----
If this is set, then the error records are sent to the topic `foo-dlq`. If this is not set, then it will create a DLQ
topic with the name `error.<input-topic-name>.<group-name>`.
A couple of things to keep in mind when using the exception handling feature in Kafka Streams binder.
* The property `spring.cloud.stream.kafka.streams.binder.serdeError` is applicable for the entire application. This implies
that if there are multiple `StreamListener` methods in the same application, this property is applied to all of them.
* The exception handling for deserialization works consistently with native deserialization and framework provided message
conversion.
=== Handling Non-Deserialization Exceptions
For general error handling in Kafka Streams binder, it is up to the end user applications to handle application level errors.
As a side effect of providing a DLQ for deserialization exception handlers, Kafka Streams binder provides a way to get
access to the DLQ sending bean directly from your application.
Once you get access to that bean, you can programmatically send any exception records from your application to the DLQ.
It continues to remain hard to do robust error handling using the high-level DSL; Kafka Streams doesn't natively support error
handling yet.
However, when you use the low-level Processor API in your application, there are options to control this behavior. See
below.
[source]
@@ -652,3 +663,23 @@ else {
//query from the remote host
}
----
== Accessing the underlying KafkaStreams object
The `StreamsBuilderFactoryBean` from spring-kafka, which is responsible for constructing the `KafkaStreams` object, can be accessed programmatically.
Each `StreamsBuilderFactoryBean` is registered as `stream-builder` appended with the `StreamListener` method name.
If your `StreamListener` method is named `process`, for example, the stream builder bean is named `stream-builder-process`.
Since this is a factory bean, it should be accessed by prepending an ampersand (`&`) when accessing it programmatically.
The following example assumes the `StreamListener` method is named `process`:
[source]
----
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
----
== State Cleanup
By default, the `KafkaStreams.cleanUp()` method is called when the binding is stopped.
See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
To modify this behavior, simply add a single `CleanupConfig` `@Bean` (configured to clean up on start, stop, or neither) to the application context; the bean will be detected and wired into the factory bean.
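A minimal sketch of such a bean (the on-start/on-stop choice here is an assumption):

[source,java]
----
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.CleanupConfig;

@Configuration
public class StateCleanupConfiguration {

	@Bean
	public CleanupConfig cleanupConfig() {
		// clean up local state on start but not on stop
		return new CleanupConfig(true, false);
	}
}
----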

Kafka binder docs (AsciiDoc)

@@ -488,6 +488,6 @@ You can consume these exceptions with your own Spring Integration flow.
Kafka binder module exposes the following metrics:
`spring.cloud.stream.binder.kafka.someGroup.someTopic.lag`: This metric indicates how many messages have not been yet consumed from a given binder's topic by a given consumer group.
For example, if the value of the metric `spring.cloud.stream.binder.kafka.myGroup.myTopic.lag` is `1000`, the consumer group named `myGroup` has `1000` messages waiting to be consumed from the topic called `myTopic`.
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not been yet consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer metrics library. The metric contains the consumer group information, topic and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.

pom.xml

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M1</version>
<version>2.1.0.M3</version>
</parent>
<dependencies>
@@ -45,11 +45,6 @@
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
</dependency>
<!-- Added back since Kafka still depends on it, but it has been removed by Boot due to EOL -->
<dependency>
<groupId>log4j</groupId>
@@ -62,5 +57,24 @@
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-autoconfigure-processor</artifactId>
<optional>true</optional>
</dependency>
<!-- Following dependencies are needed to support Kafka 1.1.0 client-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
</dependencies>
</project>

GlobalKTableBinder.java

@@ -0,0 +1,104 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.util.StringUtils;
/**
* An {@link AbstractBinder} implementation for {@link GlobalKTable}.
*
* Provides only consumer binding for the bound {@link GlobalKTable}.
* Output bindings are not allowed on this binder.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBinder extends
AbstractBinder<GlobalKTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<GlobalKTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
public GlobalKTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<GlobalKTable<Object, Object>> doBindConsumer(String name, String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, inputTarget,
getApplicationContext(),
kafkaTopicProvisioner,
kafkaStreamsBindingInformationCatalogue,
binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<GlobalKTable<Object, Object>> doBindProducer(String name, GlobalKTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException("No producer level binding is allowed for GlobalKTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
throw new UnsupportedOperationException("No producer binding is allowed and therefore no properties");
}
@Override
public String getDefaultsPrefix() {
return this.kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
}
}

GlobalKTableBinderConfiguration.java

@@ -0,0 +1,48 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* @author Soby Chacko
* @since 2.1.0
*/
@Configuration
@Import(KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar.class)
public class GlobalKTableBinderConfiguration {
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public GlobalKTableBinder GlobalKTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new GlobalKTableBinder(binderConfigurationProperties, kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue);
}
}

GlobalKTableBoundElementFactory.java

@@ -0,0 +1,93 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link GlobalKTable}
*
Only input bindings are created, as output bindings on GlobalKTable are not allowed.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBoundElementFactory extends AbstractBindingTargetFactory<GlobalKTable> {
private final BindingServiceProperties bindingServiceProperties;
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
super(GlobalKTable.class);
this.bindingServiceProperties = bindingServiceProperties;
}
@Override
public GlobalKTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler wrapper = new GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class, GlobalKTable.class);
proxyFactory.addAdvice(wrapper);
return (GlobalKTable) proxyFactory.getProxy();
}
@Override
public GlobalKTable createOutput(String name) {
throw new UnsupportedOperationException("Outbound operations are not allowed on target type GlobalKTable");
}
public interface GlobalKTableWrapper {
void wrap(GlobalKTable<Object, Object> delegate);
}
private static class GlobalKTableWrapperHandler implements GlobalKTableBoundElementFactory.GlobalKTableWrapper, MethodInterceptor {
private GlobalKTable<Object, Object> delegate;
public void wrap(GlobalKTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(GlobalKTable.class)) {
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only GlobalKTable method invocations are permitted");
}
}
}
}
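The factory above hands user code a proxy before the real GlobalKTable exists; the orchestrator later calls wrap(...) with the actual table, after which GlobalKTable method calls are reflectively delegated. A self-contained sketch of that wrap-then-delegate pattern, using a hypothetical Table interface in place of GlobalKTable:

import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.aop.framework.ProxyFactory;

// Table and TableWrapper are hypothetical stand-ins for GlobalKTable and its wrapper.
public class WrapperSketch {
    interface Table { String name(); }
    interface TableWrapper { void wrap(Table delegate); }

    public static void main(String[] args) {
        Table[] delegate = new Table[1];
        MethodInterceptor advice = invocation -> {
            if (invocation.getMethod().getDeclaringClass().equals(TableWrapper.class)) {
                delegate[0] = (Table) invocation.getArguments()[0]; // late-bound real target
                return null;
            }
            // forward Table calls to the delegate installed by wrap(...)
            return invocation.getMethod().invoke(delegate[0], invocation.getArguments());
        };
        ProxyFactory proxyFactory = new ProxyFactory(TableWrapper.class, Table.class);
        proxyFactory.addAdvice(advice);
        Object proxy = proxyFactory.getProxy();
        ((TableWrapper) proxy).wrap(() -> "customers"); // real table arrives after binding
        System.out.println(((Table) proxy).name());     // prints "customers" via the delegate
    }
}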

View File

@@ -19,18 +19,16 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
@@ -62,7 +60,7 @@ class KStreamBinder extends
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KeyValueSerdeResolver keyValueSerdeResolver;
@@ -74,7 +72,7 @@ class KStreamBinder extends
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.KafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.keyValueSerdeResolver = keyValueSerdeResolver;
}
@@ -83,42 +81,15 @@ class KStreamBinder extends
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
this.KafkaStreamsBindingInformationCatalogue.registerConsumerProperties(inputTarget, properties.getExtension());
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(inputTarget, properties.getExtension());
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
}
String[] inputTopics = StringUtils.commaDelimitedListToStringArray(name);
for (String inputTopic : inputTopics) {
this.kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group, extendedConsumerProperties);
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
StreamsConfig streamsConfig = this.KafkaStreamsBindingInformationCatalogue.getStreamsConfig(inputTarget);
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName()) ?
new KafkaStreamsDlqDispatch(extendedConsumerProperties.getExtension().getDlqName(), binderConfigurationProperties,
extendedConsumerProperties.getExtension()) : null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName, binderConfigurationProperties,
extendedConsumerProperties.getExtension());
}
SendToDlqAndContinue sendToDlqAndContinue = this.getApplicationContext().getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
DeserializationExceptionHandler deserializationExceptionHandler = streamsConfig.defaultDeserializationExceptionHandler();
if (deserializationExceptionHandler instanceof SendToDlqAndContinue) {
((SendToDlqAndContinue) deserializationExceptionHandler).addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
}
}
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, inputTarget,
getApplicationContext(),
kafkaTopicProvisioner,
kafkaStreamsBindingInformationCatalogue,
binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@@ -162,4 +133,14 @@ class KStreamBinder extends
public void setKafkaStreamsExtendedBindingProperties(KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
@Override
public String getDefaultsPrefix() {
return this.kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
}
}

View File

@@ -16,10 +16,11 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.MethodInvokingFactoryBean;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
@@ -27,6 +28,8 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.core.type.AnnotationMetadata;
/**
* @author Marius Bogoevici
@@ -34,18 +37,50 @@ import org.springframework.context.annotation.Configuration;
* @author Soby Chacko
*/
@Configuration
@Import({KafkaAutoConfiguration.class, KStreamBinderConfiguration.KStreamMissingBeansRegistrar.class})
public class KStreamBinderConfiguration {
private static final Log logger = LogFactory.getLog(KStreamBinderConfiguration.class);
static class KStreamMissingBeansRegistrar extends KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar {
@Autowired
private KafkaProperties kafkaProperties;
private static final String BEAN_NAME = "outerContext";
@Autowired
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
@Override
public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata,
BeanDefinitionRegistry registry) {
super.registerBeanDefinitions(importingClassMetadata, registry);
if (registry.containsBeanDefinition(BEAN_NAME)) {
AbstractBeanDefinition conversionDelegateBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsMessageConversionDelegate.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsMessageConversionDelegate.class.getSimpleName(), conversionDelegateBean);
AbstractBeanDefinition keyValueSerdeResolverBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KeyValueSerdeResolver.class)
.getBeanDefinition();
registry.registerBeanDefinition(KeyValueSerdeResolver.class.getSimpleName(), keyValueSerdeResolverBean);
AbstractBeanDefinition kafkaStreamsExtendedBindingPropertiesBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsExtendedBindingProperties.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsExtendedBindingProperties.class.getSimpleName(), kafkaStreamsExtendedBindingPropertiesBean);
}
}
}
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties) {
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@@ -54,7 +89,8 @@ public class KStreamBinderConfiguration {
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver) {
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties, kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue,
keyValueSerdeResolver);
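The registrar above bridges beans from the parent ("outerContext") application context into the binder's child context by registering MethodInvokingFactoryBean definitions that call getBean on the outer context. A minimal sketch of that trick in isolation, with SomeService as a hypothetical placeholder for any outer-context bean:

import org.springframework.beans.factory.config.MethodInvokingFactoryBean;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;

class OuterContextBridgeSketch {
    interface SomeService { }

    static void bridge(BeanDefinitionRegistry registry) {
        // Definition that resolves lazily to outerContext.getBean(SomeService.class),
        // making the parent context's bean visible inside the binder's child context.
        AbstractBeanDefinition bridged = BeanDefinitionBuilder
                .genericBeanDefinition(MethodInvokingFactoryBean.class)
                .addPropertyReference("targetObject", "outerContext")
                .addPropertyValue("targetMethod", "getBean")
                .addPropertyValue("arguments", SomeService.class)
                .getBeanDefinition();
        registry.registerBeanDefinition(SomeService.class.getSimpleName(), bridged);
    }
}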

View File

@@ -95,7 +95,7 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KStream.class)) {
Assert.notNull(delegate, "Trying to invoke " + methodInvocation
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
}

View File

@@ -16,17 +16,15 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
@@ -50,7 +48,7 @@ class KTableBinder extends
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
@@ -58,48 +56,21 @@ class KTableBinder extends
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.KafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KTable<Object, Object>> doBindConsumer(String name, String group, KTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
}
String[] inputTopics = StringUtils.commaDelimitedListToStringArray(name);
for (String inputTopic : inputTopics) {
this.kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group, extendedConsumerProperties);
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
StreamsConfig streamsConfig = this.KafkaStreamsBindingInformationCatalogue.getStreamsConfig(inputTarget);
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName()) ?
new KafkaStreamsDlqDispatch(extendedConsumerProperties.getExtension().getDlqName(), binderConfigurationProperties,
extendedConsumerProperties.getExtension()) : null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName, binderConfigurationProperties,
extendedConsumerProperties.getExtension());
}
SendToDlqAndContinue sendToDlqAndContinue = this.getApplicationContext().getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
DeserializationExceptionHandler deserializationExceptionHandler = streamsConfig.defaultDeserializationExceptionHandler();
if (deserializationExceptionHandler instanceof SendToDlqAndContinue) {
((SendToDlqAndContinue) deserializationExceptionHandler).addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
}
}
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, inputTarget,
getApplicationContext(),
kafkaTopicProvisioner,
kafkaStreamsBindingInformationCatalogue,
binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@@ -118,4 +89,14 @@ class KTableBinder extends
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties(channelName);
}
@Override
public String getDefaultsPrefix() {
return this.kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
}
}

View File

@@ -16,27 +16,40 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* @author Soby Chacko
*/
@SuppressWarnings("ALL")
@Configuration
@Import(KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar.class)
public class KTableBinderConfiguration {
@Autowired
private KafkaProperties kafkaProperties;
@Autowired
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
@Bean
@ConditionalOnBean(name = "outerContext")
public BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
ApplicationContext outerContext = (ApplicationContext) beanFactory.getBean("outerContext");
beanFactory.registerSingleton(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties) {
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}

View File

@@ -78,7 +78,7 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KTable.class)) {
Assert.notNull(delegate, "Trying to invoke " + methodInvocation
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
}
@@ -86,7 +86,7 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KStream method invocations are permitted");
throw new IllegalStateException("Only KTable method invocations are permitted");
}
}
}

View File

@@ -17,15 +17,18 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
@@ -34,10 +37,15 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.core.env.Environment;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* @author Marius Bogoevici
@@ -46,6 +54,7 @@ import org.springframework.util.ObjectUtils;
*/
@EnableConfigurationProperties(KafkaStreamsExtendedBindingProperties.class)
@ConditionalOnBean(BindingService.class)
@AutoConfigureAfter(BindingServiceConfiguration.class)
public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
@@ -54,30 +63,66 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
@Bean
public KafkaStreamsConfiguration kafkaStreamsConfiguration(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
Environment environment) {
KafkaProperties kafkaProperties = binderConfigurationProperties.getKafkaProperties();
Map<String, Object> streamsProperties = kafkaProperties.buildStreamsProperties();
if (kafkaProperties.getStreams().getApplicationId() == null) {
String applicationName = environment.getProperty("spring.application.name");
if (applicationName != null) {
streamsProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationName);
}
}
return new KafkaStreamsConfiguration(streamsProperties);
}
@Bean("streamConfigGlobalProperties")
public Map<String, Object> streamConfigGlobalProperties(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
props.put(StreamsConfig.APPLICATION_ID_CONFIG, binderConfigurationProperties.getApplicationId());
public Map<String, Object> streamConfigGlobalProperties(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
Properties properties = kafkaStreamsConfiguration.asProperties();
// If Spring Boot's bootstrap servers setting was left at its default,
// override it with the value configured on the binder.
if (ObjectUtils.isEmpty(properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG))) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
}
else {
Object bootstrapServerConfig = properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootstrapServerConfig instanceof String) {
@SuppressWarnings("unchecked")
String bootStrapServers = (String) properties
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
}
}
}
String binderProvidedApplicationId = binderConfigurationProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
properties.put(StreamsConfig.APPLICATION_ID_CONFIG, binderProvidedApplicationId);
}
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
} else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
} else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
SendToDlqAndContinue.class);
}
if (!ObjectUtils.isEmpty(binderConfigurationProperties.getConfiguration())) {
props.putAll(binderConfigurationProperties.getConfiguration());
properties.putAll(binderConfigurationProperties.getConfiguration());
}
return props;
return properties.entrySet().stream().collect(
Collectors.toMap(e -> String.valueOf(e.getKey()), Map.Entry::getValue));
}
@Bean
@@ -99,10 +144,12 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ObjectProvider<CleanupConfig> cleanupConfig) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(bindingServiceProperties,
kafkaStreamsExtendedBindingProperties, keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters, binderConfigurationProperties);
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters, binderConfigurationProperties,
cleanupConfig.getIfUnique());
}
@Bean
@@ -126,6 +173,11 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
return new KTableBoundElementFactory(bindingServiceProperties);
}
@Bean
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
return new GlobalKTableBoundElementFactory(bindingServiceProperties);
}
@Bean
public SendToDlqAndContinue sendToDlqAndContinue() {
return new SendToDlqAndContinue();
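The net effect of the configuration above is a layered application.id resolution: a binding-level consumer applicationId wins over the binder-level applicationId, which in turn overrides Boot's spring.application.name fallback. A sketch assuming the property paths follow the binder's conventions; the app class and all values are placeholders:

import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class ApplicationIdPrecedenceSketch {
    public static void main(String[] args) {
        // Highest precedence first when all three are present:
        new SpringApplicationBuilder(ApplicationIdPrecedenceSketch.class)
                .web(WebApplicationType.NONE)
                .run("--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=orders-input-app",
                        "--spring.cloud.stream.kafka.streams.binder.applicationId=orders-binder-app",
                        "--spring.application.name=orders-app"); // Boot fallback, lowest precedence
    }
}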

View File

@@ -0,0 +1,109 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.springframework.beans.factory.config.MethodInvokingFactoryBean;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.ImportBeanDefinitionRegistrar;
import org.springframework.core.type.AnnotationMetadata;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
*/
class KafkaStreamsBinderUtils {
static void prepareConsumerBinding(String name, String group, Object inputTarget,
ApplicationContext context,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
String[] inputTopics = StringUtils.commaDelimitedListToStringArray(name);
for (String inputTopic : inputTopics) {
kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group, extendedConsumerProperties);
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
StreamsConfig streamsConfig = kafkaStreamsBindingInformationCatalogue.getStreamsConfig(inputTarget);
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName()) ?
new KafkaStreamsDlqDispatch(extendedConsumerProperties.getExtension().getDlqName(), binderConfigurationProperties,
extendedConsumerProperties.getExtension()) : null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName, binderConfigurationProperties,
extendedConsumerProperties.getExtension());
}
SendToDlqAndContinue sendToDlqAndContinue = context.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
DeserializationExceptionHandler deserializationExceptionHandler = streamsConfig.defaultDeserializationExceptionHandler();
if (deserializationExceptionHandler instanceof SendToDlqAndContinue) {
((SendToDlqAndContinue) deserializationExceptionHandler).addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
}
}
}
}
static class KafkaStreamsMissingBeansRegistrar implements ImportBeanDefinitionRegistrar {
private static final String BEAN_NAME = "outerContext";
@Override
public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata,
BeanDefinitionRegistry registry) {
if (registry.containsBeanDefinition(BEAN_NAME)) {
AbstractBeanDefinition configBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsBinderConfigurationProperties.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), configBean);
AbstractBeanDefinition catalogueBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsBindingInformationCatalogue.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), catalogueBean);
}
}
}
}
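When serdeError is sendToDlq (or enableDlq is set on a binding) and no explicit dlqName is configured, the utility above derives a DLQ topic per input topic as error.<topic>.<group>. A trivial sketch of that naming convention with placeholder values:

public class DlqNameSketch {
    public static void main(String[] args) {
        String inputTopic = "orders";   // bound destination topic
        String group = "orders-app";    // consumer group / application id fallback
        String dlqName = "error." + inputTopic + "." + group;
        System.out.println(dlqName);    // error.orders.orders-app
    }
}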

View File

@@ -19,6 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
@@ -41,7 +43,9 @@ import org.springframework.util.StringUtils;
*
* @author Soby Chacko
*/
class KafkaStreamsMessageConversionDelegate {
public class KafkaStreamsMessageConversionDelegate {
private final static Log LOG = LogFactory.getLog(KafkaStreamsMessageConversionDelegate.class);
private static final ThreadLocal<KeyValue<Object, Object>> keyValueThreadLocal = new ThreadLocal<>();
@@ -105,32 +109,34 @@ class KafkaStreamsMessageConversionDelegate {
boolean isValidRecord = false;
try {
if (valueClass.isAssignableFrom(o2.getClass())) {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
}
else if (o2 instanceof Message) {
if (valueClass.isAssignableFrom(((Message) o2).getPayload().getClass())) {
keyValueThreadLocal.set(new KeyValue<>(o, ((Message) o2).getPayload()));
//if the record is a tombstone, ignore it and skip any further processing.
if (o2 != null) {
if (valueClass.isAssignableFrom(o2.getClass())) {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
} else if (o2 instanceof Message) {
if (valueClass.isAssignableFrom(((Message) o2).getPayload().getClass())) {
keyValueThreadLocal.set(new KeyValue<>(o, ((Message) o2).getPayload()));
} else {
convertAndSetMessage(o, valueClass, messageConverter, (Message) o2);
}
} else if (o2 instanceof String || o2 instanceof byte[]) {
Message<?> message = MessageBuilder.withPayload(o2).build();
convertAndSetMessage(o, valueClass, messageConverter, message);
} else {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
}
else {
convertAndSetMessage(o, valueClass, messageConverter, (Message) o2);
}
}
else if (o2 instanceof String || o2 instanceof byte[]) {
Message<?> message = MessageBuilder.withPayload(o2).build();
convertAndSetMessage(o, valueClass, messageConverter, message);
isValidRecord = true;
}
else {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
LOG.info("Received a tombstone record. This will be skipped from further processing.");
}
isValidRecord = true;
}
catch (Exception ignored) {
//pass through
}
return isValidRecord;
},
//sedond filter that catches any messages for which an exception thrown in the first filter above.
//second filter that catches any messages for which an exception thrown in the first filter above.
(k, v) -> true
);
//process errors from the second filter in the branch above.
@@ -164,28 +170,22 @@ class KafkaStreamsMessageConversionDelegate {
@Override
public void process(Object o, Object o2) {
if (kstreamBindingInformationCatalogue.isDlqEnabled(bindingTarget)) {
String destination = context.topic();
if (o2 instanceof Message) {
Message message = (Message) o2;
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) message.getPayload(), context.partition());
}
else {
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) o2, context.partition());
//Only continue if the record was not a tombstone.
if (o2 != null) {
if (kstreamBindingInformationCatalogue.isDlqEnabled(bindingTarget)) {
String destination = context.topic();
if (o2 instanceof Message) {
Message message = (Message) o2;
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) message.getPayload(), context.partition());
} else {
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) o2, context.partition());
}
} else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
throw new IllegalStateException("Inbound deserialization failed.");
} else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
//quietly pass through. No action needed, this is similar to log and continue.
}
}
else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
throw new IllegalStateException("Inbound deserialization failed.");
}
else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
//quietly pass through. No action needed, this is similar to log and continue.
}
}
@SuppressWarnings("deprecation")
@Override
public void punctuate(long timestamp) {
}
@Override

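With the null checks above, tombstone records bypass both message conversion and the DLQ processor instead of failing. Natively decoded streams can still observe null values, so downstream logic may want to guard for them explicitly; a minimal sketch with placeholder key/value types:

import org.apache.kafka.streams.kstream.KStream;

public class TombstoneSketch {
    static KStream<String, String> dropTombstones(KStream<String, String> stream) {
        // Tombstones carry a null value; filter them out before value-based logic.
        return stream.filter((key, value) -> value != null);
    }
}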
View File

@@ -21,32 +21,36 @@ import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.bind.PropertySourcesPlaceholdersResolver;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
@@ -59,13 +63,15 @@ import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.binding.StreamListenerSetupMethodOrchestrator;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.config.MergableProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.integration.support.utils.IntegrationUtils;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.StreamsBuilderFactoryBean;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;
@@ -81,11 +87,12 @@ import org.springframework.util.StringUtils;
* The orchestration primarily focuses on the following areas:
*
* 1. Allow multiple KStream output bindings (KStream branching) by allowing more than one output values on {@link SendTo}
* 2. Allow multiple inbound bindings for multiple KStream and or KTable types.
* 2. Allow multiple inbound bindings for multiple KStream and/or KTable/GlobalKTable types.
* 3. Each StreamListener method that it orchestrates gets its own {@link StreamsBuilderFactoryBean} and {@link StreamsConfig}
*
* @author Soby Chacko
* @author Lei Chen
* @author Gary Russell
*/
class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListenerSetupMethodOrchestrator, ApplicationContextAware {
@@ -107,15 +114,18 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final CleanupConfig cleanupConfig;
private ConfigurableApplicationContext applicationContext;
KafkaStreamsStreamListenerSetupMethodOrchestrator(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
@@ -123,6 +133,7 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = streamListenerResultAdapters;
this.binderConfigurationProperties = binderConfigurationProperties;
this.cleanupConfig = cleanupConfig;
}
@Override
@@ -145,7 +156,8 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
for (int i = 0; i < method.getParameterCount(); i++) {
MethodParameter methodParameter = MethodParameter.forExecutable(method, i);
Class<?> parameterType = methodParameter.getParameterType();
if (parameterType.equals(KStream.class) || parameterType.equals(KTable.class)) {
if (parameterType.equals(KStream.class) || parameterType.equals(KTable.class)
|| parameterType.equals(GlobalKTable.class)) {
supports = true;
}
}
@@ -231,7 +243,7 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
streamsConfig = buildStreamsBuilderAndRetrieveConfig(method, applicationContext, bindingProperties);
streamsConfig = buildStreamsBuilderAndRetrieveConfig(method, applicationContext, inboundName);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = methodStreamsBuilderFactoryBeanMap.get(method);
@@ -278,6 +290,22 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
}
arguments[parameterIndex] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = bindingServiceProperties.getBindingDestination(inboundName);
GlobalKTable<?, ?> table = materializedAs != null ?
materializedAsGlobalKTable(streamsBuilder, bindingDestination, materializedAs, keySerde, valueSerde) :
streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde));
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper = (GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with the real object (GlobalKTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
if (streamsConfig != null) {
kafkaStreamsBindingInformationCatalogue.addStreamsConfigs(globalKTableWrapper, streamsConfig);
}
arguments[parameterIndex] = table;
}
}
catch (Exception e) {
throw new IllegalStateException(e);
@@ -292,9 +320,18 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
private <K,V> KTable<K,V> materializedAs(StreamsBuilder streamsBuilder, String destination, String storeName, Serde<K> k, Serde<V> v) {
return streamsBuilder.table(bindingServiceProperties.getBindingDestination(destination),
Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k)
.withValueSerde(v));
getMaterialized(storeName, k, v));
}
private <K,V> GlobalKTable<K,V> materializedAsGlobalKTable(StreamsBuilder streamsBuilder, String destination, String storeName, Serde<K> k, Serde<V> v) {
return streamsBuilder.globalTable(bindingServiceProperties.getBindingDestination(destination),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k)
.withValueSerde(v);
}
private StoreBuilder buildStateStore(KafkaStreamsStateStoreProperties spec) {
@@ -332,7 +369,6 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
}
}
private KStream<?, ?> getkStream(String inboundName, KafkaStreamsStateStoreProperties storeSpec,
BindingProperties bindingProperties, StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde) {
@@ -358,11 +394,11 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
stream = stream.mapValues(value -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (!StringUtils.isEmpty(contentType) && !nativeDecoding) {
Message<?> message = MessageBuilder.withPayload(value)
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
returnValue = message;
} else {
}
else {
returnValue = value;
}
return returnValue;
@@ -371,33 +407,33 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType, BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class)) {
if (parameterType.isAssignableFrom(KTable.class) || parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
//No framework level message conversion provided for KTable, its done by the broker.
//No framework level message conversion is provided for KTable/GlobalKTable; it's done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
@SuppressWarnings({"unchecked"})
private StreamsConfig buildStreamsBuilderAndRetrieveConfig(Method method, ApplicationContext applicationContext,
BindingProperties bindingProperties) {
String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext.getBeanFactory();
StreamsBuilderFactoryBean streamsBuilder = new StreamsBuilderFactoryBean();
streamsBuilder.setAutoStartup(false);
String uuid = UUID.randomUUID().toString();
BeanDefinition streamsBuilderBeanDefinition =
BeanDefinitionBuilder.genericBeanDefinition((Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(), () -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("stream-builder-" + uuid, streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean("&stream-builder-" + uuid, StreamsBuilderFactoryBean.class);
String group = bindingProperties.getGroup();
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
}
Map<String, Object> streamConfigGlobalProperties = applicationContext.getBean("streamConfigGlobalProperties", Map.class);
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, group);
KafkaStreamsConsumerProperties extendedConsumerProperties = kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(inboundName);
//Need to apply the default extended properties here as it is not yet done by the BindingService in the binding lifecycle.
handleExtendedDefaultProperties(kafkaStreamsExtendedBindingProperties,
extendedConsumerProperties);
String applicationId = extendedConsumerProperties.getApplicationId();
//override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
//Custom StreamsConfig implementation that overrides to guarantee that the deserialization handler is cached.
StreamsConfig streamsConfig = new StreamsConfig(streamConfigGlobalProperties) {
@@ -418,16 +454,46 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator implements StreamListene
return super.getConfiguredInstance(key, clazz);
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(streamsConfig)
: new StreamsBuilderFactoryBean(streamsConfig, this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition =
BeanDefinitionBuilder.genericBeanDefinition((Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(), () -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("stream-builder-" + method.getName(), streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean("&stream-builder-" + method.getName(), StreamsBuilderFactoryBean.class);
BeanDefinition streamsConfigBeanDefinition =
BeanDefinitionBuilder.genericBeanDefinition((Class<StreamsConfig>) streamsConfig.getClass(), () -> streamsConfig)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("streamsConfig-" + uuid, streamsConfigBeanDefinition);
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("streamsConfig-" + method.getName(), streamsConfigBeanDefinition);
streamsBuilder.setStreamsConfig(streamsConfig);
methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderX);
return streamsConfig;
}
// This method is mostly copied from core. We should refactor the original method in core so that it is publicly available
// as a utility method. This is currently hidden as a private method in BindingService.
private void handleExtendedDefaultProperties(KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
MergableProperties extendedProperties) {
String defaultsPrefix = kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
if (defaultsPrefix != null) {
Class<? extends BinderSpecificPropertiesProvider> extendedPropertiesEntryClass = kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
if (BinderSpecificPropertiesProvider.class.isAssignableFrom(extendedPropertiesEntryClass)) {
org.springframework.boot.context.properties.bind.Binder extendedPropertiesResolverBinder =
new org.springframework.boot.context.properties.bind.Binder(ConfigurationPropertySources.get(applicationContext.getEnvironment()),
new PropertySourcesPlaceholdersResolver(applicationContext.getEnvironment()),
IntegrationUtils.getConversionService(this.applicationContext.getBeanFactory()), null);
BinderSpecificPropertiesProvider defaultProperties = BeanUtils.instantiateClass(extendedPropertiesEntryClass);
extendedPropertiesResolverBinder.bind(defaultsPrefix, Bindable.ofInstance(defaultProperties));
Object binderExtendedProperties = defaultProperties.getConsumer();
((MergableProperties)binderExtendedProperties).merge(extendedProperties);
}
}
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;

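The orchestrator can now wire GlobalKTable parameters alongside KStream and KTable ones, including materializedAs state stores. A sketch of a StreamListener it could serve, joining a stream against a GlobalKTable; the binding names, String types, and key mapping are hypothetical:

import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;

public class GlobalKTableJoinSketch {
    @StreamListener
    public void process(@Input("orders") KStream<String, String> orders,
            @Input("customers") GlobalKTable<String, String> customers) {
        orders.join(customers,
                (orderId, order) -> orderId,                  // map each record to the table key
                (order, customer) -> order + ":" + customer)  // combine the joined values
                .foreach((key, value) -> System.out.println(value));
    }
}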
View File

@@ -35,7 +35,7 @@ public class KafkaStreamsBinderConfigurationProperties extends KafkaBinderConfig
sendToDlq
}
private String applicationId = "default";
private String applicationId;
public String getApplicationId() {
return applicationId;

View File

@@ -16,10 +16,12 @@
package org.springframework.cloud.stream.binder.kafka.streams.properties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* @author Marius Bogoevici
*/
public class KafkaStreamsBindingProperties {
public class KafkaStreamsBindingProperties implements BinderSpecificPropertiesProvider {
private KafkaStreamsConsumerProperties consumer = new KafkaStreamsConsumerProperties();

View File

@@ -24,6 +24,8 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerPro
*/
public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
private String applicationId;
/**
* Key serde specified per binding.
*/
@@ -39,6 +41,14 @@ public class KafkaStreamsConsumerProperties extends KafkaConsumerProperties {
*/
private String materializedAs;
public String getApplicationId() {
return applicationId;
}
public void setApplicationId(String applicationId) {
this.applicationId = applicationId;
}
public String getKeySerde() {
return keySerde;
}

View File

@@ -20,6 +20,7 @@ import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.ExtendedBindingProperties;
/**
@@ -29,6 +30,8 @@ import org.springframework.cloud.stream.binder.ExtendedBindingProperties;
public class KafkaStreamsExtendedBindingProperties
implements ExtendedBindingProperties<KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.streams.default";
private Map<String, KafkaStreamsBindingProperties> bindings = new HashMap<>();
public Map<String, KafkaStreamsBindingProperties> getBindings() {
@@ -58,4 +61,14 @@ public class KafkaStreamsExtendedBindingProperties
return new KafkaStreamsProducerProperties();
}
}
@Override
public String getDefaultsPrefix() {
return DEFAULTS_PREFIX;
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaStreamsBindingProperties.class;
}
}
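The DEFAULTS_PREFIX above lets applications set extended consumer/producer properties once under spring.cloud.stream.kafka.streams.default.* instead of repeating them on every binding; binding-specific values still win. A sketch with placeholder serde values and a placeholder app class:

import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class ExtendedDefaultsSketch {
    public static void main(String[] args) {
        new SpringApplicationBuilder(ExtendedDefaultsSketch.class)
                .web(WebApplicationType.NONE)
                .run(
                        // applies to every Kafka Streams binding unless overridden
                        "--spring.cloud.stream.kafka.streams.default.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde",
                        // binding-specific value wins for the "input" binding
                        "--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$BytesSerde");
    }
}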

View File

@@ -2,5 +2,7 @@ kstream:\
org.springframework.cloud.stream.binder.kafka.streams.KStreamBinderConfiguration
ktable:\
org.springframework.cloud.stream.binder.kafka.streams.KTableBinderConfiguration
globalktable:\
org.springframework.cloud.stream.binder.kafka.streams.GlobalKTableBinderConfiguration
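
The new entry registers globalktable alongside kstream and ktable as a binding target type. A minimal sketch of declaring a GlobalKTable input, modeled on the integration test added later in this changeset (the binding names here are illustrative):

import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;

interface LookupBinding {
    // bound as a globalktable target, fully replicated to every application instance
    @Input("lookup")
    GlobalKTable<?, ?> lookup();
}

@EnableBinding(LookupBinding.class)
class LookupApplication {
}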

View File

@@ -0,0 +1,80 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.bootstrap;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.KafkaEmbedded;
/**
* @author Soby Chacko
*/
@Ignore("Temporarily disabling the test as builds are getting slower due to this.")
public class KafkaStreamsBinderBootstrapTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
@Test
public void testKafkaStreamsBinderWithCustomEnvironmentCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.bindings.input.binder=kBind1",
"--spring.cloud.stream.binders.kBind1.type=kstream",
"--spring.cloud.stream.binders.kBind1.environment.spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kBind1.environment.spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
applicationContext.close();
}
@Test
public void testKafkaStreamsBinderWithStandardConfigurationCanStart() {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
applicationContext.close();
}
@SpringBootApplication
@EnableBinding(StreamSourceProcessor.class)
static class SimpleApplication {
@StreamListener
public void handle(@Input("input") KStream<Object, String> stream) {
}
}
interface StreamSourceProcessor {
@Input("input")
KStream<?, ?> inputStream();
}
}
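
The first test exercises the named-binder form of configuration: the binding points at a binder declared under spring.cloud.stream.binders, whose type selects the kstream implementation and whose environment subtree keeps its broker settings isolated from any other binder in the same application. In property form (the broker address is illustrative):

spring.cloud.stream.bindings.input.binder=kBind1
spring.cloud.stream.binders.kBind1.type=kstream
spring.cloud.stream.binders.kBind1.environment.spring.cloud.stream.kafka.streams.binder.brokers=localhost:9092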

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Arrays;
import java.util.Map;
@@ -47,7 +47,8 @@ import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
@@ -68,11 +69,13 @@ import static org.mockito.Mockito.verify;
public abstract class DeserializationErrorHandlerByKafkaTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts", "error.words.group",
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts", "error.words.group",
"error.word1.groupx", "error.word2.groupx");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@SpyBean
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
private static Consumer<String, String> consumer;
@@ -99,6 +102,7 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq",
"spring.cloud.stream.bindings.input.group=group",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=" +
@@ -135,6 +139,7 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.bindings.input.destination=word1,word2",
"spring.cloud.stream.kafka.streams.default.consumer.applicationId=deser-kafka-dlq-multi-input",
"spring.cloud.stream.bindings.input.group=groupx",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=" +

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
@@ -42,7 +42,8 @@ import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
@@ -62,11 +63,13 @@ import static org.mockito.Mockito.verify;
public abstract class DeserializtionErrorHandlerByBinderTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts-id", "error.foos.foobar-group",
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id", "error.foos.foobar-group",
"error.foos1.fooz-group", "error.foos2.fooz-group");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@SpyBean
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
private static Consumer<Integer, String> consumer;
@@ -101,6 +104,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deserializationByBinderAndDlqTests",
"spring.cloud.stream.bindings.input.group=foobar-group"},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)
@@ -135,10 +139,9 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.bindings.output.producer.headerMode=raw",
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deserializationByBinderAndDlqTestsWithMultipleInputs",
"spring.cloud.stream.bindings.input.group=fooz-group"},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Arrays;
@@ -47,7 +47,8 @@ import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
@@ -67,7 +68,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderMultipleInputTopicsTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@@ -98,10 +101,10 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=WordCountProcessorApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
@@ -43,7 +43,8 @@ import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
@@ -56,7 +57,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<Integer, String> consumer;
@@ -86,9 +89,8 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -14,8 +14,9 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.time.Duration;
import java.util.Arrays;
import java.util.Date;
import java.util.Map;
@@ -23,12 +24,16 @@ import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
@@ -45,10 +50,15 @@ import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.test.util.TestUtils;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.core.StreamsBuilderFactoryBean;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
@@ -62,7 +72,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsBinderWordCountIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@@ -81,15 +93,16 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
}
@Test
public void testKstreamWordCountWithStringInputAndPojoOuput() throws Exception {
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -98,22 +111,75 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
receiveAndValidate(context);
} finally {
context.close();
}
}
@Test
public void testKstreamWordCountWithInputBindingLevelApplicationId() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
receiveAndValidate(context);
//Assertions on StreamsBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
ReadOnlyWindowStore<Object, Object> store = kafkaStreams.store("foo-WordCounts", QueryableStoreTypes.windowStore());
assertThat(store).isNotNull();
sendTombStoneRecordsAndVerifyGracefulHandling();
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean, "cleanupConfig",
CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isTrue();
assertThat(cleanup.cleanupOnStop()).isFalse();
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
finally {
pf.destroy();
}
}
private void sendTombStoneRecordsAndVerifyGracefulHandling() throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault(null);
ConsumerRecords<String, String> received = consumer.poll(Duration.ofMillis(5000));
//By asserting that no records were received, we ensure that the tombstone
//record was handled gracefully by the binder.
assertThat(received.isEmpty()).isTrue();
}
finally {
pf.destroy();
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@@ -128,11 +194,6 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
@SendTo("output")
public KStream<?, WordCount> process(@Input("input") KStream<Object, String> input) {
input.map((k,v) -> {
System.out.println(k);
System.out.println(v);
return new KeyValue<>(k,v);
});
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, value) -> new KeyValue<>(value, value))
@@ -143,6 +204,11 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));
}
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(true, false);
}
}
static class WordCount {
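
The CleanupConfig bean registered above is what the new assertions inspect: new CleanupConfig(true, false) wipes local state on startup and keeps it on shutdown. Another test in this changeset asserts the opposite pairing when no such bean is present, which suggests (false, true) is the default. A sketch of the bean for the opposite policy:

@Bean
public CleanupConfig cleanupConfig() {
    // keep local state across restarts; clean it up when the application stops
    return new CleanupConfig(false, true);
}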

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
@@ -25,6 +25,7 @@ import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
@@ -39,6 +40,7 @@ import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -46,7 +48,8 @@ import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
@@ -59,7 +62,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsInteractiveQueryIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@@ -88,8 +93,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=ProductCountApplication-abc",
"--spring.cloud.stream.kafka.streams.binder.configuration.application.server=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
@@ -137,7 +141,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
.filter((key, product) -> product.getId() == 123)
.map((key, value) -> new KeyValue<>(value.id, value))
.groupByKey(Serialized.with(new Serdes.IntegerSerde(), new JsonSerde<>(Product.class)))
.count("prod-id-count-store")
.count(Materialized.as("prod-id-count-store"))
.toStream()
.map((key, value) -> new KeyValue<>(null, "Count for product with ID 123: " + value));
}
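
The .count(...) change tracks the Kafka Streams 2.0 API, which replaced the overloads taking a bare store name with Materialized. Once materialized, the store can be looked up at runtime; a hedged sketch using this module's InteractiveQueryService (assuming it is injected and that getQueryableStore is its lookup method):

ReadOnlyKeyValueStore<Integer, Long> store = interactiveQueryService.getQueryableStore(
        "prod-id-count-store", QueryableStoreTypes.keyValueStore());
Long countForProduct123 = store.get(123); // null if the key has never been counted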

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Arrays;
import java.util.Map;
@@ -47,7 +47,8 @@ import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
@@ -68,10 +69,12 @@ import static org.mockito.Mockito.verify;
public abstract class KafkaStreamsNativeEncodingDecodingTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@SpyBean
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate;
private static Consumer<String, String> consumer;
@@ -97,7 +100,9 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true"},
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=NativeEncodingDecodingEnabledTests-abc"
},
webEnvironment= SpringBootTest.WebEnvironment.NONE
)
public static class NativeEncodingDecodingEnabledTests extends KafkaStreamsNativeEncodingDecodingTests {
@@ -117,7 +122,8 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
}
}
@SpringBootTest(webEnvironment= SpringBootTest.WebEnvironment.NONE)
@SpringBootTest(webEnvironment= SpringBootTest.WebEnvironment.NONE,
properties = "spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=NativeEncodingDecodingEnabledTests-xyz")
public static class NativeEncodingDecodingDisabledTests extends KafkaStreamsNativeEncodingDecodingTests {
@Test

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
@@ -37,7 +37,8 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
@@ -49,7 +50,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsStateStoreIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testKstreamStateStore() throws Exception {
@@ -61,7 +64,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=KafkaStreamsStateStoreIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {
@@ -116,11 +119,6 @@ public class KafkaStreamsStateStoreIntegrationTests {
processed = true;
}
@Override
public void punctuate(long l) {
}
@Override
public void close() {
if (state != null) {
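
The deleted punctuate(long) override reflects the Kafka 2.0 Processor API, which removed that callback; periodic work is now scheduled explicitly through the ProcessorContext. A sketch of the replacement, assuming the processor keeps the context it receives in init(...):

// inside init(ProcessorContext context):
context.schedule(1000L, PunctuationType.WALL_CLOCK_TIME, timestamp -> {
    // periodic work that previously lived in punctuate(long)
});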

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
@@ -38,11 +38,15 @@ import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.integration.test.util.TestUtils;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.StreamsBuilderFactoryBean;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
@@ -56,7 +60,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts-id");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts-id");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@@ -85,14 +91,21 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=ProductCountApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {
receiveAndValidateFoo(context);
} finally {
//Assertions on StreamsBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process",
StreamsBuilderFactoryBean.class);
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean, "cleanupConfig",
CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isFalse();
assertThat(cleanup.cleanupOnStop()).isTrue();
}
finally {
context.close();
}
}
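
As in the word-count test, the lookup uses Spring's factory-bean dereference: the '&' prefix asks the context for the StreamsBuilderFactoryBean itself rather than the object it produces, and the stream-builder-process name appears to be derived from the @StreamListener method name process (an observation from these tests, not a documented contract):

StreamsBuilderFactoryBean factoryBean =
        context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = factoryBean.getKafkaStreams();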

View File

@@ -0,0 +1,322 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class StreamToGlobalKTableJoinIntegrationTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "enriched-order");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<Long, EnrichedOrder> consumer;
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("inputX")
GlobalKTable<?, ?> inputX();
@Input("inputY")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {
@StreamListener
@SendTo("output")
public KStream<Long, EnrichedOrder> process(@Input("input") KStream<Long, Order> ordersStream,
@Input("inputX") GlobalKTable<Long, Customer> customers,
@Input("inputY") GlobalKTable<Long, Product> products) {
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(customers,
(orderId, order) -> order.getCustomerId(),
(order, customer) -> new CustomerOrder(customer, order));
return customerOrdersStream.join(products,
(orderId, customerOrder) -> customerOrder
.productId(),
(customerOrder, product) -> {
EnrichedOrder enrichedOrder = new EnrichedOrder();
enrichedOrder.setProduct(product);
enrichedOrder.setCustomer(customerOrder.customer);
enrichedOrder.setOrder(customerOrder.order);
return enrichedOrder;
});
}
}
@Test
public void testStreamToGlobalKTable() throws Exception {
SpringApplication app = new SpringApplication(StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.inputX.destination=customers",
"--spring.cloud.stream.bindings.inputY.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.inputX.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.inputY.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$OrderSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$CustomerSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputY.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputY.consumer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$ProductSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests$EnrichedOrderSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=StreamToGlobalKTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
CustomerSerde customerSerde = new CustomerSerde();
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, customerSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(senderPropsCustomer);
KafkaTemplate<Long, Customer> template = new KafkaTemplate<>(pfCustomer, true);
template.setDefaultTopic("customers");
for (long i = 0; i < 5; i++) {
final Customer customer = new Customer();
customer.setName("customer-" + i);
template.sendDefault(i, customer);
}
Map<String, Object> senderPropsProduct = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
ProductSerde productSerde = new ProductSerde();
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, productSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(senderPropsProduct);
KafkaTemplate<Long, Product> productTemplate = new KafkaTemplate<>(pfProduct, true);
productTemplate.setDefaultTopic("products");
for (long i = 0; i < 5; i++) {
final Product product = new Product();
product.setName("product-" + i);
productTemplate.sendDefault(i, product);
}
Map<String, Object> senderPropsOrder = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
OrderSerde orderSerde = new OrderSerde();
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, orderSerde.serializer().getClass());
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(senderPropsOrder);
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
orderTemplate.setDefaultTopic("orders");
for (long i = 0; i < 5; i++) {
final Order order = new Order();
order.setCustomerId(i);
order.setProductId(i);
orderTemplate.sendDefault(i, order);
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
EnrichedOrderSerde enrichedOrderSerde = new EnrichedOrderSerde();
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, enrichedOrderSerde.deserializer().getClass());
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "org.springframework.cloud.stream.binder.kafka.streams.integration.StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
DefaultKafkaConsumerFactory<Long, EnrichedOrder> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "enriched-order");
int count = 0;
long start = System.currentTimeMillis();
List<KeyValue<Long, EnrichedOrder>> enrichedOrders = new ArrayList<>();
do {
ConsumerRecords<Long, EnrichedOrder> records = KafkaTestUtils.getRecords(consumer);
count = count + records.count();
for (ConsumerRecord<Long, EnrichedOrder> record : records) {
enrichedOrders.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < 5 && (System.currentTimeMillis() - start) < 30000);
assertThat(count == 5).isTrue();
assertThat(enrichedOrders.size() == 5).isTrue();
enrichedOrders.sort(Comparator.comparing(o -> o.key));
for (int i = 0; i < 5; i++) {
KeyValue<Long, EnrichedOrder> enrichedOrderKeyValue = enrichedOrders.get(i);
assertThat(enrichedOrderKeyValue.key == i).isTrue();
EnrichedOrder enrichedOrder = enrichedOrderKeyValue.value;
assertThat(enrichedOrder.getOrder().customerId == i).isTrue();
assertThat(enrichedOrder.getOrder().productId == i).isTrue();
assertThat(enrichedOrder.getCustomer().name.equals("customer-" + i)).isTrue();
assertThat(enrichedOrder.getProduct().name.equals("product-" + i)).isTrue();
}
pfCustomer.destroy();
pfProduct.destroy();
pfOrder.destroy();
consumer.close();
}
}
static class Order {
long customerId;
long productId;
public long getCustomerId() {
return customerId;
}
public void setCustomerId(long customerId) {
this.customerId = customerId;
}
public long getProductId() {
return productId;
}
public void setProductId(long productId) {
this.productId = productId;
}
}
static class Customer {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class Product {
String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
static class EnrichedOrder {
Product product;
Customer customer;
Order order;
public Product getProduct() {
return product;
}
public void setProduct(Product product) {
this.product = product;
}
public Customer getCustomer() {
return customer;
}
public void setCustomer(Customer customer) {
this.customer = customer;
}
public Order getOrder() {
return order;
}
public void setOrder(Order order) {
this.order = order;
}
}
private static class CustomerOrder {
private final Customer customer;
private final Order order;
CustomerOrder(final Customer customer, final Order order) {
this.customer = customer;
this.order = order;
}
long productId() {
return order.getProductId();
}
}
public static class OrderSerde extends JsonSerde<Order> {}
public static class CustomerSerde extends JsonSerde<Customer> {}
public static class ProductSerde extends JsonSerde<Product> {}
public static class EnrichedOrderSerde extends JsonSerde<EnrichedOrder> {}
}
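
One detail worth noting in the processor above: joining a KStream against a GlobalKTable takes a KeyValueMapper that derives the table lookup key from each stream record, so the stream does not need to be co-partitioned with, or re-keyed on, the table's key. Reduced to its shape:

KStream<Long, CustomerOrder> joined = ordersStream.join(
        customers,                                  // GlobalKTable<Long, Customer>
        (orderId, order) -> order.getCustomerId(),  // stream record -> table lookup key
        (order, customer) -> new CustomerOrder(customer, order));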

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.ArrayList;
import java.util.Arrays;
@@ -54,7 +54,8 @@ import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
@@ -66,7 +67,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class StreamToTableJoinIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "output-topic");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "output-topic");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, Long> consumer;
@@ -113,11 +116,11 @@ public class StreamToTableJoinIntegrationTests {
}
@Test
public void testStreamToTable() throws Exception {
public void testStreamToTable() {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
ConfigurableApplicationContext context = app.run("--server.port=0",
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks",
"--spring.cloud.stream.bindings.inputX.destination=user-regions",
@@ -134,12 +137,9 @@ public class StreamToTableJoinIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.bindings.inputX.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
// Input 1: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L),
@@ -160,7 +160,7 @@ public class StreamToTableJoinIntegrationTests {
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks");
for (KeyValue<String,Long> keyValue : userClicks) {
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
@@ -183,7 +183,7 @@ public class StreamToTableJoinIntegrationTests {
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions");
for (KeyValue<String,String> keyValue : userRegions) {
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
@@ -203,17 +203,11 @@ public class StreamToTableJoinIntegrationTests {
for (ConsumerRecord<String, Long> record : records) {
actualClicksPerRegion.add(new KeyValue<>(record.key(), record.value()));
}
} while (count < expectedClicksPerRegion.size() && (System.currentTimeMillis() - start) < 30000 );
} while (count < expectedClicksPerRegion.size() && (System.currentTimeMillis() - start) < 30000);
assertThat(count == expectedClicksPerRegion.size()).isTrue();
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
}
catch (Exception e){
System.out.println(e);
}
finally {
context.close();
}
}
/**

View File

@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Arrays;
import java.util.Date;
@@ -47,7 +47,8 @@ import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
@@ -61,7 +62,9 @@ import static org.assertj.core.api.Assertions.assertThat;
public class WordCountMultipleBranchesIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts","foo","bar");
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true, "counts","foo","bar");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
private static Consumer<String, String> consumer;
@@ -139,10 +142,9 @@ public class WordCountMultipleBranchesIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=WordCountMultipleBranchesIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
try {

View File

@@ -4,7 +4,5 @@ spring.cloud.stream.bindings.output.contentType=application/json
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.bindings.output.producer.headerMode=raw
spring.cloud.stream.bindings.input.consumer.headerMode=raw
spring.cloud.stream.kafka.streams.timeWindow.length=5000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0
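
The removed headerMode=raw entries (here and throughout the tests in this changeset) predate native Kafka header support; in Spring Cloud Stream 2.x the per-binding header mode takes values such as none or headers, so raw is no longer needed. A hypothetical replacement if header embedding must still be disabled explicitly (an assumption about the framework's property model, not something this diff shows):

spring.cloud.stream.bindings.input.consumer.headerMode=none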

View File

@@ -1,5 +1,5 @@
eclipse.preferences.version=1
org.eclipse.jdt.ui.ignorelowercasenames=true
org.eclipse.jdt.ui.importorder=java;javax;com;org;org.springframework;ch.qos;\#;
org.eclipse.jdt.ui.importorder=java;javax;com;io.micrometer;org;org.springframework;ch.qos;\#;
org.eclipse.jdt.ui.ondemandthreshold=99
org.eclipse.jdt.ui.staticondemandthreshold=99

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M1</version>
<version>2.1.0.M3</version>
</parent>
<dependencies>
@@ -60,6 +60,26 @@
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Following dependencies are needed to support Kafka 1.1.0 client-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
</dependencies>
</project>

View File

@@ -1,5 +1,5 @@
/*
* Copyright 2016-2017 the original author or authors.
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -78,25 +78,31 @@ public class KafkaBinderHealthIndicator implements HealthIndicator {
public Health call() {
try {
if (metadataConsumer == null) {
metadataConsumer = consumerFactory.createConsumer();
}
Set<String> downMessages = new HashSet<>();
for (String topic : KafkaBinderHealthIndicator.this.binder.getTopicsInUse().keySet()) {
List<PartitionInfo> partitionInfos = metadataConsumer.partitionsFor(topic);
for (PartitionInfo partitionInfo : partitionInfos) {
if (KafkaBinderHealthIndicator.this.binder.getTopicsInUse().get(topic).getPartitionInfos()
.contains(partitionInfo) && partitionInfo.leader().id() == -1) {
downMessages.add(partitionInfo.toString());
synchronized(KafkaBinderHealthIndicator.this) {
if (metadataConsumer == null) {
metadataConsumer = consumerFactory.createConsumer();
}
}
}
if (downMessages.isEmpty()) {
return Health.up().build();
}
else {
return Health.down()
.withDetail("Following partitions in use have no leaders: ", downMessages.toString())
.build();
synchronized (metadataConsumer) {
Set<String> downMessages = new HashSet<>();
for (String topic : KafkaBinderHealthIndicator.this.binder.getTopicsInUse().keySet()) {
List<PartitionInfo> partitionInfos = metadataConsumer.partitionsFor(topic);
for (PartitionInfo partitionInfo : partitionInfos) {
if (KafkaBinderHealthIndicator.this.binder.getTopicsInUse().get(topic).getPartitionInfos()
.contains(partitionInfo) && partitionInfo.leader().id() == -1) {
downMessages.add(partitionInfo.toString());
}
}
}
if (downMessages.isEmpty()) {
return Health.up().build();
}
else {
return Health.down()
.withDetail("Following partitions in use have no leaders: ", downMessages.toString())
.build();
}
}
}
catch (Exception e) {
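
The reworked health check above lazily creates the shared metadata consumer with double-checked locking and then serializes every call on it, since KafkaConsumer is not thread-safe. A minimal sketch of the same pattern outside the binder, assuming a spring-kafka ConsumerFactory (the volatile modifier is an addition here for safe publication, not part of the diff):

import org.apache.kafka.clients.consumer.Consumer;
import org.springframework.kafka.core.ConsumerFactory;

public class LazyMetadataClient {

    private final ConsumerFactory<String, String> consumerFactory;

    private volatile Consumer<String, String> metadataConsumer;

    public LazyMetadataClient(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    public int partitionCount(String topic) {
        // Cheap unlocked check first; create under the lock only if still unset.
        if (this.metadataConsumer == null) {
            synchronized (this) {
                if (this.metadataConsumer == null) {
                    this.metadataConsumer = this.consumerFactory.createConsumer();
                }
            }
        }
        // KafkaConsumer is single-threaded by contract, so serialize access to it.
        synchronized (this.metadataConsumer) {
            return this.metadataConsumer.partitionsFor(topic).size();
        }
    }
}

Caching one consumer instead of creating one per health probe keeps repeated health calls cheap, which is also what the createsConsumerOnceWhenInvokedMultipleTimes test below asserts.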

View File

@@ -20,11 +20,17 @@ import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.TimeGauge;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.Consumer;
@@ -51,9 +57,12 @@ import org.springframework.util.ObjectUtils;
* @author Oleg Zhurakousky
* @author Jon Schneider
* @author Thomas Cheyney
* @author Gary Russell
*/
public class KafkaBinderMetrics implements MeterBinder, ApplicationListener<BindingCreatedEvent> {
private static final int DEFAULT_TIMEOUT = 60;
private final static Log LOG = LogFactory.getLog(KafkaBinderMetrics.class);
static final String METRIC_NAME = "spring.cloud.stream.binder.kafka.offset";
@@ -68,6 +77,8 @@ public class KafkaBinderMetrics implements MeterBinder, ApplicationListener<Bind
private Consumer<?, ?> metadataConsumer;
private int timeout = DEFAULT_TIMEOUT;
public KafkaBinderMetrics(KafkaMessageChannelBinder binder,
KafkaBinderConfigurationProperties binderConfigurationProperties,
ConsumerFactory<?, ?> defaultConsumerFactory, @Nullable MeterRegistry meterRegistry) {
@@ -84,6 +95,10 @@ public class KafkaBinderMetrics implements MeterBinder, ApplicationListener<Bind
this(binder, binderConfigurationProperties, null, null);
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
@Override
public void bindTo(MeterRegistry registry) {
for (Map.Entry<String, KafkaMessageChannelBinder.TopicInformation> topicInfo : this.binder.getTopicsInUse()
@@ -106,33 +121,56 @@ public class KafkaBinderMetrics implements MeterBinder, ApplicationListener<Bind
}
private double calculateConsumerLagOnTopic(String topic, String group) {
long lag = 0;
ExecutorService exec = Executors.newSingleThreadExecutor();
Future<Long> future = exec.submit(() -> {
long lag = 0;
try {
if (metadataConsumer == null) {
synchronized(KafkaBinderMetrics.this) {
if (metadataConsumer == null) {
metadataConsumer = createConsumerFactory(group).createConsumer();
}
}
}
synchronized (metadataConsumer) {
List<PartitionInfo> partitionInfos = metadataConsumer.partitionsFor(topic);
List<TopicPartition> topicPartitions = new LinkedList<>();
for (PartitionInfo partitionInfo : partitionInfos) {
topicPartitions.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
}
Map<TopicPartition, Long> endOffsets = metadataConsumer.endOffsets(topicPartitions);
for (Map.Entry<TopicPartition, Long> endOffset : endOffsets.entrySet()) {
OffsetAndMetadata current = metadataConsumer.committed(endOffset.getKey());
if (current != null) {
lag += endOffset.getValue() - current.offset();
}
else {
lag += endOffset.getValue();
}
}
}
}
catch (Exception e) {
LOG.debug("Cannot generate metric for topic: " + topic, e);
}
return lag;
});
try {
if (metadataConsumer == null) {
metadataConsumer = createConsumerFactory(group).createConsumer();
}
List<PartitionInfo> partitionInfos = metadataConsumer.partitionsFor(topic);
List<TopicPartition> topicPartitions = new LinkedList<>();
for (PartitionInfo partitionInfo : partitionInfos) {
topicPartitions.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
}
Map<TopicPartition, Long> endOffsets = metadataConsumer.endOffsets(topicPartitions);
for (Map.Entry<TopicPartition, Long> endOffset : endOffsets.entrySet()) {
OffsetAndMetadata current = metadataConsumer.committed(endOffset.getKey());
if (current != null) {
lag += endOffset.getValue() - current.offset();
}
else {
lag += endOffset.getValue();
}
}
return future.get(this.timeout, TimeUnit.SECONDS);
}
catch (Exception e) {
LOG.debug("Cannot generate metric for topic: " + topic, e);
catch (InterruptedException e) {
Thread.currentThread().interrupt();
return 0L;
}
catch (ExecutionException | TimeoutException e) {
return 0L;
}
finally {
exec.shutdownNow();
}
return lag;
}
private ConsumerFactory<?, ?> createConsumerFactory(String group) {
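
The metrics rework above does two things: lag is computed per partition as the end offset minus the committed offset (falling back to the full end offset when the group has never committed), and the whole consumer interaction is pushed onto a single-thread executor, so an unresponsive broker can only stall the gauge for the configured timeout before it reports 0. A compact sketch of that bounded-call pattern, with the consumer work abstracted behind a Callable:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public final class BoundedCall {

    // Runs the given work, but returns the fallback if it exceeds the timeout.
    public static long callWithTimeout(Callable<Long> work, int timeoutSeconds, long fallback) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<Long> future = exec.submit(work);
        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            return fallback;
        }
        catch (Exception e) { // ExecutionException or TimeoutException
            return fallback;
        }
        finally {
            exec.shutdownNow(); // also interrupts a still-running metadata call
        }
    }
}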

View File

@@ -18,6 +18,7 @@ package org.springframework.cloud.stream.binder.kafka;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
@@ -54,6 +55,7 @@ import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.NoSuchBeanDefinitionException;
import org.springframework.cloud.stream.binder.AbstractMessageChannelBinder;
import org.springframework.cloud.stream.binder.BinderHeaders;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.DefaultPollableMessageSource;
import org.springframework.cloud.stream.binder.EmbeddedHeaderUtils;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
@@ -67,32 +69,32 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerPro
import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binding.MessageConverterConfigurer.PartitioningInterceptor;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
import org.springframework.context.Lifecycle;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.integration.StaticMessageHeaderAccessor;
import org.springframework.integration.acks.AcknowledgmentCallback;
import org.springframework.integration.channel.ChannelInterceptorAware;
import org.springframework.integration.core.MessageProducer;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.integration.kafka.inbound.KafkaMessageSource;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.integration.kafka.support.RawRecordHeaderErrorMessageStrategy;
import org.springframework.integration.support.AcknowledgmentCallback;
import org.springframework.integration.support.AcknowledgmentCallback.Status;
import org.springframework.integration.support.ErrorMessageStrategy;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.integration.support.StaticMessageHeaderAccessor;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;
import org.springframework.kafka.listener.config.ContainerProperties;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.DefaultKafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
@@ -105,6 +107,7 @@ import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHandler;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.MessagingException;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
@@ -130,12 +133,21 @@ public class KafkaMessageChannelBinder extends
AbstractMessageChannelBinder<ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>, KafkaTopicProvisioner>
implements ExtendedPropertiesBinder<MessageChannel, KafkaConsumerProperties, KafkaProducerProperties> {
public static final String X_EXCEPTION_FQCN = "x-exception-fqcn";
public static final String X_EXCEPTION_STACKTRACE = "x-exception-stacktrace";
public static final String X_EXCEPTION_MESSAGE = "x-exception-message";
public static final String X_ORIGINAL_TOPIC = "x-original-topic";
public static final String X_ORIGINAL_PARTITION = "x-original-partition";
public static final String X_ORIGINAL_OFFSET = "x-original-offset";
public static final String X_ORIGINAL_TIMESTAMP = "x-original-timestamp";
public static final String X_ORIGINAL_TIMESTAMP_TYPE = "x-original-timestamp-type";
private final KafkaBinderConfigurationProperties configurationProperties;
@@ -156,9 +168,9 @@ public class KafkaMessageChannelBinder extends
super(headersToMap(configurationProperties), provisioningProvider, containerCustomizer);
this.configurationProperties = configurationProperties;
if (StringUtils.hasText(configurationProperties.getTransaction().getTransactionIdPrefix())) {
this.transactionManager = new KafkaTransactionManager<>(
getProducerFactory(configurationProperties.getTransaction().getTransactionIdPrefix(),
new ExtendedProducerProperties<>(configurationProperties.getTransaction().getProducer())));
this.transactionManager = new KafkaTransactionManager<>(getProducerFactory(
configurationProperties.getTransaction().getTransactionIdPrefix(), new ExtendedProducerProperties<>(
configurationProperties.getTransaction().getProducer().getExtension())));
}
else {
this.transactionManager = null;
@@ -203,10 +215,28 @@ public class KafkaMessageChannelBinder extends
return this.extendedBindingProperties.getExtendedProducerProperties(channelName);
}
@Override
public String getDefaultsPrefix() {
return this.extendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.extendedBindingProperties.getExtendedPropertiesEntryClass();
}
@Override
protected MessageHandler createProducerMessageHandler(final ProducerDestination destination,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties, MessageChannel errorChannel)
throws Exception {
throw new IllegalStateException("The abstract binder should not call this method");
}
@Override
protected MessageHandler createProducerMessageHandler(final ProducerDestination destination,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
MessageChannel channel, MessageChannel errorChannel)
throws Exception {
/*
* IMPORTANT: With a transactional binder, individual producer properties for Kafka are
* ignored; the global binder (spring.cloud.stream.kafka.binder.transaction.producer.*)
@@ -226,20 +256,18 @@ public class KafkaMessageChannelBinder extends
return partitionsFor;
});
this.topicsInUse.put(destination.getName(), new TopicInformation(null, partitions));
if (producerProperties.getPartitionCount() < partitions.size()) {
if (producerProperties.isPartitioned() && producerProperties.getPartitionCount() < partitions.size()) {
if (this.logger.isInfoEnabled()) {
this.logger.info("The `partitionCount` of the producer for topic " + destination.getName() + " is "
+ producerProperties.getPartitionCount() + ", smaller than the actual partition count of "
+ partitions.size() + " of the topic. The larger number will be used instead.");
+ partitions.size() + " for the topic. The larger number will be used instead.");
}
/*
* This is dirty; it relies on the fact that we, and the partition interceptor, share a
* hard reference to the producer properties instance. But I don't see another way to fix
* it since the interceptor has already been added to the channel, and we don't have
* access to the channel here; if we did, we could inject the proper partition count
* there. TODO: Consider this when doing the 2.0 binder restructuring.
*/
producerProperties.setPartitionCount(partitions.size());
List<ChannelInterceptor> interceptors = ((ChannelInterceptorAware) channel).getChannelInterceptors();
interceptors.forEach(interceptor -> {
if (interceptor instanceof PartitioningInterceptor) {
((PartitioningInterceptor) interceptor).setPartitionCount(partitions.size());
}
});
}
KafkaTemplate<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFB);
@@ -322,6 +350,14 @@ public class KafkaMessageChannelBinder extends
return producerFactory;
}
@Override
protected boolean useNativeEncoding(ExtendedProducerProperties<KafkaProducerProperties> producerProperties) {
if (this.transactionManager != null) {
return this.configurationProperties.getTransaction().getProducer().isUseNativeEncoding();
}
return super.useNativeEncoding(producerProperties);
}
@Override
@SuppressWarnings("unchecked")
protected MessageProducer createConsumerEndpoint(final ConsumerDestination destination, final String group,
@@ -398,14 +434,14 @@ public class KafkaMessageChannelBinder extends
// end of these won't be needed...
if (!extendedConsumerProperties.getExtension().isAutoCommitOffset()) {
messageListenerContainer.getContainerProperties()
.setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
.setAckMode(ContainerProperties.AckMode.MANUAL);
messageListenerContainer.getContainerProperties().setAckOnError(false);
}
else {
messageListenerContainer.getContainerProperties()
.setAckOnError(isAutoCommitOnError(extendedConsumerProperties));
if (extendedConsumerProperties.getExtension().isAckEachRecord()) {
messageListenerContainer.getContainerProperties().setAckMode(AckMode.RECORD);
messageListenerContainer.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);
}
}
if (this.logger.isDebugEnabled()) {
@@ -640,6 +676,7 @@ public class KafkaMessageChannelBinder extends
return new RawRecordHeaderErrorMessageStrategy();
}
@SuppressWarnings("unchecked")
@Override
protected MessageHandler getErrorMessageHandler(final ConsumerDestination destination, final String group,
final ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
@@ -652,11 +689,11 @@ public class KafkaMessageChannelBinder extends
new ExtendedProducerProperties<>(dlqProducerProperties));
final KafkaTemplate<?,?> kafkaTemplate = new KafkaTemplate<>(producerFactory);
@SuppressWarnings({"unchecked", "rawtypes"})
@SuppressWarnings("rawtypes")
DlqSender<?,?> dlqSender = new DlqSender(kafkaTemplate);
return message -> {
@SuppressWarnings("unchecked")
final ConsumerRecord<Object, Object> record = message.getHeaders()
.get(KafkaHeaders.RAW_DATA, ConsumerRecord.class);
@@ -683,11 +720,25 @@ public class KafkaMessageChannelBinder extends
Headers kafkaHeaders = new RecordHeaders(record.headers().toArray());
AtomicReference<ConsumerRecord<?, ?>> recordToSend = new AtomicReference<>(record);
if (message.getPayload() instanceof Throwable) {
Throwable throwable = (Throwable) message.getPayload();
HeaderMode headerMode = properties.getHeaderMode();
if (headerMode == null || HeaderMode.headers.equals(headerMode)) {
kafkaHeaders.add(new RecordHeader(X_ORIGINAL_TOPIC,
record.topic().getBytes(StandardCharsets.UTF_8)));
kafkaHeaders.add(
new RecordHeader(X_ORIGINAL_TOPIC, record.topic().getBytes(StandardCharsets.UTF_8)));
kafkaHeaders.add(new RecordHeader(X_ORIGINAL_PARTITION,
ByteBuffer.allocate(Integer.BYTES).putInt(record.partition()).array()));
kafkaHeaders.add(new RecordHeader(X_ORIGINAL_OFFSET,
ByteBuffer.allocate(Long.BYTES).putLong(record.offset()).array()));
kafkaHeaders.add(new RecordHeader(X_ORIGINAL_TIMESTAMP,
ByteBuffer.allocate(Long.BYTES).putLong(record.timestamp()).array()));
kafkaHeaders.add(new RecordHeader(X_ORIGINAL_TIMESTAMP_TYPE,
record.timestampType().toString().getBytes(StandardCharsets.UTF_8)));
kafkaHeaders.add(new RecordHeader(X_EXCEPTION_FQCN,
throwable.getClass().getName().getBytes(StandardCharsets.UTF_8)));
kafkaHeaders.add(new RecordHeader(X_EXCEPTION_MESSAGE,
throwable.getMessage().getBytes(StandardCharsets.UTF_8)));
kafkaHeaders.add(new RecordHeader(X_EXCEPTION_STACKTRACE,
@@ -696,14 +747,18 @@ public class KafkaMessageChannelBinder extends
else if (HeaderMode.embeddedHeaders.equals(headerMode)) {
try {
MessageValues messageValues = EmbeddedHeaderUtils
.extractHeaders(MessageBuilder.withPayload((byte[]) record.value()).build(),
false);
.extractHeaders(MessageBuilder.withPayload((byte[]) record.value()).build(), false);
messageValues.put(X_ORIGINAL_TOPIC, record.topic());
messageValues.put(X_ORIGINAL_PARTITION, record.partition());
messageValues.put(X_ORIGINAL_OFFSET, record.offset());
messageValues.put(X_ORIGINAL_TIMESTAMP, record.timestamp());
messageValues.put(X_ORIGINAL_TIMESTAMP_TYPE, record.timestampType().toString());
messageValues.put(X_EXCEPTION_FQCN, throwable.getClass().getName());
messageValues.put(X_EXCEPTION_MESSAGE, throwable.getMessage());
messageValues.put(X_EXCEPTION_STACKTRACE, getStackTraceAsString(throwable));
final String[] headersToEmbed = new ArrayList<>(messageValues.keySet()).toArray(
new String[messageValues.keySet().size()]);
final String[] headersToEmbed = new ArrayList<>(messageValues.keySet())
.toArray(new String[messageValues.keySet().size()]);
byte[] payload = EmbeddedHeaderUtils.embedHeaders(messageValues,
EmbeddedHeaderUtils.headersToEmbed(headersToEmbed));
recordToSend.set(new ConsumerRecord<Object, Object>(record.topic(), record.partition(),
@@ -715,8 +770,7 @@ public class KafkaMessageChannelBinder extends
}
}
String dlqName = StringUtils.hasText(kafkaConsumerProperties.getDlqName())
? kafkaConsumerProperties.getDlqName()
: "error." + record.topic() + "." + group;
? kafkaConsumerProperties.getDlqName() : "error." + record.topic() + "." + group;
dlqSender.sendToDlq(recordToSend.get(), kafkaHeaders, dlqName);
};
}
@@ -747,10 +801,10 @@ public class KafkaMessageChannelBinder extends
((MessagingException) message.getPayload()).getFailedMessage());
if (ack != null) {
if (isAutoCommitOnError(properties)) {
ack.acknowledge(Status.REJECT);
ack.acknowledge(AcknowledgmentCallback.Status.REJECT);
}
else {
ack.acknowledge(Status.REQUEUE);
ack.acknowledge(AcknowledgmentCallback.Status.REQUEUE);
}
}
}
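
With plain headers mode, the DLQ path above enriches each dead-lettered record with x-original-* and x-exception-* headers, writing the numeric values as fixed-width byte arrays via ByteBuffer. A small consumer-side sketch for reading them back; the header names are the constants declared earlier in this file, and the decoding simply mirrors how they are written above:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public final class DlqHeaders {

    public static String originalTopic(ConsumerRecord<?, ?> record) {
        Header header = record.headers().lastHeader("x-original-topic");
        return header == null ? null : new String(header.value(), StandardCharsets.UTF_8);
    }

    public static Integer originalPartition(ConsumerRecord<?, ?> record) {
        Header header = record.headers().lastHeader("x-original-partition");
        // written as ByteBuffer.allocate(Integer.BYTES).putInt(partition).array()
        return header == null ? null : ByteBuffer.wrap(header.value()).getInt();
    }

    public static Long originalOffset(ConsumerRecord<?, ?> record) {
        Header header = record.headers().lastHeader("x-original-offset");
        // written as ByteBuffer.allocate(Long.BYTES).putLong(offset).array()
        return header == null ? null : ByteBuffer.wrap(header.value()).getLong();
    }
}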

View File

@@ -18,6 +18,8 @@ package org.springframework.cloud.stream.binder.kafka.config;
import java.io.IOException;
import javax.security.auth.login.AppConfigurationEntry;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
@@ -59,13 +61,15 @@ import org.springframework.lang.Nullable;
*/
@Configuration
@ConditionalOnMissingBean(Binder.class)
@Import({KafkaAutoConfiguration.class, PropertyPlaceholderAutoConfiguration.class, KafkaBinderHealthIndicatorConfiguration.class })
@Import({ KafkaAutoConfiguration.class, PropertyPlaceholderAutoConfiguration.class,
KafkaBinderHealthIndicatorConfiguration.class })
@EnableConfigurationProperties({ KafkaExtendedBindingProperties.class })
public class KafkaBinderConfiguration {
@Autowired
private KafkaExtendedBindingProperties kafkaExtendedBindingProperties;
@SuppressWarnings("rawtypes")
@Autowired
private ProducerListener producerListener;
@@ -82,9 +86,10 @@ public class KafkaBinderConfiguration {
return new KafkaTopicProvisioner(configurationProperties, this.kafkaProperties);
}
@SuppressWarnings("unchecked")
@Bean
KafkaMessageChannelBinder kafkaMessageChannelBinder(KafkaBinderConfigurationProperties configurationProperties,
KafkaTopicProvisioner provisioningProvider, @Nullable ListenerContainerCustomizer<AbstractMessageListenerContainer<?,?>> listenerContainerCustomizer) {
KafkaTopicProvisioner provisioningProvider, @Nullable ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> listenerContainerCustomizer) {
KafkaMessageChannelBinder kafkaMessageChannelBinder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider, listenerContainerCustomizer);
@@ -93,6 +98,7 @@ public class KafkaBinderConfiguration {
return kafkaMessageChannelBinder;
}
@SuppressWarnings("rawtypes")
@Bean
@ConditionalOnMissingBean(ProducerListener.class)
ProducerListener producerListener() {
@@ -100,8 +106,34 @@ public class KafkaBinderConfiguration {
}
@Bean
public KafkaJaasLoginModuleInitializer jaasInitializer() throws IOException {
return new KafkaJaasLoginModuleInitializer();
@ConditionalOnMissingBean(KafkaJaasLoginModuleInitializer.class)
public KafkaJaasLoginModuleInitializer jaasInitializer(KafkaBinderConfigurationProperties configurationProperties) throws IOException {
KafkaJaasLoginModuleInitializer kafkaJaasLoginModuleInitializer = new KafkaJaasLoginModuleInitializer();
JaasLoginModuleConfiguration jaas = configurationProperties.getJaas();
if (jaas != null) {
kafkaJaasLoginModuleInitializer.setLoginModule(jaas.getLoginModule());
KafkaJaasLoginModuleInitializer.ControlFlag controlFlag = null;
AppConfigurationEntry.LoginModuleControlFlag controlFlagValue = jaas.getControlFlagValue();
if (AppConfigurationEntry.LoginModuleControlFlag.OPTIONAL.equals(controlFlagValue)) {
controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.OPTIONAL;
}
else if (AppConfigurationEntry.LoginModuleControlFlag.REQUIRED.equals(controlFlagValue)) {
controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.REQUIRED;
}
else if (AppConfigurationEntry.LoginModuleControlFlag.REQUISITE.equals(controlFlagValue)) {
controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.REQUISITE;
}
else if (AppConfigurationEntry.LoginModuleControlFlag.SUFFICIENT.equals(controlFlagValue)) {
controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.SUFFICIENT;
}
if (controlFlag != null) {
kafkaJaasLoginModuleInitializer.setControlFlag(controlFlag);
}
kafkaJaasLoginModuleInitializer.setOptions(jaas.getOptions());
}
return kafkaJaasLoginModuleInitializer;
}
/**
@@ -125,6 +157,7 @@ public class KafkaBinderConfiguration {
}
@SuppressWarnings("unused")
public static class JaasConfigurationProperties {
private JaasLoginModuleConfiguration kafka;
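
The jaasInitializer bean above maps the standard javax.security LoginModuleControlFlag constants onto the initializer's own ControlFlag enum through an if/else chain. The same mapping could be table-driven; a sketch of that alternative (my simplification, not the project's code, and since LoginModuleControlFlag is a plain class rather than an enum, a HashMap is used instead of an EnumMap):

import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag;
import org.springframework.kafka.security.jaas.KafkaJaasLoginModuleInitializer.ControlFlag;

public final class ControlFlags {

    private static final Map<LoginModuleControlFlag, ControlFlag> MAPPING = new HashMap<>();

    static {
        MAPPING.put(LoginModuleControlFlag.OPTIONAL, ControlFlag.OPTIONAL);
        MAPPING.put(LoginModuleControlFlag.REQUIRED, ControlFlag.REQUIRED);
        MAPPING.put(LoginModuleControlFlag.REQUISITE, ControlFlag.REQUISITE);
        MAPPING.put(LoginModuleControlFlag.SUFFICIENT, ControlFlag.SUFFICIENT);
    }

    // Returns null for an unset flag, matching the "leave the default" branch above.
    public static ControlFlag translate(LoginModuleControlFlag flag) {
        return MAPPING.get(flag);
    }
}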

View File

@@ -46,7 +46,8 @@ import static org.assertj.core.api.Assertions.assertThat;
@TestPropertySource(properties = {
"spring.cloud.stream.kafka.bindings.input.consumer.admin.replication-factor=2",
"spring.cloud.stream.kafka.bindings.input.consumer.admin.replicas-assignments.0=0,1",
"spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0" })
"spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0",
"spring.main.allow-bean-definition-overriding=true"})
@EnableIntegration
public class AdminConfigTests {

View File

@@ -73,7 +73,7 @@ public class KafkaBinderHealthIndicatorTest {
@Test
public void kafkaBinderIsUp() {
final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group1-healthIndicator", partitions));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.UP);
@@ -82,7 +82,7 @@ public class KafkaBinderHealthIndicatorTest {
@Test
public void kafkaBinderIsDown() {
final List<PartitionInfo> partitions = partitions(new Node(-1, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group2-healthIndicator", partitions));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.DOWN);
@@ -91,7 +91,7 @@ public class KafkaBinderHealthIndicatorTest {
@Test(timeout = 5000)
public void kafkaBinderDoesNotAnswer() {
final List<PartitionInfo> partitions = partitions(new Node(-1, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group3-healthIndicator", partitions));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC)).willAnswer(new Answer<Object>() {
@Override
@@ -110,7 +110,7 @@ public class KafkaBinderHealthIndicatorTest {
@Test
public void createsConsumerOnceWhenInvokedMultipleTimes() {
final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group4-healthIndicator", partitions));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
indicator.health();

View File

@@ -84,11 +84,11 @@ public class KafkaBinderMetricsTest {
public void shouldIndicateLag() {
org.mockito.BDDMockito.given(consumer.committed(ArgumentMatchers.any(TopicPartition.class))).willReturn(new OffsetAndMetadata(500));
List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group1-metrics", partitions));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
metrics.bindTo(meterRegistry);
assertThat(meterRegistry.getMeters()).hasSize(1);
assertThat(meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group").tag("topic", TEST_TOPIC).timeGauge()
assertThat(meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group1-metrics").tag("topic", TEST_TOPIC).timeGauge()
.value(TimeUnit.MILLISECONDS)).isEqualTo(500.0);
}
@@ -100,22 +100,22 @@ public class KafkaBinderMetricsTest {
org.mockito.BDDMockito.given(consumer.endOffsets(ArgumentMatchers.anyCollection())).willReturn(endOffsets);
org.mockito.BDDMockito.given(consumer.committed(ArgumentMatchers.any(TopicPartition.class))).willReturn(new OffsetAndMetadata(500));
List<PartitionInfo> partitions = partitions(new Node(0, null, 0), new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group2-metrics", partitions));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
metrics.bindTo(meterRegistry);
assertThat(meterRegistry.getMeters()).hasSize(1);
assertThat(meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group").tag("topic", TEST_TOPIC).timeGauge()
assertThat(meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group2-metrics").tag("topic", TEST_TOPIC).timeGauge()
.value(TimeUnit.MILLISECONDS)).isEqualTo(1000.0);
}
@Test
public void shouldIndicateFullLagForNotCommittedGroups() {
List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group3-metrics", partitions));
org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
metrics.bindTo(meterRegistry);
assertThat(meterRegistry.getMeters()).hasSize(1);
assertThat(meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group").tag("topic", TEST_TOPIC).timeGauge()
assertThat(meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group3-metrics").tag("topic", TEST_TOPIC).timeGauge()
.value(TimeUnit.MILLISECONDS)).isEqualTo(1000.0);
}
@@ -130,11 +130,11 @@ public class KafkaBinderMetricsTest {
@Test
public void createsConsumerOnceWhenInvokedMultipleTimes() {
final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group4-metrics", partitions));
metrics.bindTo(meterRegistry);
TimeGauge gauge = meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group").tag("topic", TEST_TOPIC).timeGauge();
TimeGauge gauge = meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group4-metrics").tag("topic", TEST_TOPIC).timeGauge();
gauge.value(TimeUnit.MILLISECONDS);
assertThat(gauge.value(TimeUnit.MILLISECONDS)).isEqualTo(1000.0);
@@ -147,11 +147,11 @@ public class KafkaBinderMetricsTest {
.willReturn(consumer);
final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group5-metrics", partitions));
metrics.bindTo(meterRegistry);
TimeGauge gauge = meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group").tag("topic", TEST_TOPIC).timeGauge();
TimeGauge gauge = meterRegistry.get(KafkaBinderMetrics.METRIC_NAME).tag("group", "group5-metrics").tag("topic", TEST_TOPIC).timeGauge();
assertThat(gauge.value(TimeUnit.MILLISECONDS)).isEqualTo(0);
assertThat(gauge.value(TimeUnit.MILLISECONDS)).isEqualTo(1000.0);

View File

@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
@@ -30,6 +31,7 @@ import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import com.fasterxml.jackson.databind.ObjectMapper;
@@ -48,6 +50,7 @@ import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.record.TimestampType;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.Deserializer;
@@ -73,6 +76,7 @@ import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.PartitionCapableBinderTests;
import org.springframework.cloud.stream.binder.PartitionTestSupport;
import org.springframework.cloud.stream.binder.PollableSource;
import org.springframework.cloud.stream.binder.RequeueCurrentMessageException;
import org.springframework.cloud.stream.binder.Spy;
import org.springframework.cloud.stream.binder.TestUtils;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
@@ -80,6 +84,7 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerPro
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.binding.MessageConverterConfigurer.PartitioningInterceptor;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.provisioning.ProvisioningException;
import org.springframework.context.ApplicationContext;
@@ -100,8 +105,8 @@ import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
@@ -109,7 +114,7 @@ import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionInitialOffset;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
@@ -118,11 +123,11 @@ import org.springframework.messaging.MessageHandlingException;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.MessagingException;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;
@@ -149,7 +154,7 @@ public class KafkaBinderTests extends
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class.getSimpleName();
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10, "error.pollableDlq.group");
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10, "error.pollableDlq.group-pcWithDlq");
private KafkaTestBinder binder;
@@ -209,7 +214,7 @@ public class KafkaBinderTests extends
private KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties(
new TestKafkaProperties());
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka().getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -223,7 +228,7 @@ public class KafkaBinderTests extends
return consumerFactory().createConsumer().partitionsFor(topic).size();
}
private void invokeCreateTopic(String topic, int partitions, int replicationFactor) throws Throwable {
private void invokeCreateTopic(String topic, int partitions, int replicationFactor) throws Exception {
NewTopic newTopic = new NewTopic(topic, partitions,
(short) replicationFactor);
@@ -242,7 +247,7 @@ public class KafkaBinderTests extends
timeoutMultiplier = Double.parseDouble(multiplier);
}
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
BrokerAddress[] brokerAddresses = embeddedKafka.getEmbeddedKafka().getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
@@ -334,9 +339,9 @@ public class KafkaBinderTests extends
Assertions.assertThat(inboundMessageRef.get().getHeaders().get(BinderHeaders.BINDER_ORIGINAL_CONTENT_TYPE)).isNull();
Assertions.assertThat(inboundMessageRef.get().getHeaders().get(MessageHeaders.CONTENT_TYPE))
.isEqualTo(MimeTypeUtils.TEXT_PLAIN);
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("foo")).isInstanceOf(MimeType.class);
MimeType actual = (MimeType) inboundMessageRef.get().getHeaders().get("foo");
Assertions.assertThat(actual).isEqualTo(MimeTypeUtils.TEXT_PLAIN);
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("foo")).isInstanceOf(String.class);
String actual = (String) inboundMessageRef.get().getHeaders().get("foo");
Assertions.assertThat(actual).isEqualTo(MimeTypeUtils.TEXT_PLAIN.toString());
producerBinding.unbind();
consumerBinding.unbind();
}
@@ -493,7 +498,7 @@ public class KafkaBinderTests extends
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC))
.isEqualTo("foo.bar".getBytes(StandardCharsets.UTF_8));
assertThat(new String((byte[]) receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_MESSAGE)))
.startsWith("failed to send Message to channel 'input'");
.startsWith("Dispatcher failed to deliver Message; nested exception is java.lang.RuntimeException: fail");
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_STACKTRACE))
.isNotNull();
binderBindUnbindLatency();
@@ -660,21 +665,51 @@ public class KafkaBinderTests extends
assertThat(receivedMessage.getPayload()).isEqualTo(testMessagePayload.getBytes());
if (HeaderMode.embeddedHeaders.equals(headerMode)) {
assertThat(handler.getInvocationCount()).isEqualTo(consumerProperties.getMaxAttempts());
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC))
.isEqualTo(producerName);
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_PARTITION)).isEqualTo(0);
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_OFFSET))
.isEqualTo(0);
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TIMESTAMP)).isNotNull();
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TIMESTAMP_TYPE))
.isEqualTo(TimestampType.CREATE_TIME.toString());
assertThat(((String) receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_MESSAGE)))
.startsWith("failed to send Message to channel 'input'");
.startsWith("Dispatcher failed to deliver Message; nested exception is java.lang.RuntimeException: fail");
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_STACKTRACE))
.isNotNull();
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_FQCN)).isNotNull();
}
else if (!HeaderMode.none.equals(headerMode)) {
assertThat(handler.getInvocationCount()).isEqualTo(consumerProperties.getMaxAttempts());
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC))
.isEqualTo(producerName.getBytes(StandardCharsets.UTF_8));
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_PARTITION))
.isEqualTo(ByteBuffer.allocate(Integer.BYTES).putInt(0).array());
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_OFFSET))
.isEqualTo(ByteBuffer.allocate(Long.BYTES).putLong(0).array());
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TIMESTAMP)).isNotNull();
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TIMESTAMP_TYPE))
.isEqualTo(TimestampType.CREATE_TIME.toString().getBytes());
assertThat(new String((byte[]) receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_MESSAGE)))
.startsWith("failed to send Message to channel 'input'");
.startsWith("Dispatcher failed to deliver Message; nested exception is java.lang.RuntimeException: fail");
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_STACKTRACE))
.isNotNull();
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_EXCEPTION_FQCN)).isNotNull();
}
else {
assertThat(receivedMessage.getHeaders().get(KafkaMessageChannelBinder.X_ORIGINAL_TOPIC)).isNull();
@@ -1132,7 +1167,7 @@ public class KafkaBinderTests extends
AbstractMessageListenerContainer<?, ?> container = TestUtils.getPropertyValue(consumerBinding,
"lifecycle.messageListenerContainer", AbstractMessageListenerContainer.class);
assertThat(container.getContainerProperties().getAckMode()).isEqualTo(AckMode.BATCH);
assertThat(container.getContainerProperties().getAckMode()).isEqualTo(ContainerProperties.AckMode.BATCH);
String testPayload1 = "foo" + UUID.randomUUID().toString();
Message<?> message1 = org.springframework.integration.support.MessageBuilder.withPayload(
@@ -1179,7 +1214,7 @@ public class KafkaBinderTests extends
AbstractMessageListenerContainer<?, ?> container = TestUtils.getPropertyValue(consumerBinding2,
"lifecycle.messageListenerContainer", AbstractMessageListenerContainer.class);
assertThat(container.getContainerProperties().getAckMode()).isEqualTo(AckMode.RECORD);
assertThat(container.getContainerProperties().getAckMode()).isEqualTo(ContainerProperties.AckMode.RECORD);
Message<?> receivedMessage1 = receive(inbound1);
assertThat(receivedMessage1).isNotNull();
@@ -1221,6 +1256,7 @@ public class KafkaBinderTests extends
producerProperties.setPartitionKeyExpression(spelExpressionParser.parseExpression("payload"));
producerProperties.setPartitionSelectorExpression(spelExpressionParser.parseExpression("hashCode()"));
producerProperties.setPartitionCount(3);
invokeCreateTopic("output", 6, 1);
DirectChannel output = createBindableChannel("output", createProducerBindingProperties(producerProperties));
output.setBeanName("test.output");
@@ -1232,7 +1268,14 @@ public class KafkaBinderTests extends
}
catch (UnsupportedOperationException ignored) {
}
List<ChannelInterceptor> interceptors = output.getChannelInterceptors();
AtomicInteger count = new AtomicInteger();
interceptors.forEach(interceptor -> {
if (interceptor instanceof PartitioningInterceptor) {
count.set(TestUtils.getPropertyValue(interceptor, "partitionHandler.partitionCount", Integer.class));
}
});
assertThat(count.get()).isEqualTo(6);
Message<Integer> message2 = org.springframework.integration.support.MessageBuilder.withPayload(2)
.setHeader(IntegrationMessageHeaderAccessor.CORRELATION_ID, "foo")
.setHeader(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER, 42)
@@ -2423,7 +2466,7 @@ public class KafkaBinderTests extends
assertThat(inbound.getHeaders().get(BinderHeaders.NATIVE_HEADERS_PRESENT)).isNull();
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testSendAndReceiveWithMixedMode", "false",
embeddedKafka);
embeddedKafka.getEmbeddedKafka());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
@@ -2460,9 +2503,9 @@ public class KafkaBinderTests extends
PollableSource<MessageHandler> inboundBindTarget = new DefaultPollableMessageSource(this.messageConverter);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProps = createConsumerProperties();
consumerProps.setMultiplex(true);
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer("pollable,anotherOne", "group",
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer("pollable,anotherOne", "group-polledConsumer",
inboundBindTarget, consumerProps);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
KafkaTemplate template = new KafkaTemplate(new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollable", "testPollable");
boolean polled = inboundBindTarget.poll(m -> {
@@ -2492,6 +2535,38 @@ public class KafkaBinderTests extends
binding.unbind();
}
@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testPolledConsumerRequeue() throws Exception {
KafkaTestBinder binder = getBinder();
PollableSource<MessageHandler> inboundBindTarget = new DefaultPollableMessageSource(this.messageConverter);
ExtendedConsumerProperties<KafkaConsumerProperties> properties = createConsumerProperties();
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer("pollableRequeue", "group",
inboundBindTarget, properties);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
KafkaTemplate template = new KafkaTemplate(new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollableRequeue", "testPollable");
try {
boolean polled = false;
int n = 0;
while (n++ < 100 && !polled) {
polled = inboundBindTarget.poll(m -> {
assertThat(m.getPayload()).isEqualTo("testPollable".getBytes());
throw new RequeueCurrentMessageException();
});
}
fail("Expected exception");
}
catch (MessageHandlingException e) {
assertThat(e.getCause()).isInstanceOf(RequeueCurrentMessageException.class);
}
boolean polled = inboundBindTarget.poll(m -> {
assertThat(m.getPayload()).isEqualTo("testPollable".getBytes());
});
assertThat(polled).isTrue();
binding.unbind();
}
@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testPolledConsumerWithDlq() throws Exception {
@@ -2501,8 +2576,8 @@ public class KafkaBinderTests extends
properties.setMaxAttempts(2);
properties.setBackOffInitialInterval(0);
properties.getExtension().setEnableDlq(true);
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer("pollableDlq", "group",
Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
Binding<PollableSource<MessageHandler>> binding = binder.bindPollableConsumer("pollableDlq", "group-pcWithDlq",
inboundBindTarget, properties);
KafkaTemplate template = new KafkaTemplate(new DefaultKafkaProducerFactory<>(producerProps));
template.send("pollableDlq", "testPollableDLQ");
@@ -2518,12 +2593,12 @@ public class KafkaBinderTests extends
catch (MessageHandlingException e) {
assertThat(e.getCause().getMessage()).isEqualTo("test DLQ");
}
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("dlq", "false", embeddedKafka);
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("dlq", "false", embeddedKafka.getEmbeddedKafka());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
ConsumerFactory cf = new DefaultKafkaConsumerFactory<>(consumerProps);
Consumer consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "error.pollableDlq.group");
ConsumerRecord deadLetter = KafkaTestUtils.getSingleRecord(consumer, "error.pollableDlq.group");
embeddedKafka.getEmbeddedKafka().consumeFromAnEmbeddedTopic(consumer, "error.pollableDlq.group-pcWithDlq");
ConsumerRecord deadLetter = KafkaTestUtils.getSingleRecord(consumer, "error.pollableDlq.group-pcWithDlq");
assertThat(deadLetter).isNotNull();
assertThat(deadLetter.value()).isEqualTo("testPollableDLQ");
binding.unbind();
@@ -2534,7 +2609,7 @@ public class KafkaBinderTests extends
@Test
public void testTopicPatterns() throws Exception {
try (AdminClient admin = AdminClient.create(Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
embeddedKafka.getBrokersAsString()))) {
embeddedKafka.getEmbeddedKafka().getBrokersAsString()))) {
admin.createTopics(Collections.singletonList(new NewTopic("topicPatterns.1", 1, (short) 1))).all().get();
Binder binder = getBinder();
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
@@ -2549,7 +2624,7 @@ public class KafkaBinderTests extends
Binding<MessageChannel> consumerBinding = binder.bindConsumer("topicPatterns\\..*",
"testTopicPatterns", moduleInputChannel, consumerProperties);
DefaultKafkaProducerFactory pf = new DefaultKafkaProducerFactory(
KafkaTestUtils.producerProps(embeddedKafka));
KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka()));
KafkaTemplate template = new KafkaTemplate(pf);
template.send("topicPatterns.1", "foo");
assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
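
The new testPolledConsumerRequeue above pins down the requeue behavior for polled consumers: a handler that throws RequeueCurrentMessageException causes the binder to rewind the offset so the same record is delivered again on a later poll, while a handler that returns normally finally consumes it. A minimal application-side sketch, assuming an injected PollableMessageSource named source and a hypothetical tryProcess() method:

import org.springframework.cloud.stream.binder.PollableMessageSource;
import org.springframework.cloud.stream.binder.RequeueCurrentMessageException;
import org.springframework.messaging.Message;

public class RequeuingPoller {

    private final PollableMessageSource source;

    public RequeuingPoller(PollableMessageSource source) {
        this.source = source;
    }

    public boolean pollOnce() {
        return this.source.poll(message -> {
            if (!tryProcess(message)) {
                // rewinds the consumer; the record is re-delivered on a later poll
                throw new RequeueCurrentMessageException();
            }
        });
    }

    private boolean tryProcess(Message<?> message) {
        return true; // hypothetical business logic
    }
}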

View File

@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka;
import java.lang.reflect.Method;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
@@ -36,6 +37,7 @@ import org.apache.kafka.common.TopicPartition;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
@@ -82,25 +84,25 @@ public class KafkaBinderUnitTests {
method.setAccessible(true);
// test default for anon
Object factory = method.invoke(binder, true, "foo", ecp);
Object factory = method.invoke(binder, true, "foo-1", ecp);
Map<?, ?> configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("latest");
// test default for named
factory = method.invoke(binder, false, "foo", ecp);
factory = method.invoke(binder, false, "foo-2", ecp);
configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("earliest");
// binder level setting
binderConfigurationProperties.setConfiguration(
Collections.singletonMap(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"));
factory = method.invoke(binder, false, "foo", ecp);
factory = method.invoke(binder, false, "foo-3", ecp);
configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("latest");
// consumer level setting
consumerProps.setConfiguration(Collections.singletonMap(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"));
factory = method.invoke(binder, false, "foo", ecp);
factory = method.invoke(binder, false, "foo-4", ecp);
configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("earliest");
}
@@ -131,38 +133,38 @@ public class KafkaBinderUnitTests {
@Test
public void testOffsetResetWithGroupManagementEarliest() throws Exception {
testOffsetResetWithGroupManagement(true, true);
testOffsetResetWithGroupManagement(true, true, "foo-100", "testOffsetResetWithGroupManagementEarliest");
}
@Test
public void testOffsetResetWithGroupManagementLatest() throws Throwable {
testOffsetResetWithGroupManagement(false, true);
testOffsetResetWithGroupManagement(false, true, "foo-101", "testOffsetResetWithGroupManagementLatest");
}
@Test
public void testOffsetResetWithManualAssignmentEarliest() throws Exception {
testOffsetResetWithGroupManagement(true, false);
testOffsetResetWithGroupManagement(true, false, "foo-102", "testOffsetResetWithManualAssignmentEarliest");
}
@Test
public void testOffsetResetWithGroupManualAssignmentLatest() throws Throwable {
testOffsetResetWithGroupManagement(false, false);
testOffsetResetWithGroupManagement(false, false, "foo-103", "testOffsetResetWithGroupManualAssignmentLatest");
}
private void testOffsetResetWithGroupManagement(final boolean earliest, boolean groupManage) throws Exception {
private void testOffsetResetWithGroupManagement(final boolean earliest, boolean groupManage, String topic, String group) throws Exception {
final List<TopicPartition> partitions = new ArrayList<>();
partitions.add(new TopicPartition("foo", 0));
partitions.add(new TopicPartition("foo", 1));
partitions.add(new TopicPartition(topic, 0));
partitions.add(new TopicPartition(topic, 1));
KafkaBinderConfigurationProperties configurationProperties = new KafkaBinderConfigurationProperties(
new TestKafkaProperties());
KafkaTopicProvisioner provisioningProvider = mock(KafkaTopicProvisioner.class);
ConsumerDestination dest = mock(ConsumerDestination.class);
given(dest.getName()).willReturn("foo");
given(dest.getName()).willReturn(topic);
given(provisioningProvider.provisionConsumerDestination(anyString(), anyString(), any())).willReturn(dest);
final AtomicInteger part = new AtomicInteger();
willAnswer(i -> {
return partitions.stream()
.map(p -> new PartitionInfo("foo", part.getAndIncrement(), null, null, null))
.map(p -> new PartitionInfo(topic, part.getAndIncrement(), null, null, null))
.collect(Collectors.toList());
}).given(provisioningProvider).getPartitionsForTopic(anyInt(), anyBoolean(), any());
@SuppressWarnings("unchecked")
@@ -176,14 +178,14 @@ public class KafkaBinderUnitTests {
Thread.currentThread().interrupt();
}
return new ConsumerRecords<>(Collections.emptyMap());
}).given(consumer).poll(anyLong());
}).given(consumer).poll(any(Duration.class));
willAnswer(i -> {
((org.apache.kafka.clients.consumer.ConsumerRebalanceListener) i.getArgument(1))
.onPartitionsAssigned(partitions);
latch.countDown();
latch.countDown();
return null;
}).given(consumer).subscribe(eq(Collections.singletonList("foo")),
}).given(consumer).subscribe(eq(Collections.singletonList(topic)),
any(org.apache.kafka.clients.consumer.ConsumerRebalanceListener.class));
willAnswer(i -> {
latch.countDown();
@@ -193,7 +195,7 @@ public class KafkaBinderUnitTests {
@Override
protected ConsumerFactory<?, ?> createKafkaConsumerFactory(boolean anonymous, String consumerGroup,
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties) {
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties) {
return new ConsumerFactory<byte[], byte[]>() {
@@ -212,6 +214,11 @@ public class KafkaBinderUnitTests {
return consumer;
}
@Override
public Consumer<byte[], byte[]> createConsumer(String groupId, String clientIdPrefix, String clientIdSuffix) {
return consumer;
}
@Override
public boolean isAutoCommit() {
return false;
@@ -222,7 +229,7 @@ public class KafkaBinderUnitTests {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
earliest ? "earliest" : "latest");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "bar");
props.put(ConsumerConfig.GROUP_ID_CONFIG, group);
return props;
}
@@ -240,7 +247,7 @@ public class KafkaBinderUnitTests {
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = new ExtendedConsumerProperties<KafkaConsumerProperties>(
extension);
consumerProperties.setInstanceCount(1);
binder.bindConsumer("foo", "bar", channel, consumerProperties);
Binding<MessageChannel> messageChannelBinding = binder.bindConsumer(topic, group, channel, consumerProperties);
assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
if (groupManage) {
if (earliest) {
@@ -260,7 +267,7 @@ public class KafkaBinderUnitTests {
verify(consumer).seek(partitions.get(1), Long.MAX_VALUE);
}
}
messageChannelBinding.unbind();
}
}
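
The stubbing changes above track the spring-kafka 2.2 API shift from Consumer.poll(long) to Consumer.poll(Duration). A self-contained sketch of stubbing the new signature with Mockito, in the same style the test now uses:

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.BDDMockito.willAnswer;
import static org.mockito.Mockito.mock;

public final class PollStubSketch {

    @SuppressWarnings("unchecked")
    public static Consumer<byte[], byte[]> emptyPollingConsumer() {
        Consumer<byte[], byte[]> consumer = mock(Consumer.class);
        // 2.2 containers call poll(Duration); stubbing poll(long) would never match
        willAnswer(i -> new ConsumerRecords<>(Collections.emptyMap()))
                .given(consumer).poll(any(Duration.class));
        return consumer;
    }
}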

View File

@@ -33,11 +33,13 @@ import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerPro
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.context.support.GenericApplicationContext;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.test.util.TestUtils;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.retry.support.RetryTemplate;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.BDDMockito.willReturn;
@@ -53,16 +55,17 @@ import static org.mockito.Mockito.spy;
public class KafkaTransactionTests {
@ClassRule
public static final KafkaEmbedded embeddedKafka = new KafkaEmbedded(1);
public static final EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1);
@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testProducerRunsInTx() {
KafkaProperties kafkaProperties = new TestKafkaProperties();
kafkaProperties.setBootstrapServers(Collections.singletonList(embeddedKafka.getBrokersAsString()));
kafkaProperties.setBootstrapServers(Collections.singletonList(embeddedKafka.getEmbeddedKafka().getBrokersAsString()));
KafkaBinderConfigurationProperties configurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
configurationProperties.getTransaction().setTransactionIdPrefix("foo-");
configurationProperties.getTransaction().getProducer().setUseNativeEncoding(true);
KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(configurationProperties, kafkaProperties);
provisioningProvider.setMetadataRetryOperations(new RetryTemplate());
final Producer mockProducer = mock(Producer.class);
@@ -93,6 +96,8 @@ public class KafkaTransactionTests {
inOrder.verify(mockProducer).commitTransaction();
inOrder.verify(mockProducer).close();
inOrder.verifyNoMoreInteractions();
assertThat(TestUtils.getPropertyValue(channel, "dispatcher.theOneHandler.useNativeEncoding", Boolean.class))
.isTrue();
}
}
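The added assertion verifies that a common producer property (useNativeEncoding) set on the transaction sub-tree actually reaches the message handler. A minimal configuration sketch, using only setters that appear in the test above; the prefix value and class name are illustrative:

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;

public class TransactionalBinderConfigSketch {

    static KafkaBinderConfigurationProperties transactionalProps() {
        KafkaBinderConfigurationProperties props =
                new KafkaBinderConfigurationProperties(new KafkaProperties());
        // A non-empty prefix makes the binder's producers transactional.
        props.getTransaction().setTransactionIdPrefix("tx-");
        // Common producer properties are now honored for the
        // transactional producer as well.
        props.getTransaction().getProducer().setUseNativeEncoding(true);
        return props;
    }
}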

View File

@@ -23,7 +23,7 @@ import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
/**
* @author Marius Bogoevici
@@ -31,14 +31,14 @@ import org.springframework.kafka.test.rule.KafkaEmbedded;
public class KafkaBinderBootstrapTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Test
public void testKafkaBinderConfiguration() throws Exception {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(WebApplicationType.NONE)
.run("--spring.cloud.stream.kafka.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
.run("--spring.cloud.stream.kafka.binder.brokers=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString(),
"--spring.cloud.stream.kafka.binder.zkNodes=" + embeddedKafka.getEmbeddedKafka().getZookeeperConnectionString());
applicationContext.close();
}
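This hunk, like the neighboring test classes, applies the spring-kafka 2.2 test-rule migration: KafkaEmbedded becomes EmbeddedKafkaRule, and the broker accessors move to the wrapped EmbeddedKafkaBroker behind getEmbeddedKafka(). A minimal sketch of the new pattern (class and method names are illustrative):

import org.junit.ClassRule;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;

public class EmbeddedKafkaRuleMigrationSketch {

    @ClassRule
    public static final EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1);

    // Previously these accessors lived directly on KafkaEmbedded,
    // e.g. embeddedKafka.getBrokersAsString().
    static String brokers() {
        return embeddedKafka.getEmbeddedKafka().getBrokersAsString();
    }
}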

View File

@@ -21,7 +21,6 @@ import java.util.Map;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
@@ -43,7 +42,7 @@ import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.MessageChannel;
import org.springframework.test.context.junit4.SpringRunner;
@@ -61,16 +60,16 @@ import static org.assertj.core.api.Assertions.assertThat;
properties = "spring.cloud.stream.bindings.input.group=" + KafkaBinderActuatorTests.TEST_CONSUMER_GROUP)
public class KafkaBinderActuatorTests {
static final String TEST_CONSUMER_GROUP = "testGroup";
static final String TEST_CONSUMER_GROUP = "testGroup-actuatorTests";
private static final String KAFKA_BROKERS_PROPERTY = "spring.kafka.bootstrap-servers";
@ClassRule
public static KafkaEmbedded kafkaEmbedded = new KafkaEmbedded(1, true);
public static EmbeddedKafkaRule kafkaEmbedded = new EmbeddedKafkaRule(1, true);
@BeforeClass
public static void setup() {
System.setProperty(KAFKA_BROKERS_PROPERTY, kafkaEmbedded.getBrokersAsString());
System.setProperty(KAFKA_BROKERS_PROPERTY, kafkaEmbedded.getEmbeddedKafka().getBrokersAsString());
}
@AfterClass

View File

@@ -0,0 +1,155 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.integration;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
properties = {"spring.cloud.stream.kafka.bindings.output.producer.configuration.key.serializer=FooSerializer.class",
"spring.cloud.stream.kafka.default.producer.configuration.key.serializer=BarSerializer.class",
"spring.cloud.stream.kafka.default.producer.configuration.value.serializer=BarSerializer.class",
"spring.cloud.stream.kafka.bindings.input.consumer.configuration.key.serializer=FooSerializer.class",
"spring.cloud.stream.kafka.default.consumer.configuration.key.serializer=BarSerializer.class",
"spring.cloud.stream.kafka.default.consumer.configuration.value.serializer=BarSerializer.class",
"spring.cloud.stream.kafka.default.producer.configuration.foo=bar",
"spring.cloud.stream.kafka.bindings.output.producer.configuration.foo=bindingSpecificPropertyShouldWinOverDefault"})
public class KafkaBinderExtendedPropertiesTest {
private static final String KAFKA_BROKERS_PROPERTY = "spring.cloud.stream.kafka.binder.brokers";
private static final String ZK_NODE_PROPERTY = "spring.cloud.stream.kafka.binder.zkNodes";
@ClassRule
public static EmbeddedKafkaRule kafkaEmbedded = new EmbeddedKafkaRule(1, true);
@BeforeClass
public static void setup() {
System.setProperty(KAFKA_BROKERS_PROPERTY, kafkaEmbedded.getEmbeddedKafka().getBrokersAsString());
System.setProperty(ZK_NODE_PROPERTY, kafkaEmbedded.getEmbeddedKafka().getZookeeperConnectionString());
}
@AfterClass
public static void clean() {
System.clearProperty(KAFKA_BROKERS_PROPERTY);
System.clearProperty(ZK_NODE_PROPERTY);
}
@Autowired
private ConfigurableApplicationContext context;
@Test
public void testKafkaBinderExtendedProperties() {
BinderFactory binderFactory = context.getBeanFactory().getBean(BinderFactory.class);
Binder<MessageChannel, ? extends ConsumerProperties, ? extends ProducerProperties> kafkaBinder =
binderFactory.getBinder("kafka", MessageChannel.class);
KafkaProducerProperties kafkaProducerProperties =
(KafkaProducerProperties)((ExtendedPropertiesBinder) kafkaBinder).getExtendedProducerProperties("output");
//binding "output" gets FooSerializer defined on the binding itself and BarSerializer through default property.
assertThat(kafkaProducerProperties.getConfiguration().get("key.serializer")).isEqualTo("FooSerializer.class");
assertThat(kafkaProducerProperties.getConfiguration().get("value.serializer")).isEqualTo("BarSerializer.class");
assertThat(kafkaProducerProperties.getConfiguration().get("foo")).isEqualTo("bindingSpecificPropertyShouldWinOverDefault");
KafkaConsumerProperties kafkaConsumerProperties =
(KafkaConsumerProperties)((ExtendedPropertiesBinder) kafkaBinder).getExtendedConsumerProperties("input");
//binding "input" gets FooSerializer defined on the binding itself and BarSerializer through default property.
assertThat(kafkaConsumerProperties.getConfiguration().get("key.serializer")).isEqualTo("FooSerializer.class");
assertThat(kafkaConsumerProperties.getConfiguration().get("value.serializer")).isEqualTo("BarSerializer.class");
KafkaProducerProperties customKafkaProducerProperties =
(KafkaProducerProperties)((ExtendedPropertiesBinder) kafkaBinder).getExtendedProducerProperties("custom-out");
//binding "output" gets BarSerializer and BarSerializer for ker.serializer/value.serializer through default properties.
assertThat(customKafkaProducerProperties.getConfiguration().get("key.serializer")).isEqualTo("BarSerializer.class");
assertThat(customKafkaProducerProperties.getConfiguration().get("value.serializer")).isEqualTo("BarSerializer.class");
// "foo" has no binding-specific override on "custom-out", so the default value wins.
assertThat(customKafkaProducerProperties.getConfiguration().get("foo")).isEqualTo("bar");
KafkaConsumerProperties customKafkaConsumerProperties =
(KafkaConsumerProperties)((ExtendedPropertiesBinder) kafkaBinder).getExtendedConsumerProperties("custom-in");
//binding "input" gets BarSerializer and BarSerializer for ker.serializer/value.serializer through default properties.
assertThat(customKafkaConsumerProperties.getConfiguration().get("key.serializer")).isEqualTo("BarSerializer.class");
assertThat(customKafkaConsumerProperties.getConfiguration().get("value.serializer")).isEqualTo("BarSerializer.class");
}
@EnableBinding(CustomBindingForExtendedPropertyTesting.class)
@EnableAutoConfiguration
public static class ExtendedPropertiesTestConfig {
@StreamListener(Sink.INPUT)
@SendTo(Processor.OUTPUT)
public String process(String payload) throws InterruptedException {
return payload;
}
@StreamListener("custom-in")
@SendTo("custom-out")
public String processCustom(String payload) throws InterruptedException {
return payload;
}
}
interface CustomBindingForExtendedPropertyTesting extends Processor {
@Output("custom-out")
MessageChannel customOut();
@Input("custom-in")
SubscribableChannel customIn();
}
}
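For completeness, a hedged sketch of how an application could lean on the kafka.default.* extended properties this test verifies, outside a test harness. The launcher class and serializer values are hypothetical; a binding-specific entry under spring.cloud.stream.kafka.bindings.<binding> always wins over the matching default:

import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;

@SpringBootApplication
public class DefaultExtendedPropertiesLauncher {

    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = new SpringApplicationBuilder(DefaultExtendedPropertiesLauncher.class)
                .web(WebApplicationType.NONE)
                // The default applies to every producer binding...
                .run("--spring.cloud.stream.kafka.default.producer.configuration.value.serializer=BarSerializer.class",
                        // ...while a binding-specific key.serializer overrides it for "output".
                        "--spring.cloud.stream.kafka.bindings.output.producer.configuration.key.serializer=FooSerializer.class");
        ctx.close();
    }
}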