Compare commits


108 Commits

Author SHA1 Message Date
Soby Chacko
6903cb0b48 Ignore testCustomAvroSerialization 2019-05-09 18:11:56 -04:00
Soby Chacko
9596ca6418 Ignore testCustomAvroSerialization 2019-05-09 17:59:59 -04:00
Soby Chacko
4f8fab8546 1.3.4.RELEASE
Revert the accidental update of versions to 1.3.5.RELEASE back to 1.3.4
2019-05-09 17:42:49 -04:00
Soby Chacko
8b106a0d08 1.3.5.RELEASE 2019-05-09 17:37:40 -04:00
Oleg Zhurakousky
b15cbd0a99 Merge pull request #613 from spring-operator/polish-urls-remaining-1.3.x
URL Cleanup
2019-03-26 14:22:45 +01:00
Oleg Zhurakousky
e3996d292d Merge pull request #603 from spring-operator/polish-urls-xml-1.3.x
URL Cleanup
2019-03-26 14:20:15 +01:00
Spring Operator
7f534e7a76 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# HTTP URLs that Could Not Be Fixed
These URLs were unable to be fixed. Please review them to see if they can be manually resolved.

* [ ] http://xslthl.sf.net (301) with 4 occurrences could not be migrated:
   ([https](https://xslthl.sf.net) result AnnotatedConnectException).
* [ ] http://exslt.org/common (404) with 1 occurrence could not be migrated:
   ([https](https://exslt.org/common) result SSLHandshakeException).

# Fixed URLs

## Fixed But Review Recommended
These URLs were fixed, but the https status was not OK. However, the https status was the same as the http request or http redirected to an https URL, so they were migrated. Your review is recommended.

* [ ] http://0.0.0.0:8082 (AnnotatedConnectException) with 2 occurrences migrated to:
  https://0.0.0.0:8082 ([https](https://0.0.0.0:8082) result AnnotatedConnectException).
* [ ] http://compose.docker.io/ (UnknownHostException) with 1 occurrence migrated to:
  https://compose.docker.io/ ([https](https://compose.docker.io/) result UnknownHostException).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ (301) with 1 occurrence migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/) result 404).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ (301) with 1 occurrence migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/) result 404).
* [ ] http://docs.spring.io/spring-kafka/reference/html/_reference.html (301) with 1 occurrence migrated to:
  https://docs.spring.io/spring-kafka/reference/html/_reference.html ([https](https://docs.spring.io/spring-kafka/reference/html/_reference.html) result 404).

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://docs.confluent.io/2.0.0/kafka/security.html with 1 occurrence migrated to:
  https://docs.confluent.io/2.0.0/kafka/security.html ([https](https://docs.confluent.io/2.0.0/kafka/security.html) result 200).
* [ ] http://github.com/ with 3 occurrences migrated to:
  https://github.com/ ([https](https://github.com/) result 200).
* [ ] http://kafka.apache.org/090/documentation.html with 2 occurrences migrated to:
  https://kafka.apache.org/090/documentation.html ([https](https://kafka.apache.org/090/documentation.html) result 200).
* [ ] http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html with 1 occurrence migrated to:
  https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html ([https](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html) result 200).
* [ ] http://plugins.jetbrains.com/plugin/6546 with 1 occurrence migrated to:
  https://plugins.jetbrains.com/plugin/6546 ([https](https://plugins.jetbrains.com/plugin/6546) result 301).
* [ ] http://raw.github.com/ with 1 occurrence migrated to:
  https://raw.github.com/ ([https](https://raw.github.com/) result 301).
* [ ] http://eclipse.org with 1 occurrence migrated to:
  https://eclipse.org ([https](https://eclipse.org) result 302).
* [ ] http://eclipse.org/m2e/ with 2 occurrences migrated to:
  https://eclipse.org/m2e/ ([https](https://eclipse.org/m2e/) result 302).
* [ ] http://www.springsource.com/developer/sts with 1 occurrence migrated to:
  https://www.springsource.com/developer/sts ([https](https://www.springsource.com/developer/sts) result 302).

# Ignored
These URLs were intentionally ignored.

* http://docbook.org/ns/docbook with 4 occurrences
* http://docbook.sourceforge.net/xmlns/l10n/1.0 with 2 occurrences
* http://localhost:8082 with 4 occurrences
* http://maven.apache.org/POM/4.0.0 with 1 occurrence
* http://www.w3.org/1999/XSL/Format with 2 occurrences
* http://www.w3.org/1999/XSL/Transform with 7 occurrences
* http://www.w3.org/1999/xlink with 1 occurrence
2019-03-26 03:57:37 -05:00
Spring Operator
2fe4aaa4a7 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# HTTP URLs that Could Not Be Fixed
These URLs were unable to be fixed. Please review them to see if they can be manually resolved.

* [ ] http://xslthl.sf.net (301) with 1 occurrence could not be migrated:
   ([https](https://xslthl.sf.net) result AnnotatedConnectException).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://asciidoctor.org with 1 occurrence migrated to:
  https://asciidoctor.org ([https](https://asciidoctor.org) result 200).
* [ ] http://sourceforge.net/projects/xslthl/ with 14 occurrences migrated to:
  https://sourceforge.net/projects/xslthl/ ([https](https://sourceforge.net/projects/xslthl/) result 200).
* [ ] http://www.w3.org/TR/CSS21/propidx.html with 1 occurrence migrated to:
  https://www.w3.org/TR/CSS21/propidx.html ([https](https://www.w3.org/TR/CSS21/propidx.html) result 200).
* [ ] http://repo.spring.io/libs-milestone-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* [ ] http://repo.spring.io/libs-snapshot-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* [ ] http://repo.spring.io/release with 1 occurrence migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0 with 16 occurrences
* http://www.w3.org/2001/XMLSchema-instance with 8 occurrences
2019-03-26 00:23:20 -05:00
Oleg Zhurakousky
50c17df4ff Merge pull request #578 from spring-operator/polish-urls-apache-license-1.3.x
URL Cleanup
2019-03-25 15:02:15 +01:00
Spring Operator
21a17c763e URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://www.apache.org/licenses/ with 1 occurrence migrated to:
  https://www.apache.org/licenses/ ([https](https://www.apache.org/licenses/) result 200).
* [ ] http://www.apache.org/licenses/LICENSE-2.0 with 57 occurrences migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
2019-03-21 13:23:19 -05:00
Spring Operator
3d237f8017 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed But Review Recommended
These URLs were fixed, but the https status was not OK. However, the https status was the same as the http request or http redirected to an https URL, so they were migrated. Your review is recommended.

* http://packages.confluent.io/maven/ (404) with 2 occurrences migrated to:
  https://packages.confluent.io/maven/ ([https](https://packages.confluent.io/maven/) result 404).

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* http://docs.spring.io/spring-framework/docs/ with 1 occurrence migrated to:
  https://docs.spring.io/spring-framework/docs/ ([https](https://docs.spring.io/spring-framework/docs/) result 200).
* http://docs.spring.io/spring-shell/docs/current/api/ with 1 occurrence migrated to:
  https://docs.spring.io/spring-shell/docs/current/api/ ([https](https://docs.spring.io/spring-shell/docs/current/api/) result 200).
* http://maven.apache.org/xsd/maven-4.0.0.xsd with 8 occurrences migrated to:
  https://maven.apache.org/xsd/maven-4.0.0.xsd ([https](https://maven.apache.org/xsd/maven-4.0.0.xsd) result 200).
* http://www.apache.org/licenses/LICENSE-2.0 with 2 occurrences migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
* http://projects.spring.io/spring-cloud with 4 occurrences migrated to:
  https://projects.spring.io/spring-cloud ([https](https://projects.spring.io/spring-cloud) result 301).
* http://www.spring.io with 4 occurrences migrated to:
  https://www.spring.io ([https](https://www.spring.io) result 301).
* http://repo.spring.io/libs-milestone-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* http://repo.spring.io/libs-release-local with 1 occurrence migrated to:
  https://repo.spring.io/libs-release-local ([https](https://repo.spring.io/libs-release-local) result 302).
* http://repo.spring.io/libs-snapshot-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* http://repo.spring.io/release with 1 occurrence migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0 with 16 occurrences
* http://www.w3.org/2001/XMLSchema-instance with 8 occurrences
2019-03-20 09:49:51 -04:00
Soby Chacko
ad819fb58f Next version: 1.3.4.BUILD-SNAPSHOT 2018-06-12 17:10:18 -04:00
Soby Chacko
6d5c847c47 1.3.3.RELEASE 2018-06-12 13:22:16 -04:00
Gary Russell
6944ca7212 Fix default partition header expression 2018-06-11 17:13:46 -04:00
Soby Chacko
52d879b2cc KafkaBinderConfigurationProperties duplicate bean
The ConfigurationProperties bean provided by the Kafka Streams binder
extends `KafkaBinderConfigurationProperties`, which is used by the Kafka binder.
This creates a conflict when autowiring the bean from the Kafka binder configuration,
preventing an application from having both binders on the classpath.
Change the creation of this ConfigurationProperties bean so that it is
no longer registered via `@EnableConfigurationProperties` and then autowired,
but created directly with `@Bean`. This prevents the conflict.

Resolves #244
Resolves #315
2018-06-11 17:12:07 -04:00
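
A minimal sketch of the approach this commit describes: expose the properties object directly as a `@Bean` instead of registering it with `@EnableConfigurationProperties` and autowiring it. The configuration class here is hypothetical; only `KafkaBinderConfigurationProperties` is the binder's real type.

```java
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical configuration class, for illustration only.
@Configuration
public class KStreamBinderSupportConfiguration {

    // Declaring the bean directly avoids a second bean of the same type
    // being registered via @EnableConfigurationProperties, which is what
    // caused the conflict when both binders were on the classpath.
    @Bean
    public KafkaBinderConfigurationProperties binderConfigurationProperties() {
        return new KafkaBinderConfigurationProperties();
    }
}
```
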
Gary Russell
e982d60cd3 Add test case for previous commit 2018-04-02 12:20:48 -04:00
Aldo Sinanaj
fe571133fb Fix DLQ and Embedded Headers for Raw Mode
Enhance the error message headers before sending to DLQ only for consumer with embedded header mode
2018-04-02 11:56:20 -04:00
Soby Chacko
4cc183396d Update ScSt version to 1.3.3.BUILD-SNAPSHOT 2018-01-10 15:39:32 -05:00
Soby Chacko
ab87c79f7f 1.3.3.BUILD-SNAPSHOT 2018-01-10 14:47:36 -05:00
Soby Chacko
079bd6f7c4 1.3.2.RELEASE 2018-01-10 14:23:25 -05:00
Soby Chacko
971b78936b Next update version: 1.3.2.BUILD-SNAPSHOT 2017-12-27 12:32:36 -05:00
Soby Chacko
eec0eb7892 Upgrade to 1.3.1.RELEASE 2017-12-27 11:36:47 -05:00
Soby Chacko
d639731c66 Remove wildcard imports 2017-12-26 17:22:27 -05:00
Soby Chacko
49b780717e Backport DLQ related test changes
backport missing dlq related test configuration changes
from 0.10.2 test binder to 0.10.1 test binder
2017-12-26 17:22:27 -05:00
Aldo Sinanaj
f4fc9442c7 Enhance DLQ messages with failure info
Polishing
2017-11-20 11:16:05 -05:00
Gary Russell
e3460d6fce Update POMs to 1.3.1.BUILD-SNAPSHOT 2017-09-29 16:25:08 -04:00
Gary Russell
29bb8513c0 Update POMs to 1.3.0.RELEASE 2017-09-29 15:57:53 -04:00
Gary Russell
69227166c7 GH-215: Add timeout to health indicator
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/215

* Shutdown the executor.

* Polishing - PR Comments

* Re-interrupt thread.

* More Polishing
2017-09-29 14:47:45 -04:00
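
The timeout mechanics this commit lists (bounded wait, executor shutdown, re-interrupting the thread) follow a standard Java pattern; a self-contained sketch, with `doHealthCheck()` as a hypothetical stand-in for the real broker check:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class TimedHealthCheck {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Run the check on the executor and bound it with a timeout.
    String check() {
        Future<String> result = this.executor.submit(this::doHealthCheck);
        try {
            return result.get(10, TimeUnit.SECONDS);
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // re-interrupt the thread
            return "DOWN";
        }
        catch (ExecutionException | TimeoutException e) {
            return "DOWN";
        }
    }

    // Shut the executor down when the indicator is no longer needed.
    void shutdown() {
        this.executor.shutdown();
    }

    // Hypothetical stand-in for the real broker check.
    private String doHealthCheck() {
        return "UP";
    }
}
```
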
Gary Russell
4ff4507741 GH-206: Close Consumer/Producer in provisioning
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/206

Close the consumer and producer after retrieving the current partition count.

**Cherry pick/back port to 0.11 and 1.2.x, 2.0.x**

* Destroy the Producer Factory
2017-09-28 10:51:15 -04:00
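
A sketch of the resource fix, using the kafka-clients `Consumer` (which is closeable) through Spring Kafka's `ConsumerFactory`; the helper method is hypothetical:

```java
import org.apache.kafka.clients.consumer.Consumer;
import org.springframework.kafka.core.ConsumerFactory;

class PartitionCountExample {

    // Close the Consumer as soon as the partition count is read;
    // try-with-resources releases it even if partitionsFor() fails.
    static int partitionCount(ConsumerFactory<byte[], byte[]> factory, String topic) {
        try (Consumer<byte[], byte[]> consumer = factory.createConsumer()) {
            return consumer.partitionsFor(topic).size();
        }
    }
}
```
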
Soby Chacko
f2e1b63460 GH-201: Move metrics doc to the main overview doc
Fixes #201
2017-09-27 12:57:19 -04:00
Soby Chacko
73f1ed9523 Separate each sentence in metrics docs 2017-09-26 20:02:05 -04:00
Soby Chacko
dc7662e17d Add missing documentation for windowing properties
Cleanup test

Fix #196
2017-09-26 16:31:42 -04:00
Gary Russell
b76fff31b8 Back to 1.3.0.BUILD-SNAPSHOT 2017-09-13 11:10:07 -04:00
Gary Russell
1f4f0c3858 Update POMs to 1.3.0.RC1; s-c-build to 1.3.5
Also SIK 2.1.2.RELEASE
2017-09-13 09:39:36 -04:00
Soby Chacko
1aecd02404 Kstream binder: producer default Serde changes
Change the way the default Serde classes are selected for key and value
in producer when only one of those is provided by the user.

Fix #190
2017-09-12 12:40:42 -04:00
Soby Chacko
6485bd2abd GH-188: KStream Binder Properties
KStream binder: support class for application level properties

Provide commonly used KStream application properties for convenient access at runtime

Fix #188

Since windowing operations are common in KStream applications, the TimeWindows object
is made available as a first-class bean (via auto-configuration). This bean is only
created if the relevant properties are provided by the user.
2017-09-12 12:40:00 -04:00
Gary Russell
02913cd177 GH-169: Use the Actual Partition Count (Producer)
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/169

If the configured `partitionCount` is less than the physical partition count on an existing
topic, the binder emits this message:

    The `partitionCount` of the producer for topic partJ.0 is 3, smaller than the actual partition count of 8 of the topic.
    The larger number will be used instead.

However, that is not true; the configured partition count is used.

Override the configured partition count with the actual partition count.
2017-08-22 13:34:21 -04:00
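
The corrected behavior boils down to taking the larger of the two counts; a sketch, not the binder's literal code:

```java
class PartitionCountFix {

    // Use the larger of the configured and physical partition counts,
    // so the runtime behavior matches what the log message claims.
    static int effectivePartitionCount(int configuredCount, int actualCount) {
        return Math.max(configuredCount, actualCount);
    }
}
```
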
Gary Russell
0865602141 GH-62: Remove Tuple Kryo Registrar Wrapper
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/62

No longer needed.
2017-08-22 13:24:51 -04:00
Gary Russell
790b141799 Fixes #181
SCSt-GH-916: Configure Producer Error Channel

Requires: https://github.com/spring-cloud/spring-cloud-stream/pull/1039

Publish send failures to the error channel.

Add docs

Revert to Spring Kafka 1.1.6
2017-08-22 12:56:16 -04:00
Vinicius Carvalho
60e620e36e Set version to 1.3.0.BUILD-SNAPSHOT 2017-07-31 16:00:00 -04:00
Vinicius Carvalho
09d35cd742 Release 1.3.0.M2 2017-07-31 15:57:46 -04:00
Soby Chacko
91aec59342 Documentation for KStream binder
* Documentation for KStream binder

Fix #160

* Addressing PR review comments

* Addressing PR review comments

* Addressing PR review comments
2017-07-31 11:00:20 -04:00
Soby Chacko
7884d59bdc GH-144: Add Kafka Streams Binder
Fix spring-cloud/spring-cloud-stream-binder-kafka#144

Addressing some PR reviews

Remove Java 8 lambda expressions from KStreamBoundElementFactory

Initial - add support for serdes per binding

Fixing checkstyle issues
test source 1.8

Convert integration tests to use Java 7

Internal refactoring

Remove payload serde code in KStreamBoundElementFactory and reuse it from core

Addressing PR comments

cleanup around payload deserialization

Update to latest serialization logic

Extract common properties class for KStream producer/consumer

Addressing PR review comments

* Remove redundant dependencies for KStream Binder
2017-07-27 18:36:03 -04:00
Tom Ellis
9134e101df Remove duplicate debug statement.
unless you really really want to make sure users see this :)
2017-07-24 10:53:54 -04:00
Marius Bogoevici
512fd9830e Update Spring Cloud Stream version to 1.3.0.BUILD-SNAPSHOT 2017-07-19 11:08:54 -04:00
Marius Bogoevici
94fcead9d5 Set version to 1.3.0.BUILD-SNAPSHOT 2017-07-19 10:56:00 -04:00
Marius Bogoevici
756fde75ae Release 1.3.0.M1 2017-07-19 10:48:34 -04:00
Marius Bogoevici
6cc0b08dcd Update script - fix snapshot reporting 2017-07-19 10:47:30 -04:00
Marius Bogoevici
5dccbb9690 Remove reference to deleted module
- `spring-cloud-stream-binder-kafka-test-support` was previously
   removed, but it was still declared as an unused dependency of the
   project
2017-07-19 10:39:14 -04:00
Marius Bogoevici
f473b3b42f POM structure corrections
- move all intra-project deps to dependency management
- remove redundant overrides of Spring Integration Kafka
2017-07-19 10:23:57 -04:00
Marius Bogoevici
a146255433 Update Spring Cloud Stream version to 1.3.0.M1 2017-07-19 10:10:06 -04:00
Gary Russell
07e15c141e SCSt-GH-913: Error Handling via ErrorChannel
Relates to spring-cloud/spring-cloud-stream#913

Fixes #162

- configure an ErrorMessageSendingRecoverer to send errors to an error channel, whether or not retry is enabled.

Change Test Binder to use a Fully Wired Integration Context

- logging handler subscribed to errorChannel

Rebase; revert s-k to 1.1.x, Kafka to 0.10.1.1

Remove dependency overrides.
2017-07-18 17:39:47 -04:00
Marius Bogoevici
001cc5901e Fixes for consumer and producer property propagation
Fix #142 #129 #156 #162

- Remove conditional configuration for Boot 1.4 support
- Filter properties before creating consumer and producer property sets
- Restore `configuration` as Map<String,String> for fixing Boot binding
- Remove 0.9 tests
2017-07-18 12:13:26 -04:00
Marius Bogoevici
b7b5961f7d Add update version script 2017-07-17 23:40:12 -04:00
Marius Bogoevici
d19c1b7611 Use parent version for spring-cloud-build-tools 2017-07-17 22:37:42 -04:00
Marius Bogoevici
c627cefee0 Remove resetOffsets option
Fix #170
2017-07-16 23:16:10 -04:00
Soby Chacko
3081a3aa44 Add test module for Kafka 0.10.2.1
Fix #161
2017-06-26 15:22:43 -04:00
Ilayaperumal Gopinathan
143b96f79d Allow consumer group.id to be overridden
- This change will allow the consumer's group.id to be overridden from the possible options (Spring Boot Kafka properties, binder configuration properties, etc.)

Resolves #149
2017-06-21 17:27:16 -04:00
Marius Bogoevici
a0f386f06f Log faulty topic name during validation
Fix #157
2017-06-18 11:21:01 -04:00
Henryk Konsek
9ad04882c8 Added Kafka binder lag metrics.
Fix #152

Metrics should be divided by the group ID of the binding.

TopicInformation should carry the optional consumer group.

Improved test coverage.

Added lag metric documentation.
2017-06-07 23:55:53 -04:00
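
A rough sketch of a lag computation with the kafka-clients consumer API, where per-partition lag is the log end offset minus the group's committed offset (illustrative, not the binder's metrics code):

```java
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

class LagExample {

    // Sum the lag across partitions for the group this consumer belongs to.
    static long totalLag(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
        long lag = 0;
        for (TopicPartition tp : partitions) {
            OffsetAndMetadata committed = consumer.committed(tp);
            long committedOffset = (committed != null) ? committed.offset() : 0L;
            lag += endOffsets.get(tp) - committedOffset;
        }
        return lag;
    }
}
```
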
Gary Russell
705213efe8 Add Tests for Property Override Hierarchy 2017-05-26 15:33:52 -04:00
Doug Saus
7355ada461 Fix ConsumerConfig.AUTO_OFFSET_RESET_CONFIG
Move the setting of ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to before the setting of custom Kafka properties. This allows users to override this behavior via spring.cloud.stream.kafka.binder.configuration

Updated the fix to also allow the spring.cloud.stream.kafka.bindings.<channel>.consumer.startOffset value to override the anonymous-consumer-based value if set

Moved the setting of auto.offset.reset based on binder configuration below the setting of Kafka properties so that it has higher precedence.

Trailing Spaces
2017-05-26 15:15:47 -04:00
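
The ordering change amounts to applying the computed default before overlaying user configuration, so the user's value wins. A minimal sketch; the method is hypothetical, and the anonymous-group default shown ("latest" vs "earliest") is an assumption based on the commit's description:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

class OffsetResetExample {

    // Put the computed default first, then overlay the user's binder
    // configuration, so values supplied via
    // spring.cloud.stream.kafka.binder.configuration take precedence.
    static Map<String, Object> consumerConfig(boolean anonymousGroup, Map<String, String> userConfig) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, anonymousGroup ? "latest" : "earliest");
        props.putAll(userConfig);
        return props;
    }
}
```
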
Soby Chacko
d4aaf78089 Kafka binder test modules refactoring
Confluent repository needed for tests is moved to spring-cloud-stream-binder-kafka-test-support

The following dependencies are moved to spring-cloud-stream-binder-kafka-test-support:

* spring-cloud-stream-schema
* kafka-avro-serializer
* kafka-schema-registry

spring-cloud-stream-binder-kafka-test-support is brought into spring-cloud-stream-binder-kafka only in test scope

Fixes #121

Kafka binder test modules restructuring

New module for kafka - 0.10.1.1 integration tests
0.10.0.1 tests extend from 0.10.1.1 base class for tests
Removing the empty spring-cloud-stream-binder-kafka-test-support module
2017-05-26 15:07:59 +05:30
Soby Chacko
53e38902c9 Update to next major release line: 1.3.0.BUILD-SNAPSHOT
Fix #139
2017-05-17 11:24:06 -04:00
Soby Chacko
a7a0a132ea Next update: 1.2.2.BUILD-SNAPSHOT 2017-05-16 16:09:33 -04:00
Soby Chacko
ee4f3935ec 1.2.1.RELEASE 2017-05-16 16:04:22 -04:00
Ilayaperumal Gopinathan
4e0107b4d2 Minor polishing 2017-05-16 22:28:43 +05:30
Soby Chacko
8f8a1e8709 Simplify the logic in UnexpectedPartitionCountHandling classes
Remove the inner interface and classes for UnexpectedPartitionCountHandling
Directly use auto rebalancing flag to control consumer idling

Resolves #135

Adding tests for auto-rebalancing enabled/disabled and the under-partitioned scenario

Correct the logging message for idle consumers
2017-05-16 22:28:43 +05:30
Marius Bogoevici
c68ea8c570 Documentation polishing 2017-05-16 12:10:44 -04:00
Henryk Konsek
61e7936978 Add key expression support to Kafka producer
Fix #134

- Add `messageKeyExpression` producer property.

Added key expression unit test.

Added messageKeyExpression documentation.
2017-05-16 12:06:49 -04:00
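
An illustration of how such a SpEL key expression can derive a record key from an outbound message; the `orderId` header is made up:

```java
import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class KeyExpressionExample {

    public static void main(String[] args) {
        // A SpEL expression of the kind messageKeyExpression accepts.
        Expression keyExpression = new SpelExpressionParser()
                .parseExpression("headers['orderId']");
        Message<String> message = MessageBuilder.withPayload("payload")
                .setHeader("orderId", "42")
                .build();
        // Evaluated against the message, so 'headers' resolves to its headers.
        Object key = keyExpression.getValue(message);
        System.out.println(key); // 42
    }
}
```
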
Ilayaperumal Gopinathan
5c70e2df43 Binding properties should override boot properties
- This is especially applicable to `group.id` and `auto.offset.reset` which get set when configuring the consumer properties
 - Spring Boot Kafka properties are updated as binder configuration properties, which get the lowest precedence
 - Update test

Resolves #122
2017-05-16 18:01:59 +05:30
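
The precedence rule reads naturally as a layered map merge, lowest precedence first; a sketch with hypothetical method and parameter names:

```java
import java.util.HashMap;
import java.util.Map;

class PropertyPrecedenceExample {

    // Lowest precedence first: Boot's Kafka properties, then binder-level
    // configuration, then per-binding properties, which win.
    static Map<String, Object> merge(Map<String, Object> bootProps,
            Map<String, Object> binderProps, Map<String, Object> bindingProps) {
        Map<String, Object> merged = new HashMap<>(bootProps);
        merged.putAll(binderProps);
        merged.putAll(bindingProps);
        return merged;
    }
}
```
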
Simon Flandergan
f280edc9ce disable partition sanity check if auto rebalancing
unexpected partitions handling
variable naming
remove unnecessary partition count calculation
revert to fail fast for producer partition count
added missing final modifier
Polishing
fixing checkstyle issues
2017-05-12 15:53:39 -04:00
Ilayaperumal Gopinathan
2c9afde8c6 Remove log format setting in KafkaEnvironmentPostProcessor
Resolves #132
2017-05-11 23:37:38 +05:30
Ilayaperumal Gopinathan
6a8c0cd0c6 Fix KafkaHealthIndicator kafka properties
- Override KafkaHealthIndicator's `bootstrap.servers` property only when it is not set already
 - Add test

Resolves #123
2017-04-18 10:52:15 +05:30
Soby Chacko
ec73f2785d Next version: 1.2.1.BUILD-SNAPSHOT 2017-04-04 16:00:17 -04:00
Soby Chacko
8982f896fd Release 1.2.0.RELEASE 2017-04-04 15:28:56 -04:00
Marius Bogoevici
005ec51d8b Improve isolation of Kafka tests
Fix #120
2017-04-04 12:10:36 -04:00
Marius Bogoevici
47aaf29e3e Fix regression on TopicExistsException
Fix #116
2017-03-31 13:30:13 -04:00
Marius Bogoevici
88d4b8eef5 Replace core docs Git reference with relative link
Fix #114
2017-03-29 13:47:23 -04:00
Ilayaperumal Gopinathan
66eb15a8e2 Make Kafka DLQ topic name configurable
- Make it configurable as a Kafka consumer property
 - Add test to verify the configuration
 - Update doc

Resolves #108

Test fix to use the same partition count for producer/dlq

Fix KafkaTopicProvisioner in case of configurable dlq topic name

 - Update test
2017-03-24 16:59:13 -04:00
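
Programmatically this corresponds to a `dlqName` consumer property (set via `spring.cloud.stream.kafka.bindings.<channel>.consumer.dlqName`); a sketch with a made-up topic name:

```java
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;

class DlqNameExample {

    static ExtendedConsumerProperties<KafkaConsumerProperties> dlqProperties() {
        KafkaConsumerProperties kafka = new KafkaConsumerProperties();
        kafka.setEnableDlq(true);       // route failed messages to a DLQ
        kafka.setDlqName("orders-dlq"); // hypothetical custom DLQ topic name
        return new ExtendedConsumerProperties<>(kafka);
    }
}
```
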
Soby Chacko
466400cdb7 Adding test for raw mode with String payload 2017-03-14 18:37:55 -04:00
Marius Bogoevici
bff0a072dc Set version to 1.2.0.BUILD-SNAPSHOT 2017-03-13 19:52:12 -04:00
Marius Bogoevici
7fad2951f7 Release 1.2.0.RC1 2017-03-13 19:51:32 -04:00
Marius Bogoevici
e7c5f750da Use Spring Cloud Build 1.3.1.RELEASE 2017-03-13 19:43:28 -04:00
Gary Russell
1f28adaf4c GH-21: Docs For Replaying Dead-Lettered Messages
Resolves #21

Use BinderHeaders.PARTITION_OVERRIDE
2017-03-13 17:04:48 -04:00
Barry Commins
91bdee65ec Configure the health indicator ConsumerFactory to use binder properties
Fixes #79

Replaced try...finally in KafkaBinderHealthIndicator with try-with-resources

Changed KafkaBinderHealthIndicatorTest for consistency
2017-03-06 12:57:05 -05:00
Ilayaperumal Gopinathan
6a31e9c94f Support KafkaProperties from Spring Boot autoconfiguration
- If KafkaProperties is available from KafkaAutoConfiguration, retrieve the Kafka properties and set them as the `KafkaBinderConfigurationProperties` configuration property.
This way, these properties are available at the top level, and per-binding properties can still override them when setting producer/consumer properties during binding operations
 - Add tests to verify the scenarios

Resolves #73

Fix deprecation warning

Remove unused constant

Polishing
2017-03-06 11:32:10 -05:00
Marius Bogoevici
b541fca68f Fix 'spring-cloud-stream-binder-kafka-test-support'
Set scope compile on 'spring-kafka-test' so that the
dependency is transitive.
2017-02-21 19:11:31 -05:00
Marius Bogoevici
9d23e6a8fe Move JLine exclusion to the parent 2017-02-21 17:05:29 -05:00
Marius Bogoevici
89249b233f Set dependencies to snapshots 2017-02-21 16:15:40 -05:00
Marius Bogoevici
1998d5e7e8 Set version to 1.2.0.BUILD-SNAPSHOT 2017-02-21 16:14:42 -05:00
Marius Bogoevici
e1906711a8 Release 1.2.0.M2 2017-02-21 16:13:22 -05:00
Marius Bogoevici
78213c98e8 Update dependencies to milestone version 2017-02-21 16:12:17 -05:00
Marius Bogoevici
c10206d41b Remove JLine as dependency
Fixes #104
2017-02-21 16:01:14 -05:00
Ilayaperumal Gopinathan
73eda0ddd0 Add doc for startOffset usage
- Clarify based on the consumerGroup property

Resolves #48
2017-02-21 12:56:14 -05:00
Marius Bogoevici
5b51d7cce3 Reinstate spring-cloud-stream-binder-kafka-test-support
Fix #102

We've removed it in favour of using `spring-kafka-test`
directly but that is somewhat inconvenient to the end
user. Also, it's still part of the release train BOM
so it's an oversight on our end.
2017-02-20 23:18:40 +05:30
Soby Chacko
2530229fb5 Consolidate raw mode tests as part of KafkaBinderTests
Enable all version-specific Kafka tests to run raw mode tests as well
2017-02-18 11:03:25 -05:00
Gary Russell
70cba0ae03 Fix KafkaTopicProvisioner Inner Classes
- make destination inner classes `static` since they have no dependence on the outer class
- make ctors package-private - private ctors for inner classes cause the compiler to
  generate a synthetic ctor with an additional synthetic parameter
2017-02-17 21:29:45 -05:00
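
The Java detail behind this commit: a `static` nested class carries no hidden reference to the outer instance, and a package-private constructor avoids the synthetic constructor the compiler emits for a private one. A simplified sketch (member details invented):

```java
public class KafkaTopicProvisioner {

    // 'static': the destination does not need the enclosing provisioner.
    static final class KafkaConsumerDestination {

        private final String name;

        // Package-private, not private: a private constructor accessed from
        // the outer class would make the compiler generate a synthetic
        // constructor with an extra synthetic parameter.
        KafkaConsumerDestination(String name) {
            this.name = name;
        }

        String getName() {
            return this.name;
        }
    }
}
```
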
Ilayaperumal Gopinathan
3eb9493cba Revert "Support Spring Boot KafkaProperties"
This reverts commit 89b75b4734.
2017-02-17 11:58:34 +05:30
Ilayaperumal Gopinathan
89b75b4734 Support Spring Boot KafkaProperties
- If KafkaProperties for the KafkaAutoConfiguration is set, then use those properties for the KafkaMessageChannelBinder
 - For the KafkaProperties that have explicit defaults, override with the KafkaMessageChannelBinder defaults when the properties are not set by any of the property sources
 - Support the existing Kafka Producer/Consumer properties if they are set
 - Add tests

Resolves #73

Address review comments

 - Add javadoc for deprecated fields
 - Add doc

polishing
2017-02-16 10:48:20 -05:00
Soby Chacko
dc0bc18f37 Provisioning SPI Related Changes
- Separate Kafka topic provisioning from the binder using the SPI provided in core
 - Refactor the common entities needed for Kafka into a new core module
 - Refactor binder code to reflect the SPI changes
 - Make DLQ provisioning match the partition properties of the topic

Fixes #86
Fixes #50

Addressing PR review comments

cleanup - addressing PR comments

Using ProvisioningException

Simpler toString() in provisioner

Fix accidental removal of set metadataOperations in binder/tests

cleanup

Moving metadata operations completely to provisioner
Related refactoring

Removing ProvisioningProvider cast

Fix Usage of BinderHeaders.PARTITION_HEADER
2017-02-14 13:51:54 -05:00
Marius Bogoevici
8fafc2605e Renamed Kafka test artifacts 2017-02-08 18:06:12 -05:00
Soby Chacko
83eca9b734 Setting default Kafka baseline to 0.10.1.1
- Starting with 1.2, this is going to be the default Kafka version
 - Swap AdminUtilsOperation implementations for 0.9 and 0.10 (the former using reflection, the latter a straight API call)
 - In addition to Kafka, the Spring Kafka and Spring Integration Kafka dependencies are updated as well.

Separate test artifact for 0.9.0.1.

 - Pull out 0.9 based tests from the default binder module to this new module.

Separate test artifact for 0.10.0.1.

Fixes #88
Fixes #81
2017-02-06 11:34:11 -05:00
Gary Russell
b2d579b5b6 GH-85: Remove Duplicate Dep. From Pom
Fixes #85
2017-02-03 10:13:22 -05:00
Soby Chacko
c389fa3fa4 Ignore 'TopicExistsException' on creation
Fixes a race condition when topics are created in a stream
and a TopicExistsException is thrown.

Fixes #83

Explicitly checking for TopicExistsException
Cleanup
2017-01-30 18:39:49 -05:00
Ilayaperumal Gopinathan
50e0a8b30e Set log config for Kafka Binder
- set ZKClient logging level to `ERROR`
  - set the config implementations of `AbstractConfig` to `ERROR` logging level

This resolves #56
This resolves #52

Remove producer/consumer logging config
- Update copyright year
2017-01-23 13:38:52 -05:00
Marius Bogoevici
7daa0f1b72 Remove package-info.java 2017-01-11 22:07:13 -05:00
Marius Bogoevici
f3e38961c5 Set version to 1.2.0.BUILD-SNAPSHOT 2017-01-11 13:47:07 -05:00
103 changed files with 5357 additions and 1402 deletions


@@ -21,7 +21,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -29,7 +29,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -37,7 +37,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/release</url>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -47,7 +47,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -55,7 +55,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>


@@ -1,6 +1,6 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
@@ -192,7 +192,7 @@
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,

mvnw vendored

@@ -8,7 +8,7 @@
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an

mvnw.cmd vendored

@@ -7,7 +7,7 @@
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM http://www.apache.org/licenses/LICENSE-2.0
@REM https://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an

pom.xml

@@ -1,31 +1,45 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.2.0.M1</version>
<version>1.3.4.RELEASE</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>1.3.1.M1</version>
<version>1.3.12.RELEASE</version>
<relativePath />
</parent>
<properties>
<java.version>1.7</java.version>
<kafka.version>0.9.0.1</kafka.version>
<spring-kafka.version>1.0.5.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>2.0.1.RELEASE</spring-integration-kafka.version>
<spring-cloud-stream.version>1.2.0.M1</spring-cloud-stream.version>
<kafka.version>0.10.1.1</kafka.version>
<spring-kafka.version>1.1.6.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>2.1.2.RELEASE</spring-integration-kafka.version>
<spring-cloud-stream.version>1.3.4.RELEASE</spring-cloud-stream.version>
<spring-cloud-build.version>1.3.9.RELEASE</spring-cloud-build.version>
</properties>
<modules>
<module>spring-cloud-stream-binder-kafka</module>
<module>spring-cloud-starter-stream-kafka</module>
<module>spring-cloud-stream-binder-kafka-docs</module>
<module>spring-cloud-stream-binder-kafka-0.10-test</module>
</modules>
<module>spring-cloud-stream-binder-kafka-0.10.1-test</module>
<module>spring-cloud-stream-binder-kafka-0.10.2-test</module>
<module>spring-cloud-stream-binder-kafka-core</module>
<module>spring-cloud-stream-binder-kstream</module>
</modules>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
@@ -41,6 +55,10 @@
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>jline</groupId>
<artifactId>jline</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
@@ -84,9 +102,20 @@
<classifier>test</classifier>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<pluginManagement>
<plugins>
@@ -131,7 +160,7 @@
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build-tools</artifactId>
<version>1.3.1.M1</version>
<version>${spring-cloud-build.version}</version>
</dependency>
</dependencies>
<executions>
@@ -161,7 +190,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -172,7 +201,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -180,7 +209,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/release</url>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -190,7 +219,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -201,7 +230,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -209,7 +238,7 @@
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/libs-release-local</url>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>


@@ -1,17 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.2.0.M1</version>
<version>1.3.4.RELEASE</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>
<url>http://projects.spring.io/spring-cloud</url>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
<url>https://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>


@@ -1,104 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.2.0.M1</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-0.10-test</artifactId>
<description>Spring Cloud Stream Kafka Binder 0.10 Tests</description>
<url>http://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>
<!--
Override Kafka dependencies to Kafka 0.10 and supporting Spring Kafka and
Spring Integration Kafka versions
-->
<kafka.version>0.10.0.0</kafka.version>
<spring-kafka.version>1.1.1.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>2.1.0.RELEASE</spring-integration-kafka.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<version>${spring-cloud-stream.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
<version>3.0.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-schema-registry</artifactId>
<version>3.0.1</version>
<scope>test</scope>
</dependency>
</dependencies>
<repositories>
<repository>
<id>confluent</id>
<url>http://packages.confluent.io/maven/</url>
</repository>
</repositories>
</project>


@@ -1,53 +0,0 @@
/*
* Copyright 2015-2016 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka10AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfigurationProperties;
import org.springframework.context.support.GenericApplicationContext;
import org.springframework.kafka.support.LoggingProducerListener;
import org.springframework.kafka.support.ProducerListener;
/**
* Test support class for {@link KafkaMessageChannelBinder}.
* @author Eric Bottard
* @author Marius Bogoevici
* @author David Turanski
* @author Gary Russell
* @author Soby Chacko
*/
public class Kafka10TestBinder extends AbstractKafkaTestBinder {
public Kafka10TestBinder(KafkaBinderConfigurationProperties binderConfiguration) {
try {
KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(binderConfiguration);
binder.setCodec(getCodec());
ProducerListener producerListener = new LoggingProducerListener();
binder.setProducerListener(producerListener);
GenericApplicationContext context = new GenericApplicationContext();
context.refresh();
binder.setApplicationContext(context);
binder.setAdminUtilsOperation(new Kafka10AdminUtilsOperation());
binder.afterPropertiesSet();
this.setBinder(binder);
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
}


@@ -0,0 +1,132 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.3.4.RELEASE</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-0.10.1-test</artifactId>
<description>Spring Cloud Stream Kafka Binder 0.10.1 Tests</description>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>https://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>
<kafka.version>0.10.1.1</kafka.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-jmx</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-0.10.2-test</artifactId>
<version>${project.version}</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<version>${spring-cloud-stream.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
<version>3.1.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-schema-registry</artifactId>
<version>3.1.2</version>
<scope>test</scope>
</dependency>
</dependencies>
<repositories>
<repository>
<id>confluent</id>
<url>https://packages.confluent.io/maven/</url>
</repository>
</repositories>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.2</version>
<executions>
<execution>
<goals>
<goal>test-jar</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>


@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,13 +16,6 @@
package org.springframework.cloud.stream.binder.kafka;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig;
import io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication;
import kafka.utils.ZKStringSerializer$;
@@ -31,18 +24,21 @@ import org.I0Itec.zkclient.ZkClient;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.assertj.core.api.Assertions;
import org.eclipse.jetty.server.Server;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.Spy;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka10AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.kafka.core.ConsumerFactory;
@@ -54,21 +50,27 @@ import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.retry.RetryOperations;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.fail;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import static org.junit.Assert.assertTrue;
/**
* Integration tests for the {@link KafkaMessageChannelBinder}.
*
* This test specifically tests for the 0.10.1.x version of Kafka.
*
* @author Eric Bottard
* @author Marius Bogoevici
* @author Mark Fisher
* @author Ilayaperumal Gopinathan
*/
public class Kafka10BinderTests extends KafkaBinderTests {
public class Kafka_0_10_1_BinderTests extends Kafka_0_10_2_BinderTests {
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class.getSimpleName();
@@ -77,7 +79,7 @@ public class Kafka10BinderTests extends KafkaBinderTests {
private Kafka10TestBinder binder;
private Kafka10AdminUtilsOperation adminUtilsOperation = new Kafka10AdminUtilsOperation();
private final Kafka10AdminUtilsOperation adminUtilsOperation = new Kafka10AdminUtilsOperation();
@Override
protected void binderBindUnbindLatency() throws InterruptedException {
@@ -88,11 +90,13 @@ public class Kafka10BinderTests extends KafkaBinderTests {
protected Kafka10TestBinder getBinder() {
if (binder == null) {
KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();
binderConfiguration.setHeaders("dlqTestHeader");
binder = new Kafka10TestBinder(binderConfiguration);
}
return binder;
}
@Override
protected KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
@@ -111,12 +115,6 @@ public class Kafka10BinderTests extends KafkaBinderTests {
return consumerFactory().createConsumer().partitionsFor(topic).size();
}
@Override
@SuppressWarnings("unchecked")
protected void setMetadataRetryOperations(Binder binder, RetryOperations retryOperations) {
((Kafka10TestBinder) binder).getBinder().setMetadataRetryOperations(retryOperations);
}
@Override
protected ZkUtils getZkUtils(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {
final ZkClient zkClient = new ZkClient(kafkaBinderConfigurationProperties.getZkConnectionString(),
@@ -182,6 +180,7 @@ public class Kafka10BinderTests extends KafkaBinderTests {
return new DefaultKafkaConsumerFactory<>(props, keyDecoder, valueDecoder);
}
@Ignore
@Test
@SuppressWarnings("unchecked")
public void testCustomAvroSerialization() throws Exception {
@@ -192,7 +191,7 @@ public class Kafka10BinderTests extends KafkaBinderTests {
final ZkUtils zkUtils = new ZkUtils(zkClient, null, false);
Map<String, Object> schemaRegistryProps = new HashMap<>();
schemaRegistryProps.put("kafkastore.connection.url", configurationProperties.getZkConnectionString());
schemaRegistryProps.put("listeners", "http://0.0.0.0:8082");
schemaRegistryProps.put("listeners", "https://0.0.0.0:8082");
schemaRegistryProps.put("port", "8082");
schemaRegistryProps.put("kafkastore.topic", "_schemas");
SchemaRegistryConfig config = new SchemaRegistryConfig(schemaRegistryProps);
@@ -205,7 +204,7 @@ public class Kafka10BinderTests extends KafkaBinderTests {
break;
}
else if (System.currentTimeMillis() > endTime) {
fail("Kafka Schema Registry Server failed to start");
Assertions.fail("Kafka Schema Registry Server failed to start");
}
}
User1 firstOutboundFoo = new User1();
@@ -234,11 +233,11 @@ public class Kafka10BinderTests extends KafkaBinderTests {
binderBindUnbindLatency();
moduleOutputChannel.send(message);
Message<?> inbound = receive(moduleInputChannel);
assertThat(inbound).isNotNull();
Assertions.assertThat(inbound).isNotNull();
assertTrue(message.getPayload() instanceof User1);
User1 receivedUser = (User1) message.getPayload();
assertThat(receivedUser.getName()).isEqualTo(userName1);
assertThat(receivedUser.getFavoriteColor()).isEqualTo(favColor1);
Assertions.assertThat(receivedUser.getName()).isEqualTo(userName1);
Assertions.assertThat(receivedUser.getFavoriteColor()).isEqualTo(favColor1);
producerBinding.unbind();
consumerBinding.unbind();
}


@@ -0,0 +1,126 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.3.4.RELEASE</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-0.10.2-test</artifactId>
<description>Spring Cloud Stream Kafka Binder 0.10.2 Tests</description>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>https://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>
<kafka.version>0.10.2.1</kafka.version>
<spring-kafka.version>1.2.2.RELEASE</spring-kafka.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-jmx</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<version>${spring-cloud-stream.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
<version>3.2.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-schema-registry</artifactId>
<version>3.2.2</version>
<scope>test</scope>
</dependency>
</dependencies>
<repositories>
<repository>
<id>confluent</id>
<url>https://packages.confluent.io/maven/</url>
</repository>
</repositories>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.2</version>
<executions>
<execution>
<goals>
<goal>test-jar</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

View File

@@ -0,0 +1,86 @@
/*
* Copyright 2015-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.admin.AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka10AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.kafka.support.LoggingProducerListener;
import org.springframework.kafka.support.ProducerListener;
/**
* Test support class for {@link KafkaMessageChannelBinder}.
* @author Eric Bottard
* @author Marius Bogoevici
* @author David Turanski
* @author Gary Russell
* @author Soby Chacko
*/
public class Kafka10TestBinder extends AbstractKafkaTestBinder {
@SuppressWarnings({ "rawtypes", "unchecked" })
public Kafka10TestBinder(KafkaBinderConfigurationProperties binderConfiguration) {
try {
AdminUtilsOperation adminUtilsOperation = new Kafka10AdminUtilsOperation();
KafkaTopicProvisioner provisioningProvider =
new KafkaTopicProvisioner(binderConfiguration, adminUtilsOperation);
provisioningProvider.afterPropertiesSet();
KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(binderConfiguration,
provisioningProvider) {
/*
* Some tests use multiple instance indexes for the same topic; we need to make
* the error infrastructure beans unique.
*/
@Override
protected String errorsBaseName(ConsumerDestination destination, String group,
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties) {
return super.errorsBaseName(destination, group, consumerProperties) + "-"
+ consumerProperties.getInstanceIndex();
}
};
binder.setCodec(AbstractKafkaTestBinder.getCodec());
ProducerListener producerListener = new LoggingProducerListener();
binder.setProducerListener(producerListener);
AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(Config.class);
setApplicationContext(context);
binder.setApplicationContext(context);
binder.afterPropertiesSet();
this.setBinder(binder);
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
@Configuration
@EnableIntegration
static class Config {
}
}

View File

@@ -0,0 +1,248 @@
/*
* Copyright 2014-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig;
import io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;
import org.I0Itec.zkclient.ZkClient;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.assertj.core.api.Assertions;
import org.eclipse.jetty.server.Server;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.Spy;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka10AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.support.MessageBuilder;
import static org.junit.Assert.assertTrue;
/**
* Integration tests for the {@link KafkaMessageChannelBinder}.
*
* These tests specifically target the 0.10.2.x version of Kafka.
*
* @author Eric Bottard
* @author Marius Bogoevici
* @author Mark Fisher
* @author Ilayaperumal Gopinathan
* @author Gary Russell
*/
public class Kafka_0_10_2_BinderTests extends KafkaBinderTests {
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class.getSimpleName();
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
private Kafka10TestBinder binder;
private final Kafka10AdminUtilsOperation adminUtilsOperation = new Kafka10AdminUtilsOperation();
@Override
protected void binderBindUnbindLatency() throws InterruptedException {
Thread.sleep(500);
}
@Override
protected Kafka10TestBinder getBinder() {
if (binder == null) {
KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();
binderConfiguration.setHeaders("dlqTestHeader");
binder = new Kafka10TestBinder(binderConfiguration);
}
return binder;
}
@Override
protected KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
}
String[] brokers = new String[bAddresses.size()];
binderConfiguration.setBrokers(bAddresses.toArray(brokers));
binderConfiguration.setZkNodes(embeddedKafka.getZookeeperConnectionString());
return binderConfiguration;
}
@Override
protected int partitionSize(String topic) {
return consumerFactory().createConsumer().partitionsFor(topic).size();
}
@Override
protected ZkUtils getZkUtils(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {
final ZkClient zkClient = new ZkClient(kafkaBinderConfigurationProperties.getZkConnectionString(),
kafkaBinderConfigurationProperties.getZkSessionTimeout(), kafkaBinderConfigurationProperties.getZkConnectionTimeout(),
ZKStringSerializer$.MODULE$);
return new ZkUtils(zkClient, null, false);
}
@Override
protected void invokeCreateTopic(ZkUtils zkUtils, String topic, int partitions, int replicationFactor, Properties topicConfig) {
adminUtilsOperation.invokeCreateTopic(zkUtils, topic, partitions, replicationFactor, topicConfig);
}
@Override
protected int invokePartitionSize(String topic, ZkUtils zkUtils) {
return adminUtilsOperation.partitionSize(topic, zkUtils);
}
@Override
public String getKafkaOffsetHeaderKey() {
return KafkaHeaders.OFFSET;
}
@Override
protected Binder getBinder(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {
return new Kafka10TestBinder(kafkaBinderConfigurationProperties);
}
@Before
public void init() {
String multiplier = System.getenv("KAFKA_TIMEOUT_MULTIPLIER");
if (multiplier != null) {
timeoutMultiplier = Double.parseDouble(multiplier);
}
}
@Override
protected boolean usesExplicitRouting() {
return false;
}
@Override
protected String getClassUnderTestName() {
return CLASS_UNDER_TEST_NAME;
}
@Override
public Spy spyOn(final String name) {
throw new UnsupportedOperationException("'spyOn' is not used by Kafka tests");
}
private ConsumerFactory<byte[], byte[]> consumerFactory() {
Map<String, Object> props = new HashMap<>();
KafkaBinderConfigurationProperties configurationProperties = createConfigurationProperties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, configurationProperties.getKafkaConnectionString());
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "TEST-CONSUMER-GROUP");
Deserializer<byte[]> valueDecoder = new ByteArrayDeserializer();
Deserializer<byte[]> keyDecoder = new ByteArrayDeserializer();
return new DefaultKafkaConsumerFactory<>(props, keyDecoder, valueDecoder);
}
@Ignore
@Test
@SuppressWarnings("unchecked")
public void testCustomAvroSerialization() throws Exception {
KafkaBinderConfigurationProperties configurationProperties = createConfigurationProperties();
final ZkClient zkClient = new ZkClient(configurationProperties.getZkConnectionString(),
configurationProperties.getZkSessionTimeout(), configurationProperties.getZkConnectionTimeout(),
ZKStringSerializer$.MODULE$);
final ZkUtils zkUtils = new ZkUtils(zkClient, null, false);
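// Stand up an embedded Confluent Schema Registry, backed by the test ZooKeeper, for this test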
Map<String, Object> schemaRegistryProps = new HashMap<>();
schemaRegistryProps.put("kafkastore.connection.url", configurationProperties.getZkConnectionString());
schemaRegistryProps.put("listeners", "https://0.0.0.0:8082");
schemaRegistryProps.put("port", "8082");
schemaRegistryProps.put("kafkastore.topic", "_schemas");
SchemaRegistryConfig config = new SchemaRegistryConfig(schemaRegistryProps);
SchemaRegistryRestApplication app = new SchemaRegistryRestApplication(config);
Server server = app.createServer();
server.start();
long endTime = System.currentTimeMillis() + 5000;
while (!server.isRunning()) {
if (System.currentTimeMillis() > endTime) {
Assertions.fail("Kafka Schema Registry Server failed to start");
}
Thread.sleep(100);
}
User1 firstOutboundFoo = new User1();
String userName1 = "foo-name" + UUID.randomUUID().toString();
String favColor1 = "foo-color" + UUID.randomUUID().toString();
firstOutboundFoo.setName(userName1);
firstOutboundFoo.setFavoriteColor(favColor1);
Message<?> message = MessageBuilder.withPayload(firstOutboundFoo).build();
SubscribableChannel moduleOutputChannel = new DirectChannel();
String testTopicName = "existing" + System.currentTimeMillis();
invokeCreateTopic(zkUtils, testTopicName, 6, 1, new Properties());
configurationProperties.setAutoAddPartitions(true);
Binder binder = getBinder(configurationProperties);
QueueChannel moduleInputChannel = new QueueChannel();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.getExtension().getConfiguration().put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
producerProperties.getExtension().getConfiguration().put("schema.registry.url", "http://localhost:8082");
producerProperties.setUseNativeEncoding(true);
Binding<MessageChannel> producerBinding = binder.bindProducer(testTopicName, moduleOutputChannel, producerProperties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
consumerProperties.getExtension().getConfiguration().put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
consumerProperties.getExtension().getConfiguration().put("schema.registry.url", "http://localhost:8082");
Binding<MessageChannel> consumerBinding = binder.bindConsumer(testTopicName, "test", moduleInputChannel, consumerProperties);
// Let the consumer actually bind to the producer before sending a msg
binderBindUnbindLatency();
moduleOutputChannel.send(message);
Message<?> inbound = receive(moduleInputChannel);
Assertions.assertThat(inbound).isNotNull();
assertTrue(message.getPayload() instanceof User1);
User1 receivedUser = (User1) message.getPayload();
Assertions.assertThat(receivedUser.getName()).isEqualTo(userName1);
Assertions.assertThat(receivedUser.getFavoriteColor()).isEqualTo(favColor1);
producerBinding.unbind();
consumerBinding.unbind();
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,

View File

@@ -0,0 +1,46 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.3.4.RELEASE</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>https://www.spring.io</url>
</organization>
<properties>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>${spring-integration-kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.apache.avro</groupId>
<artifactId>avro-compiler</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</project>

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -24,7 +24,7 @@ import kafka.utils.ZkUtils;
* API around {@link kafka.admin.AdminUtils} to support
* various versions of Kafka brokers.
*
* Note: Implementations that support Kafka brokers other than 0.9 need to use
* Note: Implementations that support Kafka brokers other than 0.10 need to use
* a strategy based on reflection around {@link kafka.admin.AdminUtils}.
*
* @author Soby Chacko

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -18,10 +18,8 @@ package org.springframework.cloud.stream.binder.kafka.admin;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.List;
import java.util.Properties;
import kafka.api.PartitionMetadata;
import kafka.utils.ZkUtils;
import org.springframework.util.ClassUtils;
@@ -30,7 +28,7 @@ import org.springframework.util.ReflectionUtils;
/**
* @author Soby Chacko
*/
public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
public class Kafka09AdminUtilsOperation implements AdminUtilsOperation {
private static Class<?> ADMIN_UTIL_CLASS;
@@ -53,10 +51,9 @@ public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
addPartitions = m;
}
}
if (addPartitions != null) {
addPartitions.invoke(null, zkUtils, topic, numPartitions,
replicaAssignmentStr, checkBrokerAvailable, null);
replicaAssignmentStr, checkBrokerAvailable);
}
else {
throw new InvocationTargetException(
@@ -69,20 +66,16 @@ public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
catch (IllegalAccessException e) {
ReflectionUtils.handleReflectionException(e);
}
}
public short errorCodeFromTopicMetadata(String topic, ZkUtils zkUtils) {
try {
Method fetchTopicMetadataFromZk = ReflectionUtils.findMethod(ADMIN_UTIL_CLASS, "fetchTopicMetadataFromZk", String.class, ZkUtils.class);
Object result = fetchTopicMetadataFromZk.invoke(null, topic, zkUtils);
Class<?> topicMetadataClass = ClassUtils.forName("org.apache.kafka.common.requests.MetadataResponse$TopicMetadata", null);
Method errorCodeMethod = ReflectionUtils.findMethod(topicMetadataClass, "error");
Object obj = errorCodeMethod.invoke(result);
Method code = ReflectionUtils.findMethod(obj.getClass(), "code");
return (short) code.invoke(obj);
Class<?> topicMetadataClass = ClassUtils.forName("kafka.api.TopicMetadata", null);
Method errorCodeMethod = ReflectionUtils.findMethod(topicMetadataClass, "errorCode");
return (short) errorCodeMethod.invoke(result);
}
catch (ClassNotFoundException e) {
throw new IllegalStateException("AdminUtils class not found", e);
@@ -94,7 +87,6 @@ public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
ReflectionUtils.handleReflectionException(e);
}
return 0;
}
@SuppressWarnings("unchecked")
@@ -102,11 +94,13 @@ public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
try {
Method fetchTopicMetadataFromZk = ReflectionUtils.findMethod(ADMIN_UTIL_CLASS, "fetchTopicMetadataFromZk", String.class, ZkUtils.class);
Object result = fetchTopicMetadataFromZk.invoke(null, topic, zkUtils);
Class<?> topicMetadataClass = ClassUtils.forName("org.apache.kafka.common.requests.MetadataResponse$TopicMetadata", null);
Class<?> topicMetadataClass = ClassUtils.forName("kafka.api.TopicMetadata", null);
Method partitionsMetadata = ReflectionUtils.findMethod(topicMetadataClass, "partitionMetadata");
List<PartitionMetadata> foo = (List<PartitionMetadata>) partitionsMetadata.invoke(result);
return foo.size();
Method partitionsMetadata = ReflectionUtils.findMethod(topicMetadataClass, "partitionsMetadata");
scala.collection.Seq<kafka.api.PartitionMetadata> partitionSize =
(scala.collection.Seq<kafka.api.PartitionMetadata>)partitionsMetadata.invoke(result);
return partitionSize.size();
}
catch (ClassNotFoundException e) {
throw new IllegalStateException("AdminUtils class not found", e);
@@ -118,22 +112,23 @@ public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
ReflectionUtils.handleReflectionException(e);
}
return 0;
}
public void invokeCreateTopic(ZkUtils zkUtils, String topic, int partitions,
int replicationFactor, Properties topicConfig) {
int replicationFactor, Properties topicConfig) {
try {
Method[] declaredMethods = ADMIN_UTIL_CLASS.getDeclaredMethods();
Method createTopic = null;
for (Method m : declaredMethods) {
if (m.getName().equals("createTopic") && m.getParameterTypes()[m.getParameterTypes().length - 1].getName().endsWith("RackAwareMode")) {
if (m.getName().equals("createTopic")) {
createTopic = m;
break;
}
}
if (createTopic != null) {
createTopic.invoke(null, zkUtils, topic, partitions,
replicationFactor, topicConfig, null);
replicationFactor, topicConfig);
}
else {
throw new InvocationTargetException(
@@ -147,4 +142,4 @@ public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
ReflectionUtils.handleReflectionException(e);
}
}
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,33 +19,36 @@ package org.springframework.cloud.stream.binder.kafka.admin;
import java.util.Properties;
import kafka.admin.AdminUtils;
import kafka.api.TopicMetadata;
import kafka.utils.ZkUtils;
import org.apache.kafka.common.requests.MetadataResponse;
/**
* @author Soby Chacko
*/
public class Kafka09AdminUtilsOperation implements AdminUtilsOperation {
public class Kafka10AdminUtilsOperation implements AdminUtilsOperation {
public void invokeAddPartitions(ZkUtils zkUtils, String topic, int numPartitions,
String replicaAssignmentStr, boolean checkBrokerAvailable) {
AdminUtils.addPartitions(zkUtils, topic, numPartitions,
replicaAssignmentStr, checkBrokerAvailable);
AdminUtils.addPartitions(zkUtils, topic, numPartitions, replicaAssignmentStr, checkBrokerAvailable, null);
}
public short errorCodeFromTopicMetadata(String topic, ZkUtils zkUtils) {
TopicMetadata topicMetadata = AdminUtils.fetchTopicMetadataFromZk(topic, zkUtils);
return topicMetadata.errorCode();
MetadataResponse.TopicMetadata topicMetadata = AdminUtils.fetchTopicMetadataFromZk(topic, zkUtils);
return topicMetadata.error().code();
}
@SuppressWarnings("unchecked")
public int partitionSize(String topic, ZkUtils zkUtils) {
TopicMetadata topicMetadata = AdminUtils.fetchTopicMetadataFromZk(topic, zkUtils);
return topicMetadata.partitionsMetadata().size();
MetadataResponse.TopicMetadata topicMetadata = AdminUtils.fetchTopicMetadataFromZk(topic, zkUtils);
return topicMetadata.partitionMetadata().size();
}
public void invokeCreateTopic(ZkUtils zkUtils, String topic, int partitions,
int replicationFactor, Properties topicConfig) {
int replicationFactor, Properties topicConfig) {
AdminUtils.createTopic(zkUtils, topic, partitions, replicationFactor,
topicConfig);
topicConfig, null);
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.config;
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2015-2016 the original author or authors.
* Copyright 2015-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,12 +14,20 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.config;
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
@@ -27,17 +35,21 @@ import org.springframework.util.StringUtils;
* @author Ilayaperumal Gopinathan
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
*/
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.binder")
public class KafkaBinderConfigurationProperties {
private String[] zkNodes = new String[] {"localhost"};
@Autowired(required = false)
private KafkaProperties kafkaProperties;
private String[] zkNodes = new String[] { "localhost" };
private Map<String, String> configuration = new HashMap<>();
private String defaultZkPort = "2181";
private String[] brokers = new String[] {"localhost"};
private String[] brokers = new String[] { "localhost" };
private String defaultBrokerPort = "9092";
@@ -77,6 +89,11 @@ public class KafkaBinderConfigurationProperties {
private int queueSize = 8192;
/**
* Time to wait to get partition information in seconds; default 60.
*/
private int healthTimeout = 60;
private JaasLoginModuleConfiguration jaas;
public String getZkConnectionString() {
@@ -217,6 +234,14 @@ public class KafkaBinderConfigurationProperties {
this.minPartitionCount = minPartitionCount;
}
public int getHealthTimeout() {
return this.healthTimeout;
}
public void setHealthTimeout(int healthTimeout) {
this.healthTimeout = healthTimeout;
}
public int getQueueSize() {
return this.queueSize;
}
@@ -257,8 +282,70 @@ public class KafkaBinderConfigurationProperties {
this.configuration = configuration;
}
public Map<String, Object> getConsumerConfiguration() {
Map<String, Object> consumerConfiguration = new HashMap<>();
// If Spring Boot Kafka properties are present, add them with lowest precedence
if (this.kafkaProperties != null) {
consumerConfiguration.putAll(this.kafkaProperties.buildConsumerProperties());
}
// Copy configured binder properties
for (Map.Entry<String, String> configurationEntry : this.configuration.entrySet()) {
if (ConsumerConfig.configNames().contains(configurationEntry.getKey())) {
consumerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
}
}
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
if (ObjectUtils.isEmpty(consumerConfiguration.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
consumerConfiguration.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
}
else {
Object bootstrapServersConfig = consumerConfiguration.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootstrapServersConfig instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) consumerConfiguration
.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.size() == 1 && bootStrapServers.get(0).equals("localhost:9092")) {
consumerConfiguration.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
}
}
}
return Collections.unmodifiableMap(consumerConfiguration);
}
public Map<String, Object> getProducerConfiguration() {
Map<String, Object> producerConfiguration = new HashMap<>();
// If Spring Boot Kafka properties are present, add them with lowest precedence
if (this.kafkaProperties != null) {
producerConfiguration.putAll(this.kafkaProperties.buildProducerProperties());
}
// Copy configured binder properties
for (Map.Entry<String, String> configurationEntry : configuration.entrySet()) {
if (ProducerConfig.configNames().contains(configurationEntry.getKey())) {
producerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
}
}
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
if (ObjectUtils.isEmpty(producerConfiguration.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
producerConfiguration.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
}
else {
Object bootstrapServersConfig = producerConfiguration.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootstrapServersConfig instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) producerConfiguration
.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.size() == 1 && bootStrapServers.get(0).equals("localhost:9092")) {
producerConfiguration.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaConnectionString());
}
}
}
return Collections.unmodifiableMap(producerConfiguration);
}
public JaasLoginModuleConfiguration getJaas() {
return jaas;
return this.jaas;
}
public void setJaas(JaasLoginModuleConfiguration jaas) {

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
package org.springframework.cloud.stream.binder.kafka.properties;
/**
* @author Marius Bogoevici

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,15 +14,18 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
/**
* @author Marius Bogoevici
* @author Ilayaperumal Gopinathan
*
* <p>Thanks to Laszlo Szabo for providing the initial patch for generic property support.</p>
* <p>
* Thanks to Laszlo Szabo for providing the initial patch for generic property support.
* </p>
*/
public class KafkaConsumerProperties {
@@ -32,12 +35,12 @@ public class KafkaConsumerProperties {
private Boolean autoCommitOnError;
private boolean resetOffsets;
private StartOffset startOffset;
private boolean enableDlq;
private String dlqName;
private int recoveryInterval = 5000;
private Map<String, String> configuration = new HashMap<>();
@@ -50,14 +53,6 @@ public class KafkaConsumerProperties {
this.autoCommitOffset = autoCommitOffset;
}
public boolean isResetOffsets() {
return this.resetOffsets;
}
public void setResetOffsets(boolean resetOffsets) {
this.resetOffsets = resetOffsets;
}
public StartOffset getStartOffset() {
return this.startOffset;
}
@@ -99,7 +94,8 @@ public class KafkaConsumerProperties {
}
public enum StartOffset {
earliest(-2L), latest(-1L);
earliest(-2L),
latest(-1L);
private final long referencePoint;
@@ -119,4 +115,12 @@ public class KafkaConsumerProperties {
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
public String getDlqName() {
return dlqName;
}
public void setDlqName(String dlqName) {
this.dlqName = dlqName;
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,9 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
package org.springframework.cloud.stream.binder.kafka.properties;
import org.springframework.expression.Expression;
import java.util.HashMap;
import java.util.Map;
@@ -23,6 +25,7 @@ import javax.validation.constraints.NotNull;
/**
* @author Marius Bogoevici
* @author Henryk Konsek
*/
public class KafkaProducerProperties {
@@ -34,6 +37,8 @@ public class KafkaProducerProperties {
private int batchTimeout;
private Expression messageKeyExpression;
private Map<String, String> configuration = new HashMap<>();
public int getBufferSize() {
@@ -69,6 +74,14 @@ public class KafkaProducerProperties {
this.batchTimeout = batchTimeout;
}
public Expression getMessageKeyExpression() {
return messageKeyExpression;
}
public void setMessageKeyExpression(Expression messageKeyExpression) {
this.messageKeyExpression = messageKeyExpression;
}
public Map<String, String> getConfiguration() {
return this.configuration;
}

View File

@@ -0,0 +1,345 @@
/*
* Copyright 2014-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Collection;
import java.util.Properties;
import java.util.concurrent.Callable;
import kafka.common.ErrorMapping;
import kafka.utils.ZkUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.security.JaasUtils;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.cloud.stream.binder.BinderException;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.admin.AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
import org.springframework.cloud.stream.provisioning.ProvisioningException;
import org.springframework.cloud.stream.provisioning.ProvisioningProvider;
import org.springframework.retry.RetryCallback;
import org.springframework.retry.RetryContext;
import org.springframework.retry.RetryOperations;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;
/**
* Kafka implementation for {@link ProvisioningProvider}
*
* @author Soby Chacko
* @author Gary Russell
* @author Ilayaperumal Gopinathan
* @author Simon Flandergan
*/
public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsumerProperties<KafkaConsumerProperties>,
ExtendedProducerProperties<KafkaProducerProperties>>, InitializingBean {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaBinderConfigurationProperties configurationProperties;
private final AdminUtilsOperation adminUtilsOperation;
private RetryOperations metadataRetryOperations;
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
AdminUtilsOperation adminUtilsOperation) {
this.configurationProperties = kafkaBinderConfigurationProperties;
this.adminUtilsOperation = adminUtilsOperation;
}
/**
* @param metadataRetryOperations the retry configuration
*/
public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
this.metadataRetryOperations = metadataRetryOperations;
}
@Override
public void afterPropertiesSet() throws Exception {
if (this.metadataRetryOperations == null) {
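// Default retry behavior: up to 10 attempts with exponential backoff (100 ms initial interval, doubling, capped at 1 s)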
RetryTemplate retryTemplate = new RetryTemplate();
SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
simpleRetryPolicy.setMaxAttempts(10);
retryTemplate.setRetryPolicy(simpleRetryPolicy);
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(100);
backOffPolicy.setMultiplier(2);
backOffPolicy.setMaxInterval(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
this.metadataRetryOperations = retryTemplate;
}
}
@Override
public ProducerDestination provisionProducerDestination(final String name, ExtendedProducerProperties<KafkaProducerProperties> properties) {
if (this.logger.isInfoEnabled()) {
this.logger.info("Using kafka topic for outbound: " + name);
}
KafkaTopicUtils.validateTopicName(name);
createTopicsIfAutoCreateEnabledAndAdminUtilsPresent(name, properties.getPartitionCount(), false);
if (this.configurationProperties.isAutoCreateTopics() && adminUtilsOperation != null) {
final ZkUtils zkUtils = ZkUtils.apply(this.configurationProperties.getZkConnectionString(),
this.configurationProperties.getZkSessionTimeout(),
this.configurationProperties.getZkConnectionTimeout(),
JaasUtils.isZkSecurityEnabled());
int partitions = adminUtilsOperation.partitionSize(name, zkUtils);
return new KafkaProducerDestination(name, partitions);
}
else {
return new KafkaProducerDestination(name);
}
}
@Override
public ConsumerDestination provisionConsumerDestination(final String name, final String group, ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
KafkaTopicUtils.validateTopicName(name);
boolean anonymous = !StringUtils.hasText(group);
Assert.isTrue(!anonymous || !properties.getExtension().isEnableDlq(),
"DLQ support is not available for anonymous subscriptions");
if (properties.getInstanceCount() == 0) {
throw new IllegalArgumentException("Instance count cannot be zero");
}
int partitionCount = properties.getInstanceCount() * properties.getConcurrency();
createTopicsIfAutoCreateEnabledAndAdminUtilsPresent(name, partitionCount, properties.getExtension().isAutoRebalanceEnabled());
if (this.configurationProperties.isAutoCreateTopics() && adminUtilsOperation != null) {
final ZkUtils zkUtils = ZkUtils.apply(this.configurationProperties.getZkConnectionString(),
this.configurationProperties.getZkSessionTimeout(),
this.configurationProperties.getZkConnectionTimeout(),
JaasUtils.isZkSecurityEnabled());
int partitions = adminUtilsOperation.partitionSize(name, zkUtils);
if (properties.getExtension().isEnableDlq() && !anonymous) {
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName()) ?
properties.getExtension().getDlqName() : "error." + name + "." + group;
createTopicAndPartitions(dlqTopic, partitions, properties.getExtension().isAutoRebalanceEnabled());
return new KafkaConsumerDestination(name, partitions, dlqTopic);
}
return new KafkaConsumerDestination(name, partitions);
}
return new KafkaConsumerDestination(name);
}
private void createTopicsIfAutoCreateEnabledAndAdminUtilsPresent(final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker) {
if (this.configurationProperties.isAutoCreateTopics() && adminUtilsOperation != null) {
createTopicAndPartitions(topicName, partitionCount, tolerateLowerPartitionsOnBroker);
}
else if (this.configurationProperties.isAutoCreateTopics() && adminUtilsOperation == null) {
this.logger.warn("Auto creation of topics is enabled, but Kafka AdminUtils class is not present on the classpath. " +
"No topic will be created by the binder");
}
else if (!this.configurationProperties.isAutoCreateTopics()) {
this.logger.info("Auto creation of topics is disabled.");
}
}
/**
* Creates a Kafka topic if needed, or tries to increase its partition count to the
* desired number.
*/
private void createTopicAndPartitions(final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker) {
final ZkUtils zkUtils = ZkUtils.apply(this.configurationProperties.getZkConnectionString(),
this.configurationProperties.getZkSessionTimeout(),
this.configurationProperties.getZkConnectionTimeout(),
JaasUtils.isZkSecurityEnabled());
try {
short errorCode = adminUtilsOperation.errorCodeFromTopicMetadata(topicName, zkUtils);
if (errorCode == ErrorMapping.NoError()) {
// only consider minPartitionCount for resizing if autoAddPartitions is true
int effectivePartitionCount = this.configurationProperties.isAutoAddPartitions()
? Math.max(this.configurationProperties.getMinPartitionCount(), partitionCount)
: partitionCount;
int partitionSize = adminUtilsOperation.partitionSize(topicName, zkUtils);
if (partitionSize < effectivePartitionCount) {
if (this.configurationProperties.isAutoAddPartitions()) {
adminUtilsOperation.invokeAddPartitions(zkUtils, topicName, effectivePartitionCount, null, false);
}
else if (tolerateLowerPartitionsOnBroker) {
logger.warn("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead. "
+ "There will be " + (effectivePartitionCount - partitionSize) + " idle consumers");
}
else {
throw new ProvisioningException("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead. "
+ "Consider either increasing the partition count of the topic or enabling " +
"`autoAddPartitions`");
}
}
}
else if (errorCode == ErrorMapping.UnknownTopicOrPartitionCode()) {
// always consider minPartitionCount for topic creation
final int effectivePartitionCount = Math.max(this.configurationProperties.getMinPartitionCount(),
partitionCount);
this.metadataRetryOperations.execute(new RetryCallback<Object, RuntimeException>() {
@Override
public Object doWithRetry(RetryContext context) throws RuntimeException {
try {
adminUtilsOperation.invokeCreateTopic(zkUtils, topicName, effectivePartitionCount,
configurationProperties.getReplicationFactor(), new Properties());
}
catch (Exception e) {
String exceptionClass = e.getClass().getName();
if (exceptionClass.equals("kafka.common.TopicExistsException")
|| exceptionClass.equals("org.apache.kafka.common.errors.TopicExistsException")) {
if (logger.isWarnEnabled()) {
logger.warn("Attempt to create topic: " + topicName + ". Topic already exists.");
}
}
else {
throw e;
}
}
return null;
}
});
}
else {
throw new ProvisioningException("Error fetching Kafka topic metadata: ",
ErrorMapping.exceptionFor(errorCode));
}
}
finally {
zkUtils.close();
}
}
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable) {
try {
return this.metadataRetryOperations
.execute(new RetryCallback<Collection<PartitionInfo>, Exception>() {
@Override
public Collection<PartitionInfo> doWithRetry(RetryContext context) throws Exception {
Collection<PartitionInfo> partitions = callable.call();
// do a sanity check on the partition set
int partitionSize = partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
logger.warn("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead. "
+ "There will be " + (partitionCount - partitionSize) + " idle consumers");
}
else {
throw new IllegalStateException("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ") + "been found instead");
}
}
return partitions;
}
});
}
catch (Exception e) {
this.logger.error("Cannot initialize Binder", e);
throw new BinderException("Cannot initialize binder:", e);
}
}
private static final class KafkaProducerDestination implements ProducerDestination {
private final String producerDestinationName;
private final int partitions;
KafkaProducerDestination(String destinationName) {
this(destinationName, 0);
}
KafkaProducerDestination(String destinationName, int partitions) {
this.producerDestinationName = destinationName;
this.partitions = partitions;
}
@Override
public String getName() {
return producerDestinationName;
}
@Override
public String getNameForPartition(int partition) {
return producerDestinationName;
}
@Override
public String toString() {
return "KafkaProducerDestination{" +
"producerDestinationName='" + producerDestinationName + '\'' +
", partitions=" + partitions +
'}';
}
}
private static final class KafkaConsumerDestination implements ConsumerDestination {
private final String consumerDestinationName;
private final int partitions;
private final String dlqName;
KafkaConsumerDestination(String consumerDestinationName) {
this(consumerDestinationName, 0, null);
}
KafkaConsumerDestination(String consumerDestinationName, int partitions) {
this(consumerDestinationName, partitions, null);
}
KafkaConsumerDestination(String consumerDestinationName, int partitions, String dlqName) {
this.consumerDestinationName = consumerDestinationName;
this.partitions = partitions;
this.dlqName = dlqName;
}
@Override
public String getName() {
return this.consumerDestinationName;
}
@Override
public String toString() {
return "KafkaConsumerDestination{" +
"consumerDestinationName='" + consumerDestinationName + '\'' +
", partitions=" + partitions +
", dlqName='" + dlqName + '\'' +
'}';
}
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
package org.springframework.cloud.stream.binder.kafka.utils;
import java.io.UnsupportedEncodingException;
@@ -37,7 +37,8 @@ public final class KafkaTopicUtils {
if (!((b >= 'a') && (b <= 'z') || (b >= 'A') && (b <= 'Z') || (b >= '0') && (b <= '9') || (b == '.')
|| (b == '-') || (b == '_'))) {
throw new IllegalArgumentException(
"Topic name can only have ASCII alphanumerics, '.', '_' and '-'");
"Topic name can only have ASCII alphanumerics, '.', '_' and '-', but was: '" + topicName
+ "'");
}
}
}

View File

@@ -1,11 +1,11 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.2.0.M1</version>
<version>1.3.4.RELEASE</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-docs</artifactId>
@@ -71,8 +71,8 @@
<quiet>true</quiet>
<stylesheetfile>${basedir}/src/main/javadoc/spring-javadoc.css</stylesheetfile>
<links>
<link>http://docs.spring.io/spring-framework/docs/${spring.version}/javadoc-api/</link>
<link>http://docs.spring.io/spring-shell/docs/current/api/</link>
<link>https://docs.spring.io/spring-framework/docs/${spring.version}/javadoc-api/</link>
<link>https://docs.spring.io/spring-shell/docs/current/api/</link>
</links>
</configuration>
</execution>

View File

@@ -34,7 +34,7 @@ source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
http://compose.docker.io/[Docker Compose] to run the middleware servers
https://compose.docker.io/[Docker Compose] to run the middleware servers
in Docker containers.
=== Documentation
@@ -43,13 +43,13 @@ There is a "full" profile that will generate documentation.
=== Working with the code
If you don't have an IDE preference we would recommend that you use
http://www.springsource.com/developer/sts[Spring Tools Suite] or
http://eclipse.org[Eclipse] when working with the code. We use the
http://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
https://www.springsource.com/developer/sts[Spring Tools Suite] or
https://eclipse.org[Eclipse] when working with the code. We use the
https://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
should also work without issue.
==== Importing into eclipse with m2eclipse
We recommend the http://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
We recommend the https://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
marketplace".

View File

@@ -24,7 +24,7 @@ added after the original pull request but before a merge.
`eclipse-code-formatter.xml` file from the
https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
Cloud Build] project. If using IntelliJ, you can use the
http://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
Plugin] to import the same file.
* Make sure all new `.java` files have a simple Javadoc class comment with at least an
`@author` tag identifying you, and preferably at least a paragraph on what the class is
@@ -37,6 +37,6 @@ added after the original pull request but before a merge.
* A few unit tests would help a lot as well -- someone has to do it.
* If no-one else is using your branch, please rebase it against the current master (or
other target branch in the main project).
* When writing a commit message please follow http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).

View File

@@ -0,0 +1,109 @@
[[kafka-dlq-processing]]
== Dead-Letter Topic Processing
Because the framework cannot anticipate how users might want to dispose of dead-lettered messages, it does not provide any standard mechanism for handling them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is permanent, routing them back could cause an infinite loop.
The following Spring Boot application is an example of how to route those messages back to the original topic; it moves them to a third "parking lot" topic after three attempts.
The application is simply another spring-cloud-stream application that reads from the dead-letter topic.
It terminates when no messages are received for 5 seconds.
The examples assume the original destination is `so8400out` and the consumer group is `so8400`.
There are several considerations.
- Consider only running the rerouting when the main application is not running.
Otherwise, the retries for transient errors will be used up very quickly.
- Alternatively, use a two-stage approach: use this application to route to a third topic, and another to route from there back to the main topic.
- Since this technique uses a message header to keep track of retries, it won't work with `headerMode=raw`.
In that case, consider adding some data to the payload (that can be ignored by the main application).
- `x-retries` has to be added to the `headers` property (`spring.cloud.stream.kafka.binder.headers=x-retries`) on both this application and the main application, so that the header is transported between them.
- Since Kafka is publish/subscribe, replayed messages will be sent to each consumer group, even those that successfully processed a message the first time around.
.application.properties
[source]
----
spring.cloud.stream.bindings.input.group=so8400replay
spring.cloud.stream.bindings.input.destination=error.so8400out.so8400
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.output.producer.partitioned=true
spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot
spring.cloud.stream.bindings.parkingLot.producer.partitioned=true
spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest
spring.cloud.stream.kafka.binder.headers=x-retries
----
.Application
[source, java]
----
@SpringBootApplication
@EnableBinding(TwoOutputProcessor.class)
public class ReRouteDlqKApplication implements CommandLineRunner {
private static final String X_RETRIES_HEADER = "x-retries";
public static void main(String[] args) {
SpringApplication.run(ReRouteDlqKApplication.class, args).close();
}
private final AtomicInteger processed = new AtomicInteger();
@Autowired
private MessageChannel parkingLot;
@StreamListener(Processor.INPUT)
@SendTo(Processor.OUTPUT)
public Message<?> reRoute(Message<?> failed) {
processed.incrementAndGet();
Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class);
if (retries == null) {
System.out.println("First retry for " + failed);
return MessageBuilder.fromMessage(failed)
.setHeader(X_RETRIES_HEADER, 1)
.setHeader(BinderHeaders.PARTITION_OVERRIDE,
failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
.build();
}
else if (retries < 3) {
System.out.println("Another retry for " + failed);
return MessageBuilder.fromMessage(failed)
.setHeader(X_RETRIES_HEADER, retries + 1)
.setHeader(BinderHeaders.PARTITION_OVERRIDE,
failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
.build();
}
else {
System.out.println("Retries exhausted for " + failed);
parkingLot.send(MessageBuilder.fromMessage(failed)
.setHeader(BinderHeaders.PARTITION_OVERRIDE,
failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
.build());
}
return null;
}
@Override
public void run(String... args) throws Exception {
while (true) {
int count = this.processed.get();
Thread.sleep(5000);
if (count == this.processed.get()) {
System.out.println("Idle, terminating");
return;
}
}
}
public interface TwoOutputProcessor extends Processor {
@Output("parkingLot")
MessageChannel parkingLot();
}
}
----

View File

@@ -1,6 +1,6 @@
[[spring-cloud-stream-binder-kafka-reference]]
= Spring Cloud Stream Kafka Binder Reference Guide
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek
:doctype: book
:toc:
:toclevels: 4
@@ -11,23 +11,25 @@ Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinat
:spring-cloud-stream-binder-kafka-repo: snapshot
:github-tag: master
:spring-cloud-stream-binder-kafka-docs-version: current
:spring-cloud-stream-binder-kafka-docs: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:spring-cloud-stream-binder-kafka-docs: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: http://raw.github.com/{github-repo}/{github-tag}
:github-code: http://github.com/{github-repo}/tree/{github-tag}
:github-wiki: http://github.com/{github-repo}/wiki
:github-master-code: http://github.com/{github-repo}/tree/master
:github-raw: https://raw.github.com/{github-repo}/{github-tag}
:github-code: https://github.com/{github-repo}/tree/{github-tag}
:github-wiki: https://github.com/{github-repo}/wiki
:github-master-code: https://github.com/{github-repo}/tree/master
:sc-ext: java
// ======================================================================================
= Reference Guide
include::overview.adoc[]
include::dlq.adoc[]
= Appendices
[appendix]
include::building.adoc[]
include::contributing.adoc[]
// ======================================================================================

View File

@@ -2,6 +2,7 @@
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Spring Cloud Stream concepts map onto Apache Kafka specific constructs.
In addition, this guide also explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Usage
@@ -41,7 +42,7 @@ Partitioning also maps directly to Apache Kafka partitions as well.
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, refer to the https://github.com/spring-cloud/spring-cloud-stream/blob/master/spring-cloud-stream-docs/src/main/asciidoc/spring-cloud-stream-overview.adoc#configuration-options[core docs].
For common configuration options and properties pertaining to binder, refer to the <<binding-properties,core documentation>>.
=== Kafka Binder Properties
@@ -72,6 +73,11 @@ spring.cloud.stream.kafka.binder.headers::
The list of custom headers that will be transported by the binder.
+
Default: empty.
spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.
spring.cloud.stream.kafka.binder.offsetUpdateTimeWindow::
The frequency, in milliseconds, with which offsets are saved.
Ignored if `0`.
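As an illustration (not part of the original text), these binder-level properties might be combined as follows, assuming an application that transports one custom header:

[source]
----
spring.cloud.stream.kafka.binder.headers=x-custom-header
spring.cloud.stream.kafka.binder.healthTimeout=10
spring.cloud.stream.kafka.binder.offsetUpdateTimeWindow=10000
----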
@@ -146,18 +152,16 @@ recoveryInterval::
The interval between connection recovery attempts, in milliseconds.
+
Default: `5000`.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by `startOffset`.
+
Default: `false`.
startOffset::
The starting offset for new groups, or when `resetOffsets` is `true`.
The starting offset for new groups.
Allowed values: `earliest`, `latest`.
If the consumer group is set explicitly for the consumer 'binding' (via `spring.cloud.stream.bindings.<channelName>.group`), then 'startOffset' is set to `earliest`; otherwise it is set to `latest` for the `anonymous` consumer group.
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
Messages that result in errors will be forwarded to a topic named `error.<destination>.<group>`.
By default, messages that result in errors will be forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configurable via the property `dlqName`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
+
Default: `false`.
@@ -165,6 +169,10 @@ configuration::
Map with a key/value pair containing generic Kafka consumer properties.
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors will be forwarded to a topic named `error.<destination>.<group>`).
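For example, a minimal sketch enabling a custom-named DLQ for an `input` binding (the group and topic names here are hypothetical):

[source]
----
spring.cloud.stream.bindings.input.group=myGroup
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=my-dlq-topic
----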
=== Kafka Producer Properties
@@ -184,6 +192,11 @@ batchTimeout::
(Normally the producer does not wait at all, and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
For example `headers.key` or `payload.myKey`.
+
Default: `none`.
configuration::
Map with a key/value pair containing generic Kafka producer properties.
+
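A short sketch combining the two producer settings above (the `output` binding name and the `payload.id` key expression are hypothetical):

[source]
----
spring.cloud.stream.kafka.bindings.output.producer.messageKeyExpression=payload.id
spring.cloud.stream.kafka.bindings.output.producer.configuration.max.request.size=2097152
----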
@@ -233,7 +246,7 @@ public class ManuallyAcknowdledgingConsumer {
==== Example: security configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, for setting `security.protocol` to `SASL_SSL`, set:
@@ -245,7 +258,7 @@ spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the http://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application using a JAAS configuration file and using Spring Boot properties.
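As a sketch of the Spring Boot properties route (the principal and keytab values are hypothetical), Kerberos credentials could be supplied like this:

[source]
----
spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true
spring.cloud.stream.kafka.binder.jaas.options.storeKey=true
spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab
spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----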
@@ -328,26 +341,18 @@ In secure environments, we strongly recommend creating topics and managing ACLs
==== Using the binder with Apache Kafka 0.10
The binder also supports connecting to Kafka 0.10 brokers.
In order to support this, when you create the project that contains your application, include `spring-cloud-starter-stream-kafka` as you normally would do for 0.9 based applications.
Then add these dependencies at the top of the `<dependencies>` section in the pom.xml file to override the Apache Kafka, Spring Kafka, and Spring Integration Kafka with 0.10-compatible versions as in the following example:
The default Kafka support in the Spring Cloud Stream Kafka binder is for Kafka version 0.10.1.1. The binder also supports connecting to other 0.10-based versions and to 0.9 clients.
In order to do this, when you create the project that contains your application, include `spring-cloud-starter-stream-kafka` as you normally would do for the default binder.
Then add these dependencies at the top of the `<dependencies>` section in the pom.xml file to override the dependencies.
Here is an example for downgrading your application to 0.10.0.1. Since it is still on the 0.10 line, the default `spring-kafka` and `spring-integration-kafka` versions can be retained.
[source,xml]
----
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>1.1.1.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>2.1.0.RELEASE</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.10.0.0</version>
<version>0.10.0.1</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
@@ -355,6 +360,44 @@ Then add these dependencies at the top of the `<dependencies>` section in the po
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.10.0.1</version>
</dependency>
----
Here is another example, using version 0.9.0.1.
[source,xml]
----
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>1.0.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>2.0.1.RELEASE</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.9.0.1</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.9.0.1</version>
</dependency>
----
[NOTE]
@@ -369,8 +412,8 @@ For best results, we recommend using the most recent 0.10-compatible versions of
The Apache Kafka Binder uses the administrative utilities which are part of the Apache Kafka server library to create and reconfigure topics.
If the inclusion of the Apache Kafka server library and its dependencies is not necessary at runtime because the application will rely on the topics being configured administratively, the Kafka binder allows the Apache Kafka server dependency to be excluded from the application.
If you use Kafka 10 dependencies as advised above, all you have to do is not to include the kafka broker dependency.
If you use Kafka 0.9, then ensure that you exclude the kafka broker jar from the `spring-cloud-starter-stream-kafka` dependency as following.
If you use non-default versions of the Kafka dependencies, as advised above, all you have to do is omit the Kafka broker dependency.
If you use the default Kafka version, then ensure that you exclude the Kafka broker jar from the `spring-cloud-starter-stream-kafka` dependency, as follows.
[source,xml]
----
@@ -392,4 +435,146 @@ On the other hand, if auto topic creation is disabled on the server, then care m
If you want to have full control over how partitions are allocated, then leave the default settings as they are, i.e. do not exclude the kafka broker jar and ensure that `spring.cloud.stream.kafka.binder.autoCreateTopics` is set to `true`, which is the default.
== Kafka Streams Binding Capabilities of Spring Cloud Stream
Spring Cloud Stream Kafka support also includes a binder specifically designed for Kafka Streams binding.
Using this binder, applications can be written that leverage the Kafka Streams API.
For more information on Kafka Streams, see the https://kafka.apache.org/documentation/streams/developer-guide[Kafka Streams API Developer Manual].
Kafka Streams support in Spring Cloud Stream is based on the foundations provided by the Spring Kafka project. For details on that support, see https://docs.spring.io/spring-kafka/reference/html/_reference.html#kafka-streams[Kafka Streams Support in Spring Kafka].
Here are the Maven coordinates for the Spring Cloud Stream KStream binder artifact.
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kstream</artifactId>
</dependency>
----
In addition to leveraging the Spring Boot-based programming model of Spring Cloud Stream, one of the main benefits the KStream binder provides is that it avoids the boilerplate configuration you would otherwise need to write when using the Kafka Streams API directly.
The current support exposes the high-level streams DSL of the Kafka Streams API through Spring Cloud Stream.
=== Usage example of high level streams DSL
This application listens to a Kafka topic and writes the word count for each unique word it sees in a 5-second time window.
[source]
----
@SpringBootApplication
@EnableBinding(KStreamProcessor.class)
public class WordCountProcessorApplication {

    @StreamListener("input")
    @SendTo("output")
    public KStream<?, String> process(KStream<?, String> input) {
        return input
                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
                .map((key, word) -> new KeyValue<>(word, word))
                .groupByKey(Serdes.String(), Serdes.String())
                .count(TimeWindows.of(5000), "store-name")
                .toStream()
                .map((w, c) -> new KeyValue<>(null, "Count for " + w.key() + ": " + c));
    }

    public static void main(String[] args) {
        SpringApplication.run(WordCountProcessorApplication.class, args);
    }

}
----
If you build it as a runnable Spring Boot fat JAR, you can run the above example in the following way:
[source]
----
java -jar uber.jar --spring.cloud.stream.bindings.input.destination=words --spring.cloud.stream.bindings.output.destination=counts
----
This means that the application listens to the incoming Kafka topic `words` and writes to the output topic `counts`.
Spring Cloud Stream ensures that the messages from both the incoming and outgoing topics are bound as KStream objects.
The developer can thus focus exclusively on the business aspects of the code, that is, writing the logic required in the processor, rather than setting up the streams-specific configuration required by the Kafka Streams infrastructure.
All that boilerplate is handled by Spring Cloud Stream behind the scenes.
=== Support for interactive queries
If access to the `KafkaStreams` is needed for interactive queries, the internal `KafkaStreams` instance can be accessed via `KStreamBuilderFactoryBean.getKafkaStreams()`.
You can autowire the `KStreamBuilderFactoryBean` instance provided by the KStream binder, get the `KafkaStreams` instance from it, and then retrieve the underlying store, execute queries on it, and so on, as sketched below.
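A minimal sketch of such a query, assuming a queryable key-value store named `word-counts` (the store name and value types are hypothetical):

[source, java]
----
@Autowired
private KStreamBuilderFactoryBean kStreamBuilderFactoryBean;

public Long countFor(String word) {
    // Obtain the KafkaStreams instance managed by the factory bean
    KafkaStreams streams = kStreamBuilderFactoryBean.getKafkaStreams();
    // Look up the (hypothetical) key-value store and query it interactively
    ReadOnlyKeyValueStore<String, Long> store =
            streams.store("word-counts", QueryableStoreTypes.keyValueStore());
    return store.get(word);
}
----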
=== Kafka Streams properties
configuration::
Map with a key/value pair containing properties pertaining to Kafka Streams API.
This property must be prefixed with `spring.cloud.stream.kstream.binder.`.
Following are some examples of using this property.
[source]
----
spring.cloud.stream.kstream.binder.configuration.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kstream.binder.configuration.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kstream.binder.configuration.commit.interval.ms=1000
----
For more information about all the properties that may go into the streams configuration, see the `StreamsConfig` JavaDocs.
There can also be binding specific properties.
For instance, you can use a different Serde for your input or output destination.
[source]
----
spring.cloud.stream.kstream.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde
spring.cloud.stream.kstream.bindings.output.producer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde
----
timeWindow.length::
Many streaming applications written using Kafka Streams involve windowing operations.
If you specify this property, an `org.apache.kafka.streams.kstream.TimeWindows` bean is automatically provided and can be autowired in applications; the bean is created only if this property is provided.
This property must be prefixed with `spring.cloud.stream.kstream.`.
Following is an example of using this property.
Values are provided in milliseconds.
[source]
----
spring.cloud.stream.kstream.timeWindow.length=5000
----
timeWindow.advanceBy::
This property goes hand in hand with `timeWindow.length` and has no effect on its own.
If you provide this property, the generated `org.apache.kafka.streams.kstream.TimeWindows` bean will automatically contain this information.
This property must be prefixed with `spring.cloud.stream.kstream.`.
Following is an example of using this property.
Values are provided in milliseconds.
[source]
----
spring.cloud.stream.kstream.timeWindow.advanceBy=1000
----
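To illustrate the autowiring mentioned under `timeWindow.length`, here is a hedged sketch that reworks the earlier word-count processor to use the property-driven window instead of a hard-coded `TimeWindows.of(5000)`:

[source, java]
----
@Autowired
private TimeWindows timeWindows; // provided by the binder when timeWindow.length is set

@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<?, String> input) {
    return input
            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
            .map((key, word) -> new KeyValue<>(word, word))
            .groupByKey(Serdes.String(), Serdes.String())
            .count(timeWindows, "store-name")
            .toStream()
            .map((w, c) -> new KeyValue<>(null, "Count for " + w.key() + ": " + c));
}
----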
[[kafka-error-channels]]
== Error Channels
Starting with _version 1.3_, the binder unconditionally sends exceptions to an error channel for each consumer destination, and can be configured to send async producer send failures to an error channel too.
See <<binder-error-channels>> for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
* `failedMessage` - the spring-messaging `Message<?>` that failed to be sent.
* `record` - the raw `ProducerRecord` that was created from the `failedMessage`.
There is no automatic handling of these exceptions (such as sending to a <<kafka-dlq-processing, Dead-Letter queue>>); you can consume these exceptions with your own Spring Integration flow.
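For example, a sketch of such a flow (the channel name `so8400out.errors` follows the core binder naming convention and is hypothetical here):

[source, java]
----
@ServiceActivator(inputChannel = "so8400out.errors")
public void handleSendFailure(ErrorMessage errorMessage) {
    // The payload is the KafkaSendFailureException described above
    KafkaSendFailureException failure = (KafkaSendFailureException) errorMessage.getPayload();
    System.out.println("Failed to send record: " + failure.getRecord());
}
----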
[[kafka-metrics]]
== Kafka Metrics
The Kafka binder module exposes the following metric:
`spring.cloud.stream.binder.kafka.someGroup.someTopic.lag` - this metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
For example, if the value of the metric `spring.cloud.stream.binder.kafka.myGroup.myTopic.lag` is `1000`, the consumer group `myGroup` has `1000` messages waiting to be consumed from the topic `myTopic`.
This metric is particularly useful for providing auto-scaling feedback to the PaaS platform of your choice.
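For instance, with the Spring Boot 1.x `/metrics` actuator endpoint enabled, the lag metric would surface in the JSON response along the following (hypothetical) lines:

[source]
----
$ curl -s localhost:8080/metrics
{
  ...
  "spring.cloud.stream.binder.kafka.myGroup.myTopic.lag": 1000,
  ...
}
----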

View File

@@ -9,7 +9,7 @@
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -20,7 +20,7 @@
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl d"
version='1.0'>

View File

@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -20,7 +20,7 @@ under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl d"
version='1.0'>

View File

@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an

View File

@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an

View File

@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -20,7 +20,7 @@ under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl"
version='1.0'>

View File

@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -22,7 +22,7 @@ under the License.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns:fo="http://www.w3.org/1999/XSL/Format"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:xlink='http://www.w3.org/1999/xlink'
xmlns:exsl="http://exslt.org/common"
exclude-result-prefixes="exsl xslthl d xlink"

View File

@@ -19,5 +19,5 @@
<highlighter id="properties" file="./xslthl/properties-hl.xml" />
<highlighter id="json" file="./xslthl/json-hl.xml" />
<highlighter id="yaml" file="./xslthl/yaml-hl.xml" />
<namespace prefix="xslthl" uri="http://xslthl.sf.net" />
<namespace prefix="xslthl" uri="http://xslthl.sourceforge.net/" />
</xslthl-config>

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for SH
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2010 Mathieu Malaterre
This software is provided 'as-is', without any express or implied

View File

@@ -3,7 +3,7 @@
Syntax highlighting definition for C
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for C++
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for C#
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for CSS files
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2011-2012 Martin Hujer, Michiel Hendriks
This software is provided 'as-is', without any express or implied
@@ -26,7 +26,7 @@ freely, subject to the following restrictions:
Martin Hujer <mhujer at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
Reference: http://www.w3.org/TR/CSS21/propidx.html
Reference: https://www.w3.org/TR/CSS21/propidx.html
-->
<highlighters>

View File

@@ -7,7 +7,7 @@
myxml-hl.xml - configuration of the XML highlighter, which highlights
HTML elements and XSL elements separately
This file has been customized for the Asciidoctor project (http://asciidoctor.org).
This file has been customized for the Asciidoctor project (https://asciidoctor.org).
-->
<highlighters>
<highlighter type="xml">

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for ini files
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for Java
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for JavaScript
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for Perl
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for PHP
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for Java
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for Python
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for Ruby
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied

View File

@@ -4,7 +4,7 @@
Syntax highlighting definition for SQL:1999
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
https://sourceforge.net/projects/xslthl/
Copyright (C) 2012 Michiel Hendriks, Martin Hujer, k42b3
This software is provided 'as-is', without any express or implied

View File

@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
@@ -10,10 +10,14 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.2.0.M1</version>
<version>1.3.4.RELEASE</version>
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
@@ -37,17 +41,6 @@
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>${spring-integration-kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.apache.avro</groupId>
<artifactId>avro-compiler</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
@@ -59,12 +52,6 @@
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>${spring-integration-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
@@ -82,6 +69,11 @@
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>

View File

@@ -0,0 +1,68 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;
/**
 * An {@link EnvironmentPostProcessor} that sets some common configuration properties (log config, etc.) for the Kafka
 * binder.
 *
 * @author Ilayaperumal Gopinathan
 */
public class KafkaBinderEnvironmentPostProcessor implements EnvironmentPostProcessor {

    public final static String SPRING_KAFKA = "spring.kafka";

    public final static String SPRING_KAFKA_PRODUCER = SPRING_KAFKA + ".producer";

    public final static String SPRING_KAFKA_CONSUMER = SPRING_KAFKA + ".consumer";

    public final static String SPRING_KAFKA_PRODUCER_KEY_SERIALIZER = SPRING_KAFKA_PRODUCER + "." + "keySerializer";

    public final static String SPRING_KAFKA_PRODUCER_VALUE_SERIALIZER = SPRING_KAFKA_PRODUCER + "." + "valueSerializer";

    public final static String SPRING_KAFKA_CONSUMER_KEY_DESERIALIZER = SPRING_KAFKA_CONSUMER + "." + "keyDeserializer";

    public final static String SPRING_KAFKA_CONSUMER_VALUE_DESERIALIZER = SPRING_KAFKA_CONSUMER + "." + "valueDeserializer";

    private static final String KAFKA_BINDER_DEFAULT_PROPERTIES = "kafkaBinderDefaultProperties";

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        if (!environment.getPropertySources().contains(KAFKA_BINDER_DEFAULT_PROPERTIES)) {
            Map<String, Object> kafkaBinderDefaultProperties = new HashMap<>();
            kafkaBinderDefaultProperties.put("logging.level.org.I0Itec.zkclient", "ERROR");
            kafkaBinderDefaultProperties.put("logging.level.kafka.server.KafkaConfig", "ERROR");
            kafkaBinderDefaultProperties.put("logging.level.kafka.admin.AdminClient.AdminConfig", "ERROR");
            kafkaBinderDefaultProperties.put(SPRING_KAFKA_PRODUCER_KEY_SERIALIZER, ByteArraySerializer.class.getName());
            kafkaBinderDefaultProperties.put(SPRING_KAFKA_PRODUCER_VALUE_SERIALIZER, ByteArraySerializer.class.getName());
            kafkaBinderDefaultProperties.put(SPRING_KAFKA_CONSUMER_KEY_DESERIALIZER, ByteArrayDeserializer.class.getName());
            kafkaBinderDefaultProperties.put(SPRING_KAFKA_CONSUMER_VALUE_DESERIALIZER, ByteArrayDeserializer.class.getName());
            environment.getPropertySources().addLast(new MapPropertySource(KAFKA_BINDER_DEFAULT_PROPERTIES, kafkaBinderDefaultProperties));
        }
    }

}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,70 +16,110 @@
package org.springframework.cloud.stream.binder.kafka;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfigurationProperties;
import org.springframework.kafka.core.ConsumerFactory;
/**
* Health indicator for Kafka.
*
* @author Ilayaperumal Gopinathan
* @author Marius Bogoevici
* @author Henryk Konsek
* @author Gary Russell
*/
public class KafkaBinderHealthIndicator implements HealthIndicator {
private static final int DEFAULT_TIMEOUT = 60;
private final KafkaMessageChannelBinder binder;
private final KafkaBinderConfigurationProperties configurationProperties;
private final ConsumerFactory<?, ?> consumerFactory;
private int timeout = DEFAULT_TIMEOUT;
public KafkaBinderHealthIndicator(KafkaMessageChannelBinder binder,
KafkaBinderConfigurationProperties configurationProperties) {
ConsumerFactory<?, ?> consumerFactory) {
this.binder = binder;
this.configurationProperties = configurationProperties;
this.consumerFactory = consumerFactory;
}
/**
* Set the timeout in seconds to retrieve health information.
* @param timeout the timeout - default 60.
*/
public void setTimeout(int timeout) {
this.timeout = timeout;
}
@Override
public Health health() {
Map<String, String> properties = new HashMap<>();
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configurationProperties
.getKafkaConnectionString());
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
KafkaConsumer metadataConsumer = new KafkaConsumer(properties);
try {
Set<String> downMessages = new HashSet<>();
for (String topic : this.binder.getTopicsInUse().keySet()) {
List<PartitionInfo> partitionInfos = metadataConsumer.partitionsFor(topic);
for (PartitionInfo partitionInfo : partitionInfos) {
if (this.binder.getTopicsInUse().get(topic).contains(partitionInfo) && partitionInfo.leader()
.id() == -1) {
downMessages.add(partitionInfo.toString());
ExecutorService exec = Executors.newSingleThreadExecutor();
Future<Health> future = exec.submit(new Callable<Health>() {
@Override
public Health call() {
try (Consumer<?, ?> metadataConsumer = consumerFactory.createConsumer()) {
Set<String> downMessages = new HashSet<>();
for (String topic : KafkaBinderHealthIndicator.this.binder.getTopicsInUse().keySet()) {
List<PartitionInfo> partitionInfos = metadataConsumer.partitionsFor(topic);
for (PartitionInfo partitionInfo : partitionInfos) {
if (KafkaBinderHealthIndicator.this.binder.getTopicsInUse().get(topic).getPartitionInfos()
.contains(partitionInfo) && partitionInfo.leader().id() == -1) {
downMessages.add(partitionInfo.toString());
}
}
}
if (downMessages.isEmpty()) {
return Health.up().build();
}
else {
return Health.down()
.withDetail("Following partitions in use have no leaders: ", downMessages.toString())
.build();
}
}
catch (Exception e) {
return Health.down(e).build();
}
}
if (downMessages.isEmpty()) {
return Health.up().build();
}
return Health.down().withDetail("Following partitions in use have no leaders: ", downMessages.toString())
});
try {
return future.get(this.timeout, TimeUnit.SECONDS);
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
return Health.down()
.withDetail("Interrupted while waiting for partition information in", this.timeout + " seconds")
.build();
}
catch (Exception e) {
catch (ExecutionException e) {
return Health.down(e).build();
}
catch (TimeoutException e) {
return Health.down()
.withDetail("Failed to retrieve partition information in", this.timeout + " seconds")
.build();
}
finally {
metadataConsumer.close();
exec.shutdownNow();
}
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -28,7 +28,7 @@ import org.apache.kafka.common.security.JaasUtils;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ApplicationListener;

View File

@@ -0,0 +1,126 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.actuate.endpoint.PublicMetrics;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.util.ObjectUtils;
/**
* Metrics for Kafka binder.
*
* @author Henryk Konsek
*/
public class KafkaBinderMetrics implements PublicMetrics {

    private final static Logger LOG = LoggerFactory.getLogger(KafkaBinderMetrics.class);

    static final String METRIC_PREFIX = "spring.cloud.stream.binder.kafka";

    private final KafkaMessageChannelBinder binder;

    private final KafkaBinderConfigurationProperties binderConfigurationProperties;

    private ConsumerFactory<?, ?> defaultConsumerFactory;

    public KafkaBinderMetrics(KafkaMessageChannelBinder binder,
            KafkaBinderConfigurationProperties binderConfigurationProperties,
            ConsumerFactory<?, ?> defaultConsumerFactory) {
        this.binder = binder;
        this.binderConfigurationProperties = binderConfigurationProperties;
        this.defaultConsumerFactory = defaultConsumerFactory;
    }

    public KafkaBinderMetrics(KafkaMessageChannelBinder binder,
            KafkaBinderConfigurationProperties binderConfigurationProperties) {
        this(binder, binderConfigurationProperties, null);
    }

    @Override
    public Collection<Metric<?>> metrics() {
        List<Metric<?>> metrics = new LinkedList<>();
        for (Map.Entry<String, KafkaMessageChannelBinder.TopicInformation> topicInfo : this.binder.getTopicsInUse()
                .entrySet()) {
            if (!topicInfo.getValue().isConsumerTopic()) {
                continue;
            }
            String topic = topicInfo.getKey();
            String group = topicInfo.getValue().getConsumerGroup();
            try (Consumer<?, ?> metadataConsumer = createConsumerFactory(group).createConsumer()) {
                List<PartitionInfo> partitionInfos = metadataConsumer.partitionsFor(topic);
                List<TopicPartition> topicPartitions = new LinkedList<>();
                for (PartitionInfo partitionInfo : partitionInfos) {
                    topicPartitions.add(new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
                }
                Map<TopicPartition, Long> endOffsets = metadataConsumer.endOffsets(topicPartitions);
                long lag = 0;
                for (Map.Entry<TopicPartition, Long> endOffset : endOffsets.entrySet()) {
                    OffsetAndMetadata current = metadataConsumer.committed(endOffset.getKey());
                    if (current != null) {
                        lag += endOffset.getValue() - current.offset();
                    }
                    else {
                        lag += endOffset.getValue();
                    }
                }
                metrics.add(new Metric<>(String.format("%s.%s.%s.lag", METRIC_PREFIX, group, topic), lag));
            }
            catch (Exception e) {
                LOG.debug("Cannot generate metric for topic: " + topic, e);
            }
        }
        return metrics;
    }

    private ConsumerFactory<?, ?> createConsumerFactory(String group) {
        if (defaultConsumerFactory != null) {
            return defaultConsumerFactory;
        }
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        if (!ObjectUtils.isEmpty(binderConfigurationProperties.getConsumerConfiguration())) {
            props.putAll(binderConfigurationProperties.getConsumerConfiguration());
        }
        if (!props.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                    this.binderConfigurationProperties.getKafkaConnectionString());
        }
        props.put("group.id", group);
        return new DefaultKafkaConsumerFactory<>(props);
    }

}

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2014-2016 the original author or authors.
* Copyright 2014-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,68 +16,73 @@
package org.springframework.cloud.stream.binder.kafka;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.Callable;
import kafka.common.ErrorMapping;
import kafka.utils.ZkUtils;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.security.JaasUtils;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.utils.Utils;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.cloud.stream.binder.AbstractMessageChannelBinder;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderException;
import org.springframework.cloud.stream.binder.BinderHeaders;
import org.springframework.cloud.stream.binder.EmbeddedHeaderUtils;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.admin.AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.MessageValues;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
import org.springframework.context.Lifecycle;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.integration.core.MessageProducer;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.integration.kafka.support.RawRecordHeaderErrorMessageStrategy;
import org.springframework.integration.support.ErrorMessageStrategy;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ErrorHandler;
import org.springframework.kafka.listener.config.ContainerProperties;
import org.springframework.kafka.support.ProducerListener;
import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionInitialOffset;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHandler;
import org.springframework.retry.RetryCallback;
import org.springframework.retry.RetryContext;
import org.springframework.retry.RetryOperations;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.messaging.MessagingException;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
/**
* A {@link Binder} that uses Kafka as the underlying middleware.
@@ -89,29 +94,32 @@ import org.springframework.util.StringUtils;
* @author Gary Russell
* @author Mark Fisher
* @author Soby Chacko
* @author Henryk Konsek
* @author Doug Saus
* @author Aldo Sinanaj
*/
public class KafkaMessageChannelBinder extends
AbstractMessageChannelBinder<ExtendedConsumerProperties<KafkaConsumerProperties>,
ExtendedProducerProperties<KafkaProducerProperties>, Collection<PartitionInfo>, String>
implements ExtendedPropertiesBinder<MessageChannel, KafkaConsumerProperties, KafkaProducerProperties>,
DisposableBean {
ExtendedProducerProperties<KafkaProducerProperties>, KafkaTopicProvisioner>
implements ExtendedPropertiesBinder<MessageChannel, KafkaConsumerProperties, KafkaProducerProperties> {
public static final String X_ORIGINAL_TOPIC = "x-original-topic";
public static final String X_EXCEPTION_MESSAGE = "x-exception-message";
public static final String X_EXCEPTION_STACKTRACE = "x-exception-stacktrace";
private final KafkaBinderConfigurationProperties configurationProperties;
private RetryOperations metadataRetryOperations;
private final Map<String, Collection<PartitionInfo>> topicsInUse = new HashMap<>();
private final Map<String, TopicInformation> topicsInUse = new HashMap<>();
private ProducerListener<byte[], byte[]> producerListener;
private volatile Producer<byte[], byte[]> dlqProducer;
private KafkaExtendedBindingProperties extendedBindingProperties = new KafkaExtendedBindingProperties();
private AdminUtilsOperation adminUtilsOperation;
public KafkaMessageChannelBinder(KafkaBinderConfigurationProperties configurationProperties) {
super(false, headersToMap(configurationProperties));
public KafkaMessageChannelBinder(KafkaBinderConfigurationProperties configurationProperties,
KafkaTopicProvisioner provisioningProvider) {
super(false, headersToMap(configurationProperties), provisioningProvider);
this.configurationProperties = configurationProperties;
}
@@ -131,55 +139,15 @@ public class KafkaMessageChannelBinder extends
return headersToMap;
}
public void setAdminUtilsOperation(AdminUtilsOperation adminUtilsOperation) {
this.adminUtilsOperation = adminUtilsOperation;
}
/**
* Retry configuration for operations such as validating topic creation
*
* @param metadataRetryOperations the retry configuration
*/
public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
this.metadataRetryOperations = metadataRetryOperations;
}
public void setExtendedBindingProperties(KafkaExtendedBindingProperties extendedBindingProperties) {
this.extendedBindingProperties = extendedBindingProperties;
}
@Override
public void onInit() throws Exception {
if (this.metadataRetryOperations == null) {
RetryTemplate retryTemplate = new RetryTemplate();
SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
simpleRetryPolicy.setMaxAttempts(10);
retryTemplate.setRetryPolicy(simpleRetryPolicy);
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(100);
backOffPolicy.setMultiplier(2);
backOffPolicy.setMaxInterval(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
this.metadataRetryOperations = retryTemplate;
}
}
@Override
public void destroy() throws Exception {
if (this.dlqProducer != null) {
this.dlqProducer.close();
this.dlqProducer = null;
}
}
public void setProducerListener(ProducerListener<byte[], byte[]> producerListener) {
this.producerListener = producerListener;
}
Map<String, Collection<PartitionInfo>> getTopicsInUse() {
Map<String, TopicInformation> getTopicsInUse() {
return this.topicsInUse;
}
@@ -194,68 +162,81 @@ public class KafkaMessageChannelBinder extends
}
@Override
protected MessageHandler createProducerMessageHandler(final String destination,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties) throws Exception {
protected MessageHandler createProducerMessageHandler(final ProducerDestination destination,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties, MessageChannel errorChannel)
throws Exception {
final DefaultKafkaProducerFactory<byte[], byte[]> producerFB = getProducerFactory(producerProperties);
Collection<PartitionInfo> partitions = provisioningProvider.getPartitionsForTopic(
producerProperties.getPartitionCount(),
false,
new Callable<Collection<PartitionInfo>>() {
KafkaTopicUtils.validateTopicName(destination);
createTopicsIfAutoCreateEnabledAndAdminUtilsPresent(destination, producerProperties.getPartitionCount());
Collection<PartitionInfo> partitions = getPartitionsForTopic(destination, producerProperties.getPartitionCount());
@Override
public Collection<PartitionInfo> call() throws Exception {
Producer<byte[], byte[]> producer = producerFB.createProducer();
List<PartitionInfo> partitionsFor = producer.partitionsFor(destination.getName());
producer.close();
producerFB.destroy();
return partitionsFor;
}
});
this.topicsInUse.put(destination.getName(), new TopicInformation(null, partitions));
if (producerProperties.getPartitionCount() < partitions.size()) {
if (this.logger.isInfoEnabled()) {
this.logger.info("The `partitionCount` of the producer for topic " + destination + " is "
this.logger.info("The `partitionCount` of the producer for topic " + destination.getName() + " is "
+ producerProperties.getPartitionCount() + ", smaller than the actual partition count of "
+ partitions.size() + " of the topic. The larger number will be used instead.");
}
/*
* This is dirty; it relies on the fact that we, and the partition
* interceptor, share a hard reference to the producer properties instance.
* But I don't see another way to fix it since the interceptor has already
* been added to the channel, and we don't have access to the channel here; if
* we did, we could inject the proper partition count there.
* TODO: Consider this when doing the 2.0 binder restructuring.
*/
producerProperties.setPartitionCount(partitions.size());
}
this.topicsInUse.put(destination, partitions);
DefaultKafkaProducerFactory<byte[], byte[]> producerFB = getProducerFactory(producerProperties);
KafkaTemplate<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFB);
if (this.producerListener != null) {
kafkaTemplate.setProducerListener(this.producerListener);
}
return new ProducerConfigurationMessageHandler(kafkaTemplate, destination, producerProperties, producerFB);
}
@Override
protected String createProducerDestinationIfNecessary(String name,
ExtendedProducerProperties<KafkaProducerProperties> properties) {
if (this.logger.isInfoEnabled()) {
this.logger.info("Using kafka topic for outbound: " + name);
ProducerConfigurationMessageHandler handler = new ProducerConfigurationMessageHandler(kafkaTemplate,
destination.getName(), producerProperties, producerFB);
if (errorChannel != null) {
handler.setSendFailureChannel(errorChannel);
}
KafkaTopicUtils.validateTopicName(name);
createTopicsIfAutoCreateEnabledAndAdminUtilsPresent(name, properties.getPartitionCount());
Collection<PartitionInfo> partitions = getPartitionsForTopic(name, properties.getPartitionCount());
if (properties.getPartitionCount() < partitions.size()) {
if (this.logger.isInfoEnabled()) {
this.logger.info("The `partitionCount` of the producer for topic " + name + " is "
+ properties.getPartitionCount() + ", smaller than the actual partition count of "
+ partitions.size() + " of the topic. The larger number will be used instead.");
}
}
this.topicsInUse.put(name, partitions);
return name;
return handler;
}
private DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties) {
Map<String, Object> props = new HashMap<>();
if (!ObjectUtils.isEmpty(configurationProperties.getConfiguration())) {
props.putAll(configurationProperties.getConfiguration());
}
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configurationProperties.getKafkaConnectionString());
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, String.valueOf(producerProperties.getExtension().getBufferSize()));
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.ACKS_CONFIG, String.valueOf(this.configurationProperties.getRequiredAcks()));
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
if (!ObjectUtils.isEmpty(configurationProperties.getProducerConfiguration())) {
props.putAll(configurationProperties.getProducerConfiguration());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
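// Effective precedence in the map assembled above (later puts win): binding-level
// extension configuration > binder-level producerConfiguration > the explicit
// batch.size/linger.ms/compression.type defaults derived from the binder
// properties. A sketch of equivalent external configuration, with property names
// assumed from the Spring Cloud Stream conventions rather than taken from this diff:
//
//   spring.cloud.stream.kafka.binder.requiredAcks=1
//   spring.cloud.stream.kafka.bindings.output.producer.bufferSize=16384   // -> batch.size
//   spring.cloud.stream.kafka.bindings.output.producer.batchTimeout=0     // -> linger.ms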
@@ -263,138 +244,207 @@ public class KafkaMessageChannelBinder extends
}
@Override
protected Collection<PartitionInfo> createConsumerDestinationIfNecessary(String name, String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
KafkaTopicUtils.validateTopicName(name);
if (properties.getInstanceCount() == 0) {
throw new IllegalArgumentException("Instance count cannot be zero");
}
int partitionCount = properties.getInstanceCount() * properties.getConcurrency();
createTopicsIfAutoCreateEnabledAndAdminUtilsPresent(name, partitionCount);
Collection<PartitionInfo> allPartitions = getPartitionsForTopic(name, partitionCount);
Collection<PartitionInfo> listenedPartitions;
if (properties.getExtension().isAutoRebalanceEnabled() ||
properties.getInstanceCount() == 1) {
listenedPartitions = allPartitions;
}
else {
listenedPartitions = new ArrayList<>();
for (PartitionInfo partition : allPartitions) {
// divide partitions across modules
if ((partition.partition() % properties.getInstanceCount()) == properties.getInstanceIndex()) {
listenedPartitions.add(partition);
}
}
}
this.topicsInUse.put(name, listenedPartitions);
return listenedPartitions;
}
@Override
@SuppressWarnings("unchecked")
protected MessageProducer createConsumerEndpoint(final ConsumerDestination destination, final String group,
final ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties) {
boolean anonymous = !StringUtils.hasText(group);
Assert.isTrue(!anonymous || !extendedConsumerProperties.getExtension().isEnableDlq(),
"DLQ support is not available for anonymous subscriptions");
String consumerGroup = anonymous ? "anonymous." + UUID.randomUUID().toString() : group;
final ConsumerFactory<?, ?> consumerFactory = createKafkaConsumerFactory(anonymous, consumerGroup,
extendedConsumerProperties);
int partitionCount = extendedConsumerProperties.getInstanceCount()
* extendedConsumerProperties.getConcurrency();
Collection<PartitionInfo> allPartitions = provisioningProvider.getPartitionsForTopic(partitionCount,
extendedConsumerProperties.getExtension().isAutoRebalanceEnabled(),
new Callable<Collection<PartitionInfo>>() {
@Override
public Collection<PartitionInfo> call() throws Exception {
Consumer<?, ?> consumer = consumerFactory.createConsumer();
List<PartitionInfo> partitionsFor = consumer.partitionsFor(destination.getName());
consumer.close();
return partitionsFor;
}
});
Collection<PartitionInfo> listenedPartitions;
if (extendedConsumerProperties.getExtension().isAutoRebalanceEnabled() ||
extendedConsumerProperties.getInstanceCount() == 1) {
listenedPartitions = allPartitions;
}
else {
listenedPartitions = new ArrayList<>();
for (PartitionInfo partition : allPartitions) {
// divide partitions across modules
if ((partition.partition()
% extendedConsumerProperties.getInstanceCount()) == extendedConsumerProperties
.getInstanceIndex()) {
listenedPartitions.add(partition);
}
}
}
this.topicsInUse.put(destination.getName(), new TopicInformation(group, listenedPartitions));
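// Worked example of the modulo assignment (deployment values assumed):
// instanceCount=3, instanceIndex=1 and a six-partition topic leaves this
// instance listening to partitions 1 and 4, since 1 % 3 == 4 % 3 == 1;
// the remaining partitions go to instances 0 and 2.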
@Override
@SuppressWarnings("unchecked")
protected MessageProducer createConsumerEndpoint(String name, String group, Collection<PartitionInfo> destination,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
boolean anonymous = !StringUtils.hasText(group);
Assert.isTrue(!anonymous || !properties.getExtension().isEnableDlq(),
"DLQ support is not available for anonymous subscriptions");
String consumerGroup = anonymous ? "anonymous." + UUID.randomUUID().toString() : group;
Map<String, Object> props = getConsumerConfig(anonymous, consumerGroup);
if (!ObjectUtils.isEmpty(properties.getExtension().getConfiguration())) {
props.putAll(properties.getExtension().getConfiguration());
}
ConsumerFactory<?, ?> consumerFactory = new DefaultKafkaConsumerFactory<>(props);
Collection<PartitionInfo> listenedPartitions = destination;
Assert.isTrue(!CollectionUtils.isEmpty(listenedPartitions), "A list of partitions must be provided");
final TopicPartitionInitialOffset[] topicPartitionInitialOffsets = getTopicPartitionInitialOffsets(
listenedPartitions);
final ContainerProperties containerProperties = anonymous
|| extendedConsumerProperties.getExtension().isAutoRebalanceEnabled()
? new ContainerProperties(destination.getName())
: new ContainerProperties(topicPartitionInitialOffsets);
int concurrency = Math.min(extendedConsumerProperties.getConcurrency(), listenedPartitions.size());
@SuppressWarnings("rawtypes")
final ConcurrentMessageListenerContainer<?, ?> messageListenerContainer =
new ConcurrentMessageListenerContainer(
consumerFactory, containerProperties) {
@Override
public void stop(Runnable callback) {
super.stop(callback);
}
};
messageListenerContainer.setConcurrency(concurrency);
if (!extendedConsumerProperties.getExtension().isAutoCommitOffset()) {
messageListenerContainer.getContainerProperties()
.setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
messageListenerContainer.getContainerProperties().setAckOnError(false);
}
else {
messageListenerContainer.getContainerProperties()
.setAckOnError(isAutoCommitOnError(extendedConsumerProperties));
}
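// Consumer-side sketch of the manual-ack contract implied above; the binding
// and handler shown are assumptions, only KafkaHeaders.ACKNOWLEDGMENT is given:
//
//   @StreamListener(Sink.INPUT)
//   public void handle(Message<?> message) {
//       Acknowledgment ack = message.getHeaders()
//               .get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
//       if (ack != null) {
//           ack.acknowledge(); // commit the offset once processing succeeded
//       }
//   }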
if (this.logger.isDebugEnabled()) {
this.logger.debug(
"Listened partitions: " + StringUtils.collectionToCommaDelimitedString(listenedPartitions));
}
final KafkaMessageDrivenChannelAdapter<?, ?> kafkaMessageDrivenChannelAdapter = new KafkaMessageDrivenChannelAdapter<>(
messageListenerContainer);
kafkaMessageDrivenChannelAdapter.setBeanFactory(this.getBeanFactory());
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
kafkaMessageDrivenChannelAdapter.setRetryTemplate(retryTemplate);
if (properties.getExtension().isEnableDlq()) {
final String dlqTopic = "error." + name + "." + group;
initDlqProducer();
messageListenerContainer.getContainerProperties().setErrorHandler(new ErrorHandler() {
@Override
public void handle(Exception thrownException, final ConsumerRecord message) {
final byte[] key = message.key() != null ? Utils.toArray(ByteBuffer.wrap((byte[]) message.key()))
: null;
final byte[] payload = message.value() != null
? Utils.toArray(ByteBuffer.wrap((byte[]) message.value())) : null;
KafkaMessageChannelBinder.this.dlqProducer.send(new ProducerRecord<>(dlqTopic, key, payload),
new Callback() {
@Override
public void onCompletion(RecordMetadata metadata, Exception exception) {
StringBuffer messageLog = new StringBuffer();
messageLog.append(" a message with key='"
+ toDisplayString(ObjectUtils.nullSafeToString(key), 50) + "'");
messageLog.append(" and payload='"
+ toDisplayString(ObjectUtils.nullSafeToString(payload), 50) + "'");
messageLog.append(" received from " + message.partition());
if (exception != null) {
KafkaMessageChannelBinder.this.logger.error(
"Error sending to DLQ" + messageLog.toString(), exception);
}
else {
if (KafkaMessageChannelBinder.this.logger.isDebugEnabled()) {
KafkaMessageChannelBinder.this.logger.debug(
"Sent to DLQ " + messageLog.toString());
}
}
}
});
}
});
ErrorInfrastructure errorInfrastructure = registerErrorInfrastructure(destination, consumerGroup,
extendedConsumerProperties);
if (extendedConsumerProperties.getMaxAttempts() > 1) {
kafkaMessageDrivenChannelAdapter.setRetryTemplate(buildRetryTemplate(extendedConsumerProperties));
kafkaMessageDrivenChannelAdapter.setRecoveryCallback(errorInfrastructure.getRecoverer());
}
else {
kafkaMessageDrivenChannelAdapter.setErrorChannel(errorInfrastructure.getErrorChannel());
}
return kafkaMessageDrivenChannelAdapter;
}
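// In short: with maxAttempts > 1 the adapter retries in-process and hands the
// exhausted message to the error infrastructure's recoverer; otherwise failures
// go straight to the per-destination error channel (conventionally
// <destination>.<group>.errors), where a DLQ handler may be subscribed.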
@Override
protected ErrorMessageStrategy getErrorMessageStrategy() {
return new RawRecordHeaderErrorMessageStrategy();
}
@Override
protected MessageHandler getErrorMessageHandler(final ConsumerDestination destination, final String group,
final ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties) {
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
DefaultKafkaProducerFactory<byte[], byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(new KafkaProducerProperties()));
final KafkaTemplate<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFactory);
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
final ConsumerRecord<?, ?> record = message.getHeaders()
.get(KafkaMessageDrivenChannelAdapter.KAFKA_RAW_DATA, ConsumerRecord.class);
final byte[] key = record.key() != null
? Utils.toArray(ByteBuffer.wrap((byte[]) record.key()))
: null;
final byte[] payload;
if (HeaderMode.embeddedHeaders == extendedConsumerProperties.getHeaderMode()
&& message.getPayload() instanceof Throwable) {
final Throwable throwable = (Throwable) message.getPayload();
final String failureMessage = throwable.getMessage();
try {
MessageValues messageValues = EmbeddedHeaderUtils
.extractHeaders(MessageBuilder.withPayload((byte[]) record.value()).build(),
false);
messageValues.put(X_ORIGINAL_TOPIC, record.topic());
messageValues.put(X_EXCEPTION_MESSAGE, failureMessage);
messageValues.put(X_EXCEPTION_STACKTRACE, getStackTraceAsString(throwable));
final String[] headersToEmbed = new ArrayList<>(messageValues.keySet()).toArray(
new String[messageValues.keySet().size()]);
payload = EmbeddedHeaderUtils.embedHeaders(messageValues,
EmbeddedHeaderUtils.headersToEmbed(headersToEmbed));
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
else {
payload = record.value() != null
? Utils.toArray(ByteBuffer.wrap((byte[]) record.value())) : null;
}
String dlqName = StringUtils.hasText(extendedConsumerProperties.getExtension().getDlqName())
? extendedConsumerProperties.getExtension().getDlqName()
: "error." + destination.getName() + "." + group;
ListenableFuture<SendResult<byte[], byte[]>> sentDlq = kafkaTemplate.send(dlqName,
record.partition(), key, payload);
sentDlq.addCallback(new ListenableFutureCallback<SendResult<byte[], byte[]>>() {
StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key), 50)).append("'")
.append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(payload), 50))
.append("'").append(" received from ")
.append(record.partition());
@Override
public void onFailure(Throwable ex) {
KafkaMessageChannelBinder.this.logger.error(
"Error sending to DLQ " + sb.toString(), ex);
}
@Override
public void onSuccess(SendResult<byte[], byte[]> result) {
if (KafkaMessageChannelBinder.this.logger.isDebugEnabled()) {
KafkaMessageChannelBinder.this.logger.debug(
"Sent to DLQ " + sb.toString());
}
}
});
}
};
}
return null;
}
private ConsumerFactory<?, ?> createKafkaConsumerFactory(boolean anonymous, String consumerGroup,
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
if (!ObjectUtils.isEmpty(configurationProperties.getConfiguration())) {
props.putAll(configurationProperties.getConfiguration());
}
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configurationProperties.getKafkaConnectionString());
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, anonymous ? "latest" : "earliest");
props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroup);
if (!ObjectUtils.isEmpty(configurationProperties.getConsumerConfiguration())) {
props.putAll(configurationProperties.getConsumerConfiguration());
}
if (ObjectUtils.isEmpty(props.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configurationProperties.getKafkaConnectionString());
}
if (!ObjectUtils.isEmpty(consumerProperties.getExtension().getConfiguration())) {
props.putAll(consumerProperties.getExtension().getConfiguration());
}
if (!ObjectUtils.isEmpty(consumerProperties.getExtension().getStartOffset())) {
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
consumerProperties.getExtension().getStartOffset().name());
}
return new DefaultKafkaConsumerFactory<>(props);
}
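// Net effect of the offset-reset handling above: anonymous consumers start at
// "latest" (they only care about new records), named groups default to
// "earliest" so a first start replays the topic, and an explicit startOffset
// on the binding overrides either default.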
private boolean isAutoCommitOnError(ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
@@ -405,8 +455,8 @@ public class KafkaMessageChannelBinder extends
private TopicPartitionInitialOffset[] getTopicPartitionInitialOffsets(
Collection<PartitionInfo> listenedPartitions) {
final TopicPartitionInitialOffset[] topicPartitionInitialOffsets = new TopicPartitionInitialOffset[listenedPartitions
.size()];
int i = 0;
for (PartitionInfo partition : listenedPartitions) {
@@ -416,132 +466,6 @@ public class KafkaMessageChannelBinder extends
return topicPartitionInitialOffsets;
}
private void createTopicsIfAutoCreateEnabledAndAdminUtilsPresent(final String topicName, final int partitionCount) {
if (this.configurationProperties.isAutoCreateTopics() && adminUtilsOperation != null) {
createTopicAndPartitions(topicName, partitionCount);
}
else if (this.configurationProperties.isAutoCreateTopics() && adminUtilsOperation == null) {
this.logger.warn("Auto creation of topics is enabled, but Kafka AdminUtils class is not present on the classpath. " +
"No topic will be created by the binder");
}
else if (!this.configurationProperties.isAutoCreateTopics()) {
this.logger.info("Auto creation of topics is disabled.");
}
}
/**
* Creates a Kafka topic if needed, or tries to increase its partition count to the
* desired number.
*/
private void createTopicAndPartitions(final String topicName, final int partitionCount) {
final ZkUtils zkUtils = ZkUtils.apply(this.configurationProperties.getZkConnectionString(),
this.configurationProperties.getZkSessionTimeout(),
this.configurationProperties.getZkConnectionTimeout(),
JaasUtils.isZkSecurityEnabled());
try {
short errorCode = adminUtilsOperation.errorCodeFromTopicMetadata(topicName, zkUtils);
if (errorCode == ErrorMapping.NoError()) {
// only consider minPartitionCount for resizing if autoAddPartitions is true
int effectivePartitionCount = this.configurationProperties.isAutoAddPartitions()
? Math.max(this.configurationProperties.getMinPartitionCount(), partitionCount)
: partitionCount;
int partitionSize = adminUtilsOperation.partitionSize(topicName, zkUtils);
if (partitionSize < effectivePartitionCount) {
if (this.configurationProperties.isAutoAddPartitions()) {
adminUtilsOperation.invokeAddPartitions(zkUtils, topicName, effectivePartitionCount, null, false);
}
else {
throw new BinderException("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead. "
+ "Consider either increasing the partition count of the topic or enabling " +
"`autoAddPartitions`");
}
}
}
else if (errorCode == ErrorMapping.UnknownTopicOrPartitionCode()) {
// always consider minPartitionCount for topic creation
final int effectivePartitionCount = Math.max(this.configurationProperties.getMinPartitionCount(),
partitionCount);
this.metadataRetryOperations.execute(new RetryCallback<Object, RuntimeException>() {
@Override
public Object doWithRetry(RetryContext context) throws RuntimeException {
adminUtilsOperation.invokeCreateTopic(zkUtils, topicName, effectivePartitionCount,
configurationProperties.getReplicationFactor(), new Properties());
return null;
}
});
}
else {
throw new BinderException("Error fetching Kafka topic metadata: ",
ErrorMapping.exceptionFor(errorCode));
}
}
finally {
zkUtils.close();
}
}
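// Worked example (settings assumed): minPartitionCount=4, a binding requiring
// partitionCount=6 and an existing topic with 3 partitions. With
// autoAddPartitions=true the effective count is max(4, 6) = 6, so three
// partitions are added; with autoAddPartitions=false the same state fails
// with the BinderException above.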
private Collection<PartitionInfo> getPartitionsForTopic(final String topicName, final int partitionCount) {
try {
return this.metadataRetryOperations
.execute(new RetryCallback<Collection<PartitionInfo>, Exception>() {
@Override
public Collection<PartitionInfo> doWithRetry(RetryContext context) throws Exception {
Collection<PartitionInfo> partitions =
getProducerFactory(
new ExtendedProducerProperties<>(new KafkaProducerProperties()))
.createProducer().partitionsFor(topicName);
// do a sanity check on the partition set
if (partitions.size() < partitionCount) {
throw new IllegalStateException("The number of expected partitions was: "
+ partitionCount + ", but " + partitions.size()
+ (partitions.size() > 1 ? " have " : " has ") + "been found instead");
}
return partitions;
}
});
}
catch (Exception e) {
this.logger.error("Cannot initialize Binder", e);
throw new BinderException("Cannot initialize binder:", e);
}
}
private synchronized void initDlqProducer() {
try {
if (this.dlqProducer == null) {
synchronized (this) {
if (this.dlqProducer == null) {
// we can use the producer defaults as we do not need to tune
// performance
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
this.configurationProperties.getKafkaConnectionString());
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
DefaultKafkaProducerFactory<byte[], byte[]> defaultKafkaProducerFactory =
new DefaultKafkaProducerFactory<>(props);
this.dlqProducer = defaultKafkaProducerFactory.createProducer();
}
}
}
}
catch (Exception e) {
throw new RuntimeException("Cannot initialize DLQ producer:", e);
}
}
private String toDisplayString(String original, int maxCharacters) {
if (original.length() <= maxCharacters) {
return original;
@@ -549,6 +473,13 @@ public class KafkaMessageChannelBinder extends
return original.substring(0, maxCharacters) + "...";
}
private String getStackTraceAsString(Throwable cause) {
StringWriter stringWriter = new StringWriter();
PrintWriter printWriter = new PrintWriter(stringWriter, true);
cause.printStackTrace(printWriter);
return stringWriter.getBuffer().toString();
}
private final class ProducerConfigurationMessageHandler extends KafkaProducerMessageHandler<byte[], byte[]>
implements Lifecycle {
@@ -556,15 +487,16 @@ public class KafkaMessageChannelBinder extends
private final DefaultKafkaProducerFactory<byte[], byte[]> producerFactory;
ProducerConfigurationMessageHandler(KafkaTemplate<byte[], byte[]> kafkaTemplate, String topic,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
DefaultKafkaProducerFactory<byte[], byte[]> producerFactory) {
super(kafkaTemplate);
setTopicExpression(new LiteralExpression(topic));
setMessageKeyExpression(producerProperties.getExtension().getMessageKeyExpression());
setBeanFactory(KafkaMessageChannelBinder.this.getBeanFactory());
if (producerProperties.isPartitioned()) {
SpelExpressionParser parser = new SpelExpressionParser();
setPartitionIdExpression(parser.parseExpression("headers.partition"));
setPartitionIdExpression(parser.parseExpression("headers['" + BinderHeaders.PARTITION_HEADER + "']"));
}
if (producerProperties.getExtension().isSync()) {
setSync(true);
@@ -594,4 +526,30 @@ public class KafkaMessageChannelBinder extends
return this.running;
}
}
public static class TopicInformation {
private final String consumerGroup;
private final Collection<PartitionInfo> partitionInfos;
public TopicInformation(String consumerGroup, Collection<PartitionInfo> partitionInfos) {
this.consumerGroup = consumerGroup;
this.partitionInfos = partitionInfos;
}
public String getConsumerGroup() {
return consumerGroup;
}
public boolean isConsumerTopic() {
return consumerGroup != null;
}
public Collection<PartitionInfo> getPartitionInfos() {
return partitionInfos;
}
}
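// Usage sketch (loop body assumed): the health indicator and metrics iterate
// this registry via getTopicsInUse(), e.g.
//
//   for (Map.Entry<String, TopicInformation> entry : binder.getTopicsInUse().entrySet()) {
//       if (entry.getValue().isConsumerTopic()) {
//           // inspect lag / partition leaders for entry.getKey()
//       }
//   }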
}


@@ -1,11 +1,11 @@
/*
* Copyright 2015-2016 the original author or authors.
* Copyright 2015-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -17,24 +17,33 @@
package org.springframework.cloud.stream.binder.kafka.config;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.utils.AppInfoParser;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.PropertyPlaceholderAutoConfiguration;
import org.springframework.boot.actuate.endpoint.PublicMetrics;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderHealthIndicator;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderJaasInitializerListener;
import org.springframework.cloud.stream.binder.kafka.KafkaExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.KafkaBinderMetrics;
import org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder;
import org.springframework.cloud.stream.binder.kafka.admin.AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka09AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka10AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.properties.JaasLoginModuleConfiguration;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.config.codec.kryo.KryoCodecAutoConfiguration;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationListener;
@@ -46,8 +55,11 @@ import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.integration.codec.Codec;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.LoggingProducerListener;
import org.springframework.kafka.support.ProducerListener;
import org.springframework.util.ObjectUtils;
/**
* @author David Turanski
@@ -55,11 +67,13 @@ import org.springframework.kafka.support.ProducerListener;
* @author Soby Chacko
* @author Mark Fisher
* @author Ilayaperumal Gopinathan
* @author Henryk Konsek
* @author Gary Russell
*/
@Configuration
@ConditionalOnMissingBean(Binder.class)
@Import({ KryoCodecAutoConfiguration.class, PropertyPlaceholderAutoConfiguration.class})
@EnableConfigurationProperties({ KafkaExtendedBindingProperties.class })
public class KafkaBinderConfiguration {
protected static final Log logger = LogFactory.getLog(KafkaBinderConfiguration.class);
@@ -79,17 +93,27 @@ public class KafkaBinderConfiguration {
@Autowired
private ApplicationContext context;
@Autowired(required = false)
private AdminUtilsOperation adminUtilsOperation;
@Bean
KafkaBinderConfigurationProperties configurationProperties() {
return new KafkaBinderConfigurationProperties();
}
@Bean
KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties configurationProperties) {
return new KafkaTopicProvisioner(configurationProperties, this.adminUtilsOperation);
}
@Bean
KafkaMessageChannelBinder kafkaMessageChannelBinder(KafkaBinderConfigurationProperties configurationProperties,
KafkaTopicProvisioner provisioningProvider) {
KafkaMessageChannelBinder kafkaMessageChannelBinder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider);
kafkaMessageChannelBinder.setCodec(this.codec);
kafkaMessageChannelBinder.setProducerListener(producerListener);
kafkaMessageChannelBinder.setExtendedBindingProperties(this.kafkaExtendedBindingProperties);
kafkaMessageChannelBinder.setAdminUtilsOperation(adminUtilsOperation);
return kafkaMessageChannelBinder;
}
@@ -100,8 +124,28 @@ public class KafkaBinderConfiguration {
}
@Bean
KafkaBinderHealthIndicator healthIndicator(KafkaMessageChannelBinder kafkaMessageChannelBinder,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
if (!ObjectUtils.isEmpty(configurationProperties.getConsumerConfiguration())) {
props.putAll(configurationProperties.getConsumerConfiguration());
}
if (!props.containsKey(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, configurationProperties.getKafkaConnectionString());
}
ConsumerFactory<?, ?> consumerFactory = new DefaultKafkaConsumerFactory<>(props);
KafkaBinderHealthIndicator indicator = new KafkaBinderHealthIndicator(kafkaMessageChannelBinder,
consumerFactory);
indicator.setTimeout(configurationProperties.getHealthTimeout());
return indicator;
}
@Bean
public PublicMetrics kafkaBinderMetrics(KafkaMessageChannelBinder kafkaMessageChannelBinder,
KafkaBinderConfigurationProperties configurationProperties) {
return new KafkaBinderMetrics(kafkaMessageChannelBinder, configurationProperties);
}
@Bean(name = "adminUtilsOperation")
@@ -132,7 +176,7 @@ public class KafkaBinderConfiguration {
return AppInfoParser.getVersion().startsWith("0.10");
}
}
static class Kafka09Present implements Condition {
@Override


@@ -0,0 +1,2 @@
org.springframework.boot.env.EnvironmentPostProcessor=\
org.springframework.cloud.stream.binder.kafka.KafkaBinderEnvironmentPostProcessor
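# Note: entries registered under org.springframework.boot.env.EnvironmentPostProcessor
# run before the application context refreshes, so the binder can contribute
# default properties ahead of normal configuration processing.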


@@ -1,11 +1,11 @@
/*
* Copyright 2014-2016 the original author or authors.
* Copyright 2014-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -15,46 +15,40 @@
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.List;
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Registration;
import org.springframework.cloud.stream.binder.AbstractTestBinder;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.integration.codec.Codec;
import org.springframework.integration.codec.kryo.KryoRegistrar;
import org.springframework.integration.codec.kryo.PojoCodec;
import org.springframework.integration.tuple.TupleKryoRegistrar;
/**
* @author Soby Chacko
* @author Gary Russell
*/
public abstract class AbstractKafkaTestBinder extends
AbstractTestBinder<KafkaMessageChannelBinder, ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>> {
private ApplicationContext applicationContext;
@Override
public void cleanup() {
// do nothing - the rule will take care of that
}
protected final void setApplicationContext(ApplicationContext context) {
this.applicationContext = context;
}
public ApplicationContext getApplicationContext() {
return this.applicationContext;
}
protected static Codec getCodec() {
return new PojoCodec(new TupleKryoRegistrar());
}
}


@@ -1,164 +0,0 @@
/*
* Copyright 2014-2016 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;
import org.I0Itec.zkclient.ZkClient;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.junit.Before;
import org.junit.ClassRule;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.Spy;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka09AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfigurationProperties;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.retry.RetryOperations;
/**
* Integration tests for the {@link KafkaMessageChannelBinder}.
*
* @author Eric Bottard
* @author Marius Bogoevici
* @author Mark Fisher
* @author Ilayaperumal Gopinathan
*/
public class Kafka09BinderTests extends KafkaBinderTests {
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class.getSimpleName();
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
private Kafka09TestBinder binder;
private Kafka09AdminUtilsOperation adminUtilsOperation = new Kafka09AdminUtilsOperation();
@Override
protected void binderBindUnbindLatency() throws InterruptedException {
Thread.sleep(500);
}
@Override
protected Kafka09TestBinder getBinder() {
if (binder == null) {
KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();
binder = new Kafka09TestBinder(binderConfiguration);
}
return binder;
}
protected KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
}
String[] foo = new String[bAddresses.size()];
binderConfiguration.setBrokers(bAddresses.toArray(foo));
binderConfiguration.setZkNodes(embeddedKafka.getZookeeperConnectionString());
return binderConfiguration;
}
@Override
protected int partitionSize(String topic) {
return consumerFactory().createConsumer().partitionsFor(topic).size();
}
@Override
protected void setMetadataRetryOperations(Binder binder, RetryOperations retryOperations) {
((Kafka09TestBinder) binder).getBinder().setMetadataRetryOperations(retryOperations);
}
@Override
protected ZkUtils getZkUtils(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {
final ZkClient zkClient = new ZkClient(kafkaBinderConfigurationProperties.getZkConnectionString(),
kafkaBinderConfigurationProperties.getZkSessionTimeout(), kafkaBinderConfigurationProperties.getZkConnectionTimeout(),
ZKStringSerializer$.MODULE$);
return new ZkUtils(zkClient, null, false);
}
@Override
protected void invokeCreateTopic(ZkUtils zkUtils, String topic, int partitions, int replicationFactor, Properties topicConfig) {
adminUtilsOperation.invokeCreateTopic(zkUtils, topic, partitions, replicationFactor, new Properties());
}
@Override
protected int invokePartitionSize(String topic, ZkUtils zkUtils) {
return adminUtilsOperation.partitionSize(topic, zkUtils);
}
@Override
public String getKafkaOffsetHeaderKey() {
return KafkaHeaders.OFFSET;
}
@Override
protected Binder getBinder(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {
return new Kafka09TestBinder(kafkaBinderConfigurationProperties);
}
@Before
public void init() {
String multiplier = System.getenv("KAFKA_TIMEOUT_MULTIPLIER");
if (multiplier != null) {
timeoutMultiplier = Double.parseDouble(multiplier);
}
}
@Override
protected boolean usesExplicitRouting() {
return false;
}
@Override
protected String getClassUnderTestName() {
return CLASS_UNDER_TEST_NAME;
}
@Override
public Spy spyOn(final String name) {
throw new UnsupportedOperationException("'spyOn' is not used by Kafka tests");
}
private ConsumerFactory<byte[], byte[]> consumerFactory() {
Map<String, Object> props = new HashMap<>();
KafkaBinderConfigurationProperties configurationProperties = createConfigurationProperties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, configurationProperties.getKafkaConnectionString());
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
Deserializer<byte[]> valueDecoder = new ByteArrayDeserializer();
Deserializer<byte[]> keyDecoder = new ByteArrayDeserializer();
return new DefaultKafkaConsumerFactory<>(props, keyDecoder, valueDecoder);
}
}


@@ -1,54 +0,0 @@
/*
* Copyright 2015-2016 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka09AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfigurationProperties;
import org.springframework.context.support.GenericApplicationContext;
import org.springframework.kafka.support.LoggingProducerListener;
import org.springframework.kafka.support.ProducerListener;
/**
* Test support class for {@link KafkaMessageChannelBinder}. Creates a binder that uses
* an embedded Kafka cluster.
* @author Eric Bottard
* @author Marius Bogoevici
* @author David Turanski
* @author Gary Russell
* @author Soby Chacko
*/
public class Kafka09TestBinder extends AbstractKafkaTestBinder {
public Kafka09TestBinder(KafkaBinderConfigurationProperties binderConfiguration) {
try {
KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(binderConfiguration);
binder.setCodec(getCodec());
ProducerListener producerListener = new LoggingProducerListener();
binder.setProducerListener(producerListener);
GenericApplicationContext context = new GenericApplicationContext();
context.refresh();
binder.setApplicationContext(context);
binder.setAdminUtilsOperation(new Kafka09AdminUtilsOperation());
binder.afterPropertiesSet();
this.setBinder(binder);
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
}


@@ -0,0 +1,135 @@
/*
* Copyright 2016-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.test.context.TestPropertySource;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.util.ReflectionUtils;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
/**
* @author Ilayaperumal Gopinathan
*/
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = { KafkaBinderAutoConfigurationPropertiesTest.KafkaBinderConfigProperties.class,
KafkaBinderConfiguration.class })
@TestPropertySource(locations = "classpath:binder-config-autoconfig.properties")
public class KafkaBinderAutoConfigurationPropertiesTest {
@Autowired
private KafkaMessageChannelBinder kafkaMessageChannelBinder;
@Autowired
private KafkaBinderHealthIndicator kafkaBinderHealthIndicator;
@Test
public void testKafkaBinderConfigurationWithKafkaProperties() throws Exception {
assertNotNull(this.kafkaMessageChannelBinder);
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
Method getProducerFactoryMethod = KafkaMessageChannelBinder.class.getDeclaredMethod("getProducerFactory",
ExtendedProducerProperties.class);
getProducerFactoryMethod.setAccessible(true);
DefaultKafkaProducerFactory producerFactory = (DefaultKafkaProducerFactory) getProducerFactoryMethod
.invoke(this.kafkaMessageChannelBinder, producerProperties);
Field producerFactoryConfigField = ReflectionUtils.findField(DefaultKafkaProducerFactory.class, "configs",
Map.class);
ReflectionUtils.makeAccessible(producerFactoryConfigField);
Map<String, Object> producerConfigs = (Map<String, Object>) ReflectionUtils.getField(producerFactoryConfigField,
producerFactory);
assertTrue(producerConfigs.get("batch.size").equals(10));
assertTrue(producerConfigs.get("key.serializer").equals(LongSerializer.class));
assertTrue(producerConfigs.get("key.deserializer") == null);
assertTrue(producerConfigs.get("value.serializer").equals(LongSerializer.class));
assertTrue(producerConfigs.get("value.deserializer") == null);
assertTrue(producerConfigs.get("compression.type").equals("snappy"));
List<String> bootstrapServers = new ArrayList<>();
bootstrapServers.add("10.98.09.199:9092");
bootstrapServers.add("10.98.09.196:9092");
assertTrue((((List<String>) producerConfigs.get("bootstrap.servers")).containsAll(bootstrapServers)));
Method createKafkaConsumerFactoryMethod = KafkaMessageChannelBinder.class.getDeclaredMethod(
"createKafkaConsumerFactory", boolean.class, String.class, ExtendedConsumerProperties.class);
createKafkaConsumerFactoryMethod.setAccessible(true);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = new ExtendedConsumerProperties<>(
new KafkaConsumerProperties());
DefaultKafkaConsumerFactory consumerFactory = (DefaultKafkaConsumerFactory) createKafkaConsumerFactoryMethod
.invoke(this.kafkaMessageChannelBinder, true, "test", consumerProperties);
Field consumerFactoryConfigField = ReflectionUtils.findField(DefaultKafkaConsumerFactory.class, "configs",
Map.class);
ReflectionUtils.makeAccessible(consumerFactoryConfigField);
Map<String, Object> consumerConfigs = (Map<String, Object>) ReflectionUtils.getField(consumerFactoryConfigField,
consumerFactory);
assertTrue(consumerConfigs.get("key.deserializer").equals(LongDeserializer.class));
assertTrue(consumerConfigs.get("key.serializer") == null);
assertTrue(consumerConfigs.get("value.deserializer").equals(LongDeserializer.class));
assertTrue(consumerConfigs.get("value.serialized") == null);
assertTrue(consumerConfigs.get("group.id").equals("groupIdFromBootConfig"));
assertTrue(consumerConfigs.get("auto.offset.reset").equals("earliest"));
assertTrue((((List<String>) consumerConfigs.get("bootstrap.servers")).containsAll(bootstrapServers)));
}
@Test
public void testKafkaHealthIndicatorProperties() {
assertNotNull(this.kafkaBinderHealthIndicator);
Field consumerFactoryField = ReflectionUtils.findField(KafkaBinderHealthIndicator.class, "consumerFactory",
ConsumerFactory.class);
ReflectionUtils.makeAccessible(consumerFactoryField);
DefaultKafkaConsumerFactory consumerFactory = (DefaultKafkaConsumerFactory) ReflectionUtils.getField(
consumerFactoryField, this.kafkaBinderHealthIndicator);
Field configField = ReflectionUtils.findField(DefaultKafkaConsumerFactory.class, "configs", Map.class);
ReflectionUtils.makeAccessible(configField);
Map<String, Object> configs = (Map<String, Object>) ReflectionUtils.getField(configField, consumerFactory);
assertTrue(configs.containsKey("bootstrap.servers"));
List<String> bootstrapServers = new ArrayList<>();
bootstrapServers.add("10.98.09.199:9092");
bootstrapServers.add("10.98.09.196:9092");
assertTrue(((List<String>) configs.get("bootstrap.servers")).containsAll(bootstrapServers));
}
public static class KafkaBinderConfigProperties {
@Bean
KafkaProperties kafkaProperties() {
return new KafkaProperties();
}
}
}


@@ -0,0 +1,100 @@
/*
* Copyright 2016-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.test.context.TestPropertySource;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.util.ReflectionUtils;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
/**
* @author Ilayaperumal Gopinathan
*/
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = { KafkaBinderConfiguration.class, KafkaBinderConfigurationPropertiesTest.class })
@TestPropertySource(locations = "classpath:binder-config.properties")
public class KafkaBinderConfigurationPropertiesTest {
@Autowired
private KafkaMessageChannelBinder kafkaMessageChannelBinder;
@Test
public void testKafkaBinderConfigurationProperties() throws Exception {
assertNotNull(this.kafkaMessageChannelBinder);
KafkaProducerProperties kafkaProducerProperties = new KafkaProducerProperties();
kafkaProducerProperties.setBufferSize(12345);
kafkaProducerProperties.setBatchTimeout(100);
kafkaProducerProperties.setCompressionType(KafkaProducerProperties.CompressionType.gzip);
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = new ExtendedProducerProperties<>(
kafkaProducerProperties);
Method getProducerFactoryMethod = KafkaMessageChannelBinder.class.getDeclaredMethod("getProducerFactory",
ExtendedProducerProperties.class);
getProducerFactoryMethod.setAccessible(true);
DefaultKafkaProducerFactory producerFactory = (DefaultKafkaProducerFactory) getProducerFactoryMethod
.invoke(this.kafkaMessageChannelBinder, producerProperties);
Field producerFactoryConfigField = ReflectionUtils.findField(DefaultKafkaProducerFactory.class, "configs",
Map.class);
ReflectionUtils.makeAccessible(producerFactoryConfigField);
Map<String, Object> producerConfigs = (Map<String, Object>) ReflectionUtils.getField(producerFactoryConfigField,
producerFactory);
assertTrue(producerConfigs.get("batch.size").equals("12345"));
assertTrue(producerConfigs.get("linger.ms").equals("100"));
assertTrue(producerConfigs.get("key.serializer").equals(ByteArraySerializer.class));
assertTrue(producerConfigs.get("value.serializer").equals(ByteArraySerializer.class));
assertTrue(producerConfigs.get("compression.type").equals("gzip"));
List<String> bootstrapServers = new ArrayList<>();
bootstrapServers.add("10.98.09.199:9082");
assertTrue((((String) producerConfigs.get("bootstrap.servers")).contains("10.98.09.199:9082")));
Method createKafkaConsumerFactoryMethod = KafkaMessageChannelBinder.class.getDeclaredMethod(
"createKafkaConsumerFactory", boolean.class, String.class, ExtendedConsumerProperties.class);
createKafkaConsumerFactoryMethod.setAccessible(true);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = new ExtendedConsumerProperties<>(
new KafkaConsumerProperties());
DefaultKafkaConsumerFactory consumerFactory = (DefaultKafkaConsumerFactory) createKafkaConsumerFactoryMethod
.invoke(this.kafkaMessageChannelBinder, true, "test", consumerProperties);
Field consumerFactoryConfigField = ReflectionUtils.findField(DefaultKafkaConsumerFactory.class, "configs",
Map.class);
ReflectionUtils.makeAccessible(consumerFactoryConfigField);
Map<String, Object> consumerConfigs = (Map<String, Object>) ReflectionUtils.getField(consumerFactoryConfigField,
consumerFactory);
assertTrue(consumerConfigs.get("key.deserializer").equals(ByteArrayDeserializer.class));
assertTrue(consumerConfigs.get("value.deserializer").equals(ByteArrayDeserializer.class));
assertTrue((((String) consumerConfigs.get("bootstrap.servers")).contains("10.98.09.199:9082")));
}
}


@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -15,8 +15,6 @@
*/
package org.springframework.cloud.stream.binder.kafka;
import java.lang.reflect.Field;
import org.junit.Test;
@@ -29,11 +27,13 @@ import org.springframework.kafka.support.ProducerListener;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.util.ReflectionUtils;
import static org.junit.Assert.assertNotNull;
/**
* @author Ilayaperumal Gopinathan
*/
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = { KafkaBinderConfiguration.class, KafkaBinderConfigurationTest.class })
public class KafkaBinderConfigurationTest {
@Autowired
@@ -50,4 +50,5 @@ public class KafkaBinderConfigurationTest {
producerListenerField, this.kafkaMessageChannelBinder);
assertNotNull(producerListener);
}
}


@@ -0,0 +1,114 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.verify;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.Status;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
/**
* @author Barry Commins
* @author Gary Russell
*/
public class KafkaBinderHealthIndicatorTest {
private static final String TEST_TOPIC = "test";
private KafkaBinderHealthIndicator indicator;
@Mock
private DefaultKafkaConsumerFactory consumerFactory;
@Mock
private KafkaConsumer consumer;
@Mock
private KafkaMessageChannelBinder binder;
private final Map<String, KafkaMessageChannelBinder.TopicInformation> topicsInUse = new HashMap<>();
@Before
public void setup() {
MockitoAnnotations.initMocks(this);
given(consumerFactory.createConsumer()).willReturn(consumer);
given(binder.getTopicsInUse()).willReturn(topicsInUse);
this.indicator = new KafkaBinderHealthIndicator(binder, consumerFactory);
this.indicator.setTimeout(10);
}
@Test
public void kafkaBinderIsUp() {
final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group", partitions));
given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.UP);
verify(this.consumer).close();
}
@Test
public void kafkaBinderIsDown() {
final List<PartitionInfo> partitions = partitions(new Node(-1, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group", partitions));
given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.DOWN);
}
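// A Node constructed with id -1 models a partition without a live leader; the
// indicator reports DOWN as soon as any listened partition is leaderless.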
@Test(timeout = 5000)
public void kafkaBinderDoesNotAnswer() {
final List<PartitionInfo> partitions = partitions(new Node(-1, null, 0));
topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation("group", partitions));
given(consumer.partitionsFor(TEST_TOPIC)).willAnswer(new Answer<Object>() {
@Override
public Object answer(InvocationOnMock invocation) throws Throwable {
final int fiveMinutes = 1000 * 60 * 5;
Thread.sleep(fiveMinutes);
return partitions;
}
});
this.indicator.setTimeout(1);
Health health = indicator.health();
assertThat(health.getStatus()).isEqualTo(Status.DOWN);
}
private List<PartitionInfo> partitions(Node leader) {
List<PartitionInfo> partitions = new ArrayList<>();
partitions.add(new PartitionInfo(TEST_TOPIC, 0, leader, null, null));
return partitions;
}
}
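
Taken together, the three tests pin down the health contract: every partition of every topic in use must have a live leader (a non-negative node id), a leader id of -1 yields DOWN, and a partitionsFor call that outruns the configured timeout also yields DOWN. A hedged sketch of that per-partition rule, reconstructed from the assertions above (the shipped KafkaBinderHealthIndicator implementation is not part of this diff):

// Illustrative only: the decision rule the tests assert, not the shipped code.
private static Status statusFor(List<PartitionInfo> partitions) {
    for (PartitionInfo partition : partitions) {
        Node leader = partition.leader();
        if (leader == null || leader.id() == -1) {
            return Status.DOWN; // this partition currently has no live leader
        }
    }
    return Status.UP;
}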


@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
- * http://www.apache.org/licenses/LICENSE-2.0
+ * https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,6 +19,7 @@ package org.springframework.cloud.stream.binder.kafka;
import javax.security.auth.login.AppConfigurationEntry;
import com.sun.security.auth.login.ConfigFile;
import org.apache.kafka.common.security.JaasUtils;
import org.junit.Test;


@@ -0,0 +1,137 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.TopicInformation;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import static java.util.Collections.singletonMap;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyCollectionOf;
import static org.springframework.cloud.stream.binder.kafka.KafkaBinderMetrics.METRIC_PREFIX;
/**
* @author Henryk Konsek
*/
public class KafkaBinderMetricsTest {
private static final String TEST_TOPIC = "test";
private KafkaBinderMetrics metrics;
@Mock
private DefaultKafkaConsumerFactory consumerFactory;
@Mock
private KafkaConsumer consumer;
@Mock
private KafkaMessageChannelBinder binder;
private Map<String, TopicInformation> topicsInUse = new HashMap<>();
@Mock
private KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties;
@Before
public void setup() {
MockitoAnnotations.initMocks(this);
given(consumerFactory.createConsumer()).willReturn(consumer);
given(binder.getTopicsInUse()).willReturn(topicsInUse);
metrics = new KafkaBinderMetrics(binder, kafkaBinderConfigurationProperties, consumerFactory);
given(consumer.endOffsets(anyCollectionOf(TopicPartition.class)))
.willReturn(singletonMap(new TopicPartition(TEST_TOPIC, 0), 1000L));
}
@Test
public void shouldIndicateLag() {
given(consumer.committed(any(TopicPartition.class))).willReturn(new OffsetAndMetadata(500));
List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
Collection<Metric<?>> collectedMetrics = metrics.metrics();
assertThat(collectedMetrics).hasSize(1);
assertThat(collectedMetrics.iterator().next().getName())
.isEqualTo(String.format("%s.%s.%s.lag", METRIC_PREFIX, "group", TEST_TOPIC));
assertThat(collectedMetrics.iterator().next().getValue()).isEqualTo(500L);
}
@Test
public void shouldSumUpPartitionsLags() {
Map<TopicPartition, Long> endOffsets = new HashMap<>();
endOffsets.put(new TopicPartition(TEST_TOPIC, 0), 1000L);
endOffsets.put(new TopicPartition(TEST_TOPIC, 1), 1000L);
given(consumer.endOffsets(anyCollectionOf(TopicPartition.class))).willReturn(endOffsets);
given(consumer.committed(any(TopicPartition.class))).willReturn(new OffsetAndMetadata(500));
List<PartitionInfo> partitions = partitions(new Node(0, null, 0), new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
Collection<Metric<?>> collectedMetrics = metrics.metrics();
assertThat(collectedMetrics).hasSize(1);
assertThat(collectedMetrics.iterator().next().getName())
.isEqualTo(String.format("%s.%s.%s.lag", METRIC_PREFIX, "group", TEST_TOPIC));
assertThat(collectedMetrics.iterator().next().getValue()).isEqualTo(1000L);
}
@Test
public void shouldIndicateFullLagForNotCommittedGroups() {
List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation("group", partitions));
given(consumer.partitionsFor(TEST_TOPIC)).willReturn(partitions);
Collection<Metric<?>> collectedMetrics = metrics.metrics();
assertThat(collectedMetrics).hasSize(1);
assertThat(collectedMetrics.iterator().next().getName())
.isEqualTo(String.format("%s.%s.%s.lag", METRIC_PREFIX, "group", TEST_TOPIC));
assertThat(collectedMetrics.iterator().next().getValue()).isEqualTo(1000L);
}
@Test
public void shouldNotCalculateLagForProducerTopics() {
List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
topicsInUse.put(TEST_TOPIC, new TopicInformation(null, partitions));
Collection<Metric<?>> collectedMetrics = metrics.metrics();
assertThat(collectedMetrics).isEmpty();
}
private List<PartitionInfo> partitions(Node... nodes) {
List<PartitionInfo> partitions = new ArrayList<>();
for (int i = 0; i < nodes.length; i++) {
partitions.add(new PartitionInfo(TEST_TOPIC, i, nodes[i], null, null));
}
return partitions;
}
}
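
The four tests fix the lag semantics: for each consumer-group topic, lag is the sum over partitions of the end offset minus the committed offset, a group with no commits counts as fully lagged, and topics with a null group (producer-only) are skipped. A hedged reconstruction of that computation using the same consumer API calls the tests stub (the shipped KafkaBinderMetrics code is not shown in this diff; java.util.Collections is the only import beyond those already used above):

// Illustrative only: sums per-partition lag the way the tests assert.
private static long topicLag(KafkaConsumer<?, ?> consumer, String topic) {
    long lag = 0;
    for (PartitionInfo partitionInfo : consumer.partitionsFor(topic)) {
        TopicPartition tp = new TopicPartition(topic, partitionInfo.partition());
        long endOffset = consumer.endOffsets(Collections.singleton(tp)).get(tp);
        OffsetAndMetadata committed = consumer.committed(tp);
        // no committed offset => the group is lagged by the full end offset
        lag += endOffset - (committed != null ? committed.offset() : 0);
    }
    return lag;
}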


@@ -0,0 +1,82 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.lang.reflect.Method;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.Test;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.admin.AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.integration.test.util.TestUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
/**
* @author Gary Russell
* @since 1.2.2
*
*/
public class KafkaBinderUnitTests {
@Test
public void testPropertyOverrides() throws Exception {
KafkaBinderConfigurationProperties binderConfigurationProperties = new KafkaBinderConfigurationProperties();
AdminUtilsOperation adminUtilsOperation = mock(AdminUtilsOperation.class);
KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(binderConfigurationProperties,
adminUtilsOperation);
KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(binderConfigurationProperties,
provisioningProvider);
KafkaConsumerProperties consumerProps = new KafkaConsumerProperties();
ExtendedConsumerProperties<KafkaConsumerProperties> ecp =
new ExtendedConsumerProperties<KafkaConsumerProperties>(consumerProps);
Method method = KafkaMessageChannelBinder.class.getDeclaredMethod("createKafkaConsumerFactory", boolean.class,
String.class, ExtendedConsumerProperties.class);
method.setAccessible(true);
// test default for anon
Object factory = method.invoke(binder, true, "foo", ecp);
Map<?, ?> configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("latest");
// test default for named
factory = method.invoke(binder, false, "foo", ecp);
configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("earliest");
// binder level setting
binderConfigurationProperties.setConfiguration(
Collections.singletonMap(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"));
factory = method.invoke(binder, false, "foo", ecp);
configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("latest");
// consumer level setting
consumerProps.setConfiguration(Collections.singletonMap(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"));
factory = method.invoke(binder, false, "foo", ecp);
configs = TestUtils.getPropertyValue(factory, "configs", Map.class);
assertThat(configs.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)).isEqualTo("earliest");
}
}
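
The assertions establish a clear precedence for auto.offset.reset: anonymous consumers default to latest, named groups to earliest, a binder-level configuration entry overrides those defaults, and a binding-level consumer configuration entry wins over the binder level. In application-properties form that precedence looks like the following (the property prefixes follow this binder's conventions and are not shown in the diff; the binding name input is illustrative):

spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=latest
spring.cloud.stream.kafka.bindings.input.consumer.configuration.auto.offset.reset=earliest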


@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
- * http://www.apache.org/licenses/LICENSE-2.0
+ * https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,258 +0,0 @@
/*
* Copyright 2015-2016 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.Arrays;
import org.junit.Test;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Marius Bogoevici
* @author David Turanski
* @author Gary Russell
* @author Mark Fisher
*/
public class RawModeKafka09BinderTests extends Kafka09BinderTests {
@Test
@Override
public void testPartitionedModuleJava() throws Exception {
Kafka09TestBinder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> properties = createProducerProperties();
properties.setHeaderMode(HeaderMode.raw);
properties.setPartitionKeyExtractorClass(RawKafkaPartitionTestSupport.class);
properties.setPartitionSelectorClass(RawKafkaPartitionTestSupport.class);
properties.setPartitionCount(6);
DirectChannel output = createBindableChannel("output", createProducerBindingProperties(properties));
output.setBeanName("test.output");
Binding<MessageChannel> outputBinding = binder.bindProducer("partJ.0", output, properties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setConcurrency(2);
consumerProperties.setInstanceCount(3);
consumerProperties.setInstanceIndex(0);
consumerProperties.setPartitioned(true);
consumerProperties.setHeaderMode(HeaderMode.raw);
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
QueueChannel input0 = new QueueChannel();
input0.setBeanName("test.input0J");
Binding<MessageChannel> input0Binding = binder.bindConsumer("partJ.0", "test", input0, consumerProperties);
consumerProperties.setInstanceIndex(1);
QueueChannel input1 = new QueueChannel();
input1.setBeanName("test.input1J");
Binding<MessageChannel> input1Binding = binder.bindConsumer("partJ.0", "test", input1, consumerProperties);
consumerProperties.setInstanceIndex(2);
QueueChannel input2 = new QueueChannel();
input2.setBeanName("test.input2J");
Binding<MessageChannel> input2Binding = binder.bindConsumer("partJ.0", "test", input2, consumerProperties);
output.send(new GenericMessage<>(new byte[] {(byte) 0}));
output.send(new GenericMessage<>(new byte[] {(byte) 1}));
output.send(new GenericMessage<>(new byte[] {(byte) 2}));
Message<?> receive0 = receive(input0);
assertThat(receive0).isNotNull();
Message<?> receive1 = receive(input1);
assertThat(receive1).isNotNull();
Message<?> receive2 = receive(input2);
assertThat(receive2).isNotNull();
assertThat(Arrays.asList(((byte[]) receive0.getPayload())[0], ((byte[]) receive1.getPayload())[0],
((byte[]) receive2.getPayload())[0])).containsExactlyInAnyOrder((byte) 0, (byte) 1, (byte) 2);
input0Binding.unbind();
input1Binding.unbind();
input2Binding.unbind();
outputBinding.unbind();
}
@Test
@Override
public void testPartitionedModuleSpEL() throws Exception {
Kafka09TestBinder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> properties = createProducerProperties();
properties.setPartitionKeyExpression(spelExpressionParser.parseExpression("payload[0]"));
properties.setPartitionSelectorExpression(spelExpressionParser.parseExpression("hashCode()"));
properties.setPartitionCount(6);
properties.setHeaderMode(HeaderMode.raw);
DirectChannel output = createBindableChannel("output", createProducerBindingProperties(properties));
output.setBeanName("test.output");
Binding<MessageChannel> outputBinding = binder.bindProducer("part.0", output, properties);
try {
Object endpoint = extractEndpoint(outputBinding);
assertThat(getEndpointRouting(endpoint))
.contains(getExpectedRoutingBaseDestination("part.0", "test") + "-' + headers['partition']");
}
catch (UnsupportedOperationException ignored) {
}
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setConcurrency(2);
consumerProperties.setInstanceIndex(0);
consumerProperties.setInstanceCount(3);
consumerProperties.setPartitioned(true);
consumerProperties.setHeaderMode(HeaderMode.raw);
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
QueueChannel input0 = new QueueChannel();
input0.setBeanName("test.input0S");
Binding<MessageChannel> input0Binding = binder.bindConsumer("part.0", "test", input0, consumerProperties);
consumerProperties.setInstanceIndex(1);
QueueChannel input1 = new QueueChannel();
input1.setBeanName("test.input1S");
Binding<MessageChannel> input1Binding = binder.bindConsumer("part.0", "test", input1, consumerProperties);
consumerProperties.setInstanceIndex(2);
QueueChannel input2 = new QueueChannel();
input2.setBeanName("test.input2S");
Binding<MessageChannel> input2Binding = binder.bindConsumer("part.0", "test", input2, consumerProperties);
Message<byte[]> message2 = MessageBuilder.withPayload(new byte[] {2})
.setHeader(IntegrationMessageHeaderAccessor.CORRELATION_ID, "foo")
.setHeader(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER, 42)
.setHeader(IntegrationMessageHeaderAccessor.SEQUENCE_SIZE, 43).build();
output.send(message2);
output.send(new GenericMessage<>(new byte[] {1}));
output.send(new GenericMessage<>(new byte[] {0}));
Message<?> receive0 = receive(input0);
assertThat(receive0).isNotNull();
Message<?> receive1 = receive(input1);
assertThat(receive1).isNotNull();
Message<?> receive2 = receive(input2);
assertThat(receive2).isNotNull();
assertThat(Arrays.asList(((byte[]) receive0.getPayload())[0], ((byte[]) receive1.getPayload())[0],
((byte[]) receive2.getPayload())[0])).containsExactlyInAnyOrder((byte) 0, (byte) 1, (byte) 2);
input0Binding.unbind();
input1Binding.unbind();
input2Binding.unbind();
outputBinding.unbind();
}
@Test
@Override
public void testSendAndReceive() throws Exception {
Kafka09TestBinder binder = getBinder();
DirectChannel moduleOutputChannel = new DirectChannel();
QueueChannel moduleInputChannel = new QueueChannel();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.setHeaderMode(HeaderMode.raw);
Binding<MessageChannel> producerBinding = binder.bindProducer("foo.0", moduleOutputChannel,
producerProperties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setHeaderMode(HeaderMode.raw);
Binding<MessageChannel> consumerBinding = binder.bindConsumer("foo.0", "test", moduleInputChannel,
consumerProperties);
Message<?> message = MessageBuilder.withPayload("foo".getBytes()).build();
// Let the consumer actually bind to the producer before sending a msg
binderBindUnbindLatency();
moduleOutputChannel.send(message);
Message<?> inbound = receive(moduleInputChannel);
assertThat(inbound).isNotNull();
assertThat(new String((byte[]) inbound.getPayload())).isEqualTo("foo");
producerBinding.unbind();
consumerBinding.unbind();
}
@Test
public void testSendAndReceiveWithExplicitConsumerGroup() {
Kafka09TestBinder binder = getBinder();
DirectChannel moduleOutputChannel = new DirectChannel();
// Test pub/sub by emulating how StreamPlugin handles taps
QueueChannel module1InputChannel = new QueueChannel();
QueueChannel module2InputChannel = new QueueChannel();
QueueChannel module3InputChannel = new QueueChannel();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.setHeaderMode(HeaderMode.raw);
Binding<MessageChannel> producerBinding = binder.bindProducer("baz.0", moduleOutputChannel,
producerProperties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setHeaderMode(HeaderMode.raw);
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
Binding<MessageChannel> input1Binding = binder.bindConsumer("baz.0", "test", module1InputChannel,
consumerProperties);
// A new module is using the tap as an input channel
String fooTapName = "baz.0";
Binding<MessageChannel> input2Binding = binder.bindConsumer(fooTapName, "tap1", module2InputChannel,
consumerProperties);
// Another new module is using tap as an input channel
String barTapName = "baz.0";
Binding<MessageChannel> input3Binding = binder.bindConsumer(barTapName, "tap2", module3InputChannel,
consumerProperties);
Message<?> message = MessageBuilder.withPayload("foo".getBytes()).build();
boolean success = false;
boolean retried = false;
while (!success) {
moduleOutputChannel.send(message);
Message<?> inbound = receive(module1InputChannel);
assertThat(inbound).isNotNull();
assertThat(new String((byte[]) inbound.getPayload())).isEqualTo("foo");
Message<?> tapped1 = receive(module2InputChannel);
Message<?> tapped2 = receive(module3InputChannel);
if (tapped1 == null || tapped2 == null) {
// listener may not have started
assertThat(retried).isFalse().withFailMessage("Failed to receive tap after retry");
retried = true;
continue;
}
success = true;
assertThat(new String((byte[]) tapped1.getPayload())).isEqualTo("foo");
assertThat(new String((byte[]) tapped2.getPayload())).isEqualTo("foo");
}
// unbind one tap; its stream is deleted
input3Binding.unbind();
Message<?> message2 = MessageBuilder.withPayload("bar".getBytes()).build();
moduleOutputChannel.send(message2);
// other tap still receives messages
Message<?> tapped = receive(module2InputChannel);
assertThat(tapped).isNotNull();
// removed tap does not
assertThat(receive(module3InputChannel)).isNull();
// re-subscribed tap does receive the message
input3Binding = binder.bindConsumer(barTapName, "tap2", module3InputChannel, createConsumerProperties());
assertThat(receive(module3InputChannel)).isNotNull();
// clean up
input1Binding.unbind();
input2Binding.unbind();
input3Binding.unbind();
producerBinding.unbind();
assertThat(extractEndpoint(input1Binding).isRunning()).isFalse();
assertThat(extractEndpoint(input2Binding).isRunning()).isFalse();
assertThat(extractEndpoint(input3Binding).isRunning()).isFalse();
assertThat(extractEndpoint(producerBinding).isRunning()).isFalse();
}
}


@@ -0,0 +1,47 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.bootstrap;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.KafkaEmbedded;
/**
* @author Marius Bogoevici
*/
public class KafkaBinderBootstrapTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
@Test
public void testKafkaBinderConfiguration() throws Exception {
ConfigurableApplicationContext applicationContext = new SpringApplicationBuilder(SimpleApplication.class)
.web(false)
.run("--spring.cloud.stream.kafka.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
applicationContext.close();
}
@SpringBootApplication
static class SimpleApplication {
}
}
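
The two flags passed to the SpringApplicationBuilder are the same ones a packaged application would take on the command line; for example, against a local broker (addresses and jar name are placeholders):

java -jar my-stream-app.jar \
  --spring.cloud.stream.kafka.binder.brokers=localhost:9092 \
  --spring.cloud.stream.kafka.binder.zkNodes=localhost:2181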


@@ -0,0 +1,10 @@
spring.kafka.producer.keySerializer=org.apache.kafka.common.serialization.LongSerializer
spring.kafka.producer.valueSerializer=org.apache.kafka.common.serialization.LongSerializer
spring.kafka.consumer.keyDeserializer=org.apache.kafka.common.serialization.LongDeserializer
spring.kafka.consumer.valueDeserializer=org.apache.kafka.common.serialization.LongDeserializer
spring.kafka.producer.batchSize=10
spring.kafka.bootstrapServers=10.98.09.199:9092,10.98.09.196:9092
spring.kafka.producer.compressionType=snappy
# Test consumer properties
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.group-id=groupIdFromBootConfig


@@ -0,0 +1 @@
spring.cloud.stream.kafka.binder.brokers=10.98.09.199:9082


@@ -11,4 +11,4 @@
<root level="WARN">
<appender-ref ref="stdout"/>
</root>
-</configuration>
+</configuration>


@@ -0,0 +1,69 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kstream</artifactId>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kstream</name>
<description>Kafka Streams Binder Implementation</description>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.3.4.RELEASE</version>
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-codec</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-autoconfigure</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>


@@ -0,0 +1,198 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderHeaders;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.EmbeddedHeaderUtils;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.MessageValues;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kstream.config.KStreamConsumerProperties;
import org.springframework.cloud.stream.binder.kstream.config.KStreamExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kstream.config.KStreamProducerProperties;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.util.MimeType;
import org.springframework.util.StringUtils;
/**
* @author Marius Bogoevici
*/
public class KStreamBinder extends
AbstractBinder<KStream<Object, Object>, ExtendedConsumerProperties<KStreamConsumerProperties>, ExtendedProducerProperties<KStreamProducerProperties>>
implements ExtendedPropertiesBinder<KStream<Object, Object>, KStreamConsumerProperties, KStreamProducerProperties> {
private String[] headers;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final KStreamExtendedBindingProperties kStreamExtendedBindingProperties;
private final StreamsConfig streamsConfig;
private final KafkaBinderConfigurationProperties binderConfigurationProperties;
public KStreamBinder(KafkaBinderConfigurationProperties binderConfigurationProperties, KafkaTopicProvisioner kafkaTopicProvisioner,
KStreamExtendedBindingProperties kStreamExtendedBindingProperties, StreamsConfig streamsConfig) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.headers = EmbeddedHeaderUtils.headersToEmbed(binderConfigurationProperties.getHeaders());
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kStreamExtendedBindingProperties = kStreamExtendedBindingProperties;
this.streamsConfig = streamsConfig;
}
@Override
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget, ExtendedConsumerProperties<KStreamConsumerProperties> properties) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<KafkaConsumerProperties>(
new KafkaConsumerProperties());
this.kafkaTopicProvisioner.provisionConsumerDestination(name, group, extendedConsumerProperties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindProducer(String name, KStream<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KStreamProducerProperties> properties) {
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<KafkaProducerProperties>(
new KafkaProducerProperties());
this.kafkaTopicProvisioner.provisionProducerDestination(name, extendedProducerProperties);
if (HeaderMode.embeddedHeaders.equals(properties.getHeaderMode())) {
outboundBindTarget = outboundBindTarget.map(new KeyValueMapper<Object, Object, KeyValue<Object, Object>>() {
@Override
public KeyValue<Object, Object> apply(Object k, Object v) {
if (v instanceof Message) {
try {
return new KeyValue<>(k, (Object) KStreamBinder.this.serializeAndEmbedHeadersIfApplicable((Message<?>) v));
}
catch (Exception e) {
throw new IllegalArgumentException(e);
}
}
else {
throw new IllegalArgumentException("Wrong type of message " + v);
}
}
});
}
else {
if (!properties.isUseNativeEncoding()) {
outboundBindTarget = outboundBindTarget
.map(new KeyValueMapper<Object, Object, KeyValue<Object, Object>>() {
@Override
public KeyValue<Object, Object> apply(Object k, Object v) {
return KeyValue.pair(k, (Object) KStreamBinder.this.serializePayloadIfNecessary((Message<?>) v));
}
});
}
else {
outboundBindTarget = outboundBindTarget
.map(new KeyValueMapper<Object, Object, KeyValue<Object, Object>>() {
@Override
public KeyValue<Object, Object> apply(Object k, Object v) {
return KeyValue.pair(k, ((Message<?>) v).getPayload());
}
});
}
}
if (!properties.isUseNativeEncoding() || StringUtils.hasText(properties.getExtension().getKeySerde()) || StringUtils.hasText(properties.getExtension().getValueSerde())) {
try {
Serde<?> keySerde;
Serde<?> valueSerde;
if (StringUtils.hasText(properties.getExtension().getKeySerde())) {
keySerde = Utils.newInstance(properties.getExtension().getKeySerde(), Serde.class);
if (keySerde instanceof Configurable) {
((Configurable) keySerde).configure(streamsConfig.originals());
}
}
else {
keySerde = this.binderConfigurationProperties.getConfiguration().containsKey("key.serde") ?
Utils.newInstance(this.binderConfigurationProperties.getConfiguration().get("key.serde"), Serde.class) : Serdes.ByteArray();
}
if (StringUtils.hasText(properties.getExtension().getValueSerde())) {
valueSerde = Utils.newInstance(properties.getExtension().getValueSerde(), Serde.class);
if (valueSerde instanceof Configurable) {
((Configurable) valueSerde).configure(streamsConfig.originals());
}
}
else {
valueSerde = this.binderConfigurationProperties.getConfiguration().containsKey("value.serde") ?
Utils.newInstance(this.binderConfigurationProperties.getConfiguration().get("value.serde"), Serde.class) : Serdes.ByteArray();
}
outboundBindTarget.to((Serde<Object>) keySerde, (Serde<Object>) valueSerde, name);
}
catch (ClassNotFoundException e) {
throw new IllegalStateException("Serde class not found: ", e);
}
}
else {
outboundBindTarget.to(name);
}
return new DefaultBinding<>(name, null, outboundBindTarget, null);
}
private byte[] serializeAndEmbedHeadersIfApplicable(Message<?> message) throws Exception {
MessageValues transformed = serializePayloadIfNecessary(message);
byte[] payload;
Object contentType = transformed.get(MessageHeaders.CONTENT_TYPE);
// transform content type headers to String, so that they can be properly embedded
// in JSON
if (contentType instanceof MimeType) {
transformed.put(MessageHeaders.CONTENT_TYPE, contentType.toString());
}
Object originalContentType = transformed.get(BinderHeaders.BINDER_ORIGINAL_CONTENT_TYPE);
if (originalContentType instanceof MimeType) {
transformed.put(BinderHeaders.BINDER_ORIGINAL_CONTENT_TYPE, originalContentType.toString());
}
payload = EmbeddedHeaderUtils.embedHeaders(transformed, headers);
return payload;
}
@Override
public KStreamConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kStreamExtendedBindingProperties.getExtendedConsumerProperties(channelName);
}
@Override
public KStreamProducerProperties getExtendedProducerProperties(String channelName) {
return this.kStreamExtendedBindingProperties.getExtendedProducerProperties(channelName);
}
}
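
The serde lookup in doBindProducer resolves in three steps: a binding-level keySerde/valueSerde class name wins, otherwise a binder-level key.serde/value.serde configuration entry is used, otherwise it falls back to Serdes.ByteArray(). In properties form (the binding-level names are assumed from this binder's extended-property conventions; output is an illustrative binding name):

spring.cloud.stream.kstream.binder.configuration.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kstream.binder.configuration.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kstream.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$LongSerde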


@@ -0,0 +1,167 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.EmbeddedHeaderUtils;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.MessageSerializationUtils;
import org.springframework.cloud.stream.binder.MessageValues;
import org.springframework.cloud.stream.binder.StringConvertingContentTypeResolver;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.integration.codec.Codec;
import org.springframework.integration.support.MutableMessageHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.StringUtils;
/**
* @author Marius Bogoevici
*/
public class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
private final KStreamBuilder kStreamBuilder;
private final BindingServiceProperties bindingServiceProperties;
private volatile Codec codec;
private final StringConvertingContentTypeResolver contentTypeResolver = new StringConvertingContentTypeResolver();
private volatile Map<String, Class<?>> payloadTypeCache = new ConcurrentHashMap<>();
private CompositeMessageConverterFactory compositeMessageConverterFactory;
public KStreamBoundElementFactory(KStreamBuilder streamBuilder, BindingServiceProperties bindingServiceProperties,
Codec codec, CompositeMessageConverterFactory compositeMessageConverterFactory) {
super(KStream.class);
this.bindingServiceProperties = bindingServiceProperties;
this.kStreamBuilder = streamBuilder;
this.codec = codec;
this.compositeMessageConverterFactory = compositeMessageConverterFactory;
}
@Override
public KStream createInput(String name) {
KStream<Object, Object> stream = kStreamBuilder.stream(bindingServiceProperties.getBindingDestination(name));
ConsumerProperties properties = bindingServiceProperties.getConsumerProperties(name);
if (HeaderMode.embeddedHeaders.equals(properties.getHeaderMode())) {
stream = stream.map(new KeyValueMapper<Object, Object, KeyValue<Object, Object>>() {
@Override
public KeyValue<Object, Object> apply(Object key, Object value) {
if (!(value instanceof byte[])) {
return new KeyValue<>(key, value);
}
try {
MessageValues messageValues = EmbeddedHeaderUtils
.extractHeaders(MessageBuilder.withPayload((byte[]) value).build(), true);
messageValues = deserializePayloadIfNecessary(messageValues);
return new KeyValue<Object, Object>(null, messageValues.toMessage());
}
catch (Exception e) {
throw new IllegalArgumentException(e);
}
}
});
}
return stream;
}
@Override
@SuppressWarnings("unchecked")
public KStream createOutput(final String name) {
BindingProperties bindingProperties = bindingServiceProperties.getBindingProperties(name);
String contentType = bindingProperties.getContentType();
MessageConverter messageConverter = StringUtils.hasText(contentType) ? compositeMessageConverterFactory
.getMessageConverterForType(MimeType.valueOf(contentType)) : null;
KStreamWrapperHandler handler = new KStreamWrapperHandler(messageConverter);
ProxyFactory proxyFactory = new ProxyFactory(KStreamWrapper.class, KStream.class);
proxyFactory.addAdvice(handler);
return (KStream) proxyFactory.getProxy();
}
private MessageValues deserializePayloadIfNecessary(MessageValues messageValues) {
return MessageSerializationUtils.deserializePayload(messageValues, this.contentTypeResolver, this.codec);
}
interface KStreamWrapper {
void wrap(KStream<Object, Object> delegate);
}
static class KStreamWrapperHandler implements KStreamWrapper, MethodInterceptor {
private KStream<Object, Object> delegate;
private final MessageConverter messageConverter;
public KStreamWrapperHandler(MessageConverter messageConverter) {
this.messageConverter = messageConverter;
}
public void wrap(KStream<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
if (messageConverter != null) {
KeyValueMapper<Object, Object, KeyValue<Object, Object>> keyValueMapper = new KeyValueMapper<Object, Object, KeyValue<Object, Object>>() {
@Override
public KeyValue<Object, Object> apply(Object k, Object v) {
Message<?> message = (Message<?>) v;
return new KeyValue<Object, Object>(k,
messageConverter.toMessage(message.getPayload(),
new MutableMessageHeaders(((Message<?>) v).getHeaders())));
}
};
delegate = delegate.map(keyValueMapper);
}
this.delegate = delegate;
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KStream.class)) {
Assert.notNull(delegate, "Trying to invoke " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(KStreamWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KStream method invocations are permitted");
}
}
}
}


@@ -0,0 +1,75 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.springframework.cloud.stream.binding.StreamListenerParameterAdapter;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
import org.springframework.messaging.Message;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
/**
* @author Marius Bogoevici
* @author Soby Chacko
*/
public class KStreamListenerParameterAdapter implements StreamListenerParameterAdapter<KStream<?, ?>, KStream<?, ?>> {
private final MessageConverter messageConverter;
public KStreamListenerParameterAdapter(MessageConverter messageConverter) {
this.messageConverter = messageConverter;
}
@Override
public boolean supports(Class bindingTargetType, MethodParameter methodParameter) {
return KStream.class.isAssignableFrom(bindingTargetType)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
}
@Override
@SuppressWarnings("unchecked")
public KStream adapt(KStream<?, ?> bindingTarget, MethodParameter parameter) {
ResolvableType resolvableType = ResolvableType.forMethodParameter(parameter);
final Class<?> valueClass = (resolvableType.getGeneric(1).getRawClass() != null)
? (resolvableType.getGeneric(1).getRawClass()) : Object.class;
return bindingTarget.map(new KeyValueMapper() {
@Override
public Object apply(Object o, Object o2) {
if (valueClass.isAssignableFrom(o2.getClass())) {
return new KeyValue<>(o, o2);
}
else if (o2 instanceof Message) {
return new KeyValue<>(o, messageConverter.fromMessage((Message) o2, valueClass));
}
else if (o2 instanceof String || o2 instanceof byte[]) {
Message<Object> message = MessageBuilder.withPayload(o2).build();
return new KeyValue<>(o, messageConverter.fromMessage(message, valueClass));
}
else {
return new KeyValue<>(o, o2);
}
}
});
}
}


@@ -0,0 +1,65 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
import java.io.Closeable;
import java.io.IOException;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
/**
* @author Marius Bogoevici
*/
public class KStreamStreamListenerResultAdapter implements StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
@Override
public boolean supports(Class<?> resultType, Class<?> boundElement) {
return KStream.class.isAssignableFrom(resultType) && KStream.class.isAssignableFrom(boundElement);
}
@Override
@SuppressWarnings("unchecked")
public Closeable adapt(KStream streamListenerResult, KStreamBoundElementFactory.KStreamWrapper boundElement) {
boundElement.wrap(streamListenerResult.map(new KeyValueMapper() {
@Override
public Object apply(Object k, Object v) {
if (v instanceof Message<?>) {
return new KeyValue<>(k, v);
}
else {
return new KeyValue<>(k, MessageBuilder.withPayload(v).build());
}
}
}));
return new NoOpCloseable();
}
private static final class NoOpCloseable implements Closeable {
@Override
public void close() throws IOException {
}
}
}


@@ -0,0 +1,34 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.annotations;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
/**
* @author Marius Bogoevici
*/
public interface KStreamProcessor {
@Input("input")
KStream<?, ?> input();
@Output("output")
KStream<?, ?> output();
}
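
A hedged usage sketch for this binding interface, following the @EnableBinding/@StreamListener style of this release line (the class and transform below are illustrative, not part of this changeset):

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kstream.annotations.KStreamProcessor;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(KStreamProcessor.class)
public class UppercaseProcessor {

    @StreamListener("input")
    @SendTo("output")
    public KStream<?, String> process(KStream<?, String> input) {
        // any KStream pipeline fits here; mapValues leaves keys untouched
        return input.mapValues(String::toUpperCase);
    }
}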


@@ -0,0 +1,40 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
*/
@Configuration
@EnableConfigurationProperties(KStreamApplicationSupportProperties.class)
public class KStreamApplicationSupportAutoConfiguration {
@Bean
@ConditionalOnProperty("spring.cloud.stream.kstream.timeWindow.length")
public TimeWindows configuredTimeWindow(KStreamApplicationSupportProperties processorProperties) {
return processorProperties.getTimeWindow().getAdvanceBy() > 0
? TimeWindows.of(processorProperties.getTimeWindow().getLength()).advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
: TimeWindows.of(processorProperties.getTimeWindow().getLength());
}
}


@@ -0,0 +1,64 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
import org.springframework.boot.context.properties.ConfigurationProperties;
/**
* {@link ConfigurationProperties} for end-user Kafka Streams applications. This class provides
* convenient access to commonly used Kafka Streams properties from the user application. For example,
* windowing operations are a common use case in stream processing; window-specific properties can be
* supplied at runtime and consumed by the application through this class.
*
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kstream")
public class KStreamApplicationSupportProperties {
private TimeWindow timeWindow;
public TimeWindow getTimeWindow() {
return timeWindow;
}
public void setTimeWindow(TimeWindow timeWindow) {
this.timeWindow = timeWindow;
}
public static class TimeWindow {
private int length;
private int advanceBy;
public int getLength() {
return length;
}
public void setLength(int length) {
this.length = length;
}
public int getAdvanceBy() {
return advanceBy;
}
public void setAdvanceBy(int advanceBy) {
this.advanceBy = advanceBy;
}
}
}
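
These TimeWindow properties feed the configuredTimeWindow bean shown earlier, which only activates when spring.cloud.stream.kstream.timeWindow.length is set and produces a hopping window when advanceBy is positive, a tumbling window otherwise. For example (millisecond values are illustrative):

spring.cloud.stream.kstream.timeWindow.length=60000
spring.cloud.stream.kstream.timeWindow.advanceBy=10000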


@@ -0,0 +1,96 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.utils.AppInfoParser;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.admin.AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka09AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka10AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kstream.KStreamBinder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.type.AnnotatedTypeMetadata;
/**
* @author Marius Bogoevici
*/
@Configuration
@EnableConfigurationProperties(KStreamExtendedBindingProperties.class)
public class KStreamBinderConfiguration {
@Autowired(required = false)
private AdminUtilsOperation adminUtilsOperation;
private static final Log logger = LogFactory.getLog(KStreamBinderConfiguration.class);
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, adminUtilsOperation);
}
@Bean
public KStreamBinder kStreamBinder(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KStreamExtendedBindingProperties kStreamExtendedBindingProperties, StreamsConfig streamsConfig) {
return new KStreamBinder(binderConfigurationProperties, kafkaTopicProvisioner, kStreamExtendedBindingProperties,
streamsConfig);
}
@Bean(name = "adminUtilsOperation")
@Conditional(Kafka09Present.class)
@ConditionalOnClass(name = "kafka.admin.AdminUtils")
public AdminUtilsOperation kafka09AdminUtilsOperation() {
logger.info("AdminUtils selected: Kafka 0.9 AdminUtils");
return new Kafka09AdminUtilsOperation();
}
@Bean(name = "adminUtilsOperation")
@Conditional(Kafka10Present.class)
@ConditionalOnClass(name = "kafka.admin.AdminUtils")
public AdminUtilsOperation kafka10AdminUtilsOperation() {
logger.info("AdminUtils selected: Kafka 0.10 AdminUtils");
return new Kafka10AdminUtilsOperation();
}
static class Kafka10Present implements Condition {
@Override
public boolean matches(ConditionContext conditionContext, AnnotatedTypeMetadata annotatedTypeMetadata) {
return AppInfoParser.getVersion().startsWith("0.10");
}
}
static class Kafka09Present implements Condition {
@Override
public boolean matches(ConditionContext conditionContext, AnnotatedTypeMetadata annotatedTypeMetadata) {
return AppInfoParser.getVersion().startsWith("0.9");
}
}
}


@@ -0,0 +1,103 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.UnsatisfiedDependencyException;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kstream.KStreamBoundElementFactory;
import org.springframework.cloud.stream.binder.kstream.KStreamListenerParameterAdapter;
import org.springframework.cloud.stream.binder.kstream.KStreamStreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.codec.Codec;
import org.springframework.kafka.annotation.KafkaStreamsDefaultConfiguration;
import org.springframework.kafka.core.KStreamBuilderFactoryBean;
import org.springframework.util.ObjectUtils;
/**
* @author Marius Bogoevici
*/
public class KStreamBinderSupportAutoConfiguration {
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kstream.binder")
public KafkaBinderConfigurationProperties binderConfigurationProperties() {
return new KafkaBinderConfigurationProperties();
}
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_KSTREAM_BUILDER_BEAN_NAME)
public KStreamBuilderFactoryBean defaultKStreamBuilder(
@Qualifier(KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME) ObjectProvider<StreamsConfig> streamsConfigProvider) {
StreamsConfig streamsConfig = streamsConfigProvider.getIfAvailable();
if (streamsConfig != null) {
KStreamBuilderFactoryBean kStreamBuilderFactoryBean = new KStreamBuilderFactoryBean(streamsConfig);
kStreamBuilderFactoryBean.setPhase(Integer.MAX_VALUE - 500);
return kStreamBuilderFactoryBean;
}
else {
throw new UnsatisfiedDependencyException(KafkaStreamsDefaultConfiguration.class.getName(),
KafkaStreamsDefaultConfiguration.DEFAULT_KSTREAM_BUILDER_BEAN_NAME, "streamsConfig",
"There is no '" + KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME
+ "' StreamsConfig bean in the application context.\n");
}
}
@Bean(KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public StreamsConfig streamsConfig(KafkaBinderConfigurationProperties binderConfigurationProperties) {
Properties props = new Properties();
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "default");
props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, binderConfigurationProperties.getZkConnectionString());
if (!ObjectUtils.isEmpty(binderConfigurationProperties.getConfiguration())) {
props.putAll(binderConfigurationProperties.getConfiguration());
}
return new StreamsConfig(props);
}
@Bean
public KStreamStreamListenerResultAdapter kStreamStreamListenerResultAdapter() {
return new KStreamStreamListenerResultAdapter();
}
@Bean
public KStreamListenerParameterAdapter kStreamListenerParameterAdapter(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
return new KStreamListenerParameterAdapter(
compositeMessageConverterFactory.getMessageConverterForAllRegistered());
}
@Bean
public KStreamBoundElementFactory kStreamBindableTargetFactory(KStreamBuilder kStreamBuilder,
BindingServiceProperties bindingServiceProperties, Codec codec,
CompositeMessageConverterFactory compositeMessageConverterFactory) {
return new KStreamBoundElementFactory(kStreamBuilder, bindingServiceProperties, codec,
compositeMessageConverterFactory);
}
}
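
For reference, the binderConfigurationProperties and streamsConfig beans above mean that broker coordinates and arbitrary StreamsConfig keys can be supplied under the spring.cloud.stream.kstream.binder prefix. An illustrative application.properties (host/port values are placeholders; the property names appear in the integration tests later in this diff):

spring.cloud.stream.kstream.binder.brokers=localhost:9092
spring.cloud.stream.kstream.binder.zkNodes=localhost:2181
# passed through the nested configuration map into StreamsConfig
spring.cloud.stream.kstream.binder.configuration.commit.interval.ms=1000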

KStreamBindingProperties.java

@@ -0,0 +1,43 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
/**
* @author Marius Bogoevici
*/
public class KStreamBindingProperties {
private KStreamConsumerProperties consumer = new KStreamConsumerProperties();
private KStreamProducerProperties producer = new KStreamProducerProperties();
public KStreamConsumerProperties getConsumer() {
return consumer;
}
public void setConsumer(KStreamConsumerProperties consumer) {
this.consumer = consumer;
}
public KStreamProducerProperties getProducer() {
return producer;
}
public void setProducer(KStreamProducerProperties producer) {
this.producer = producer;
}
}

KStreamCommonProperties.java

@@ -0,0 +1,43 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
/**
* @author Soby Chacko
*/
public class KStreamCommonProperties {
private String keySerde;
private String valueSerde;
public String getKeySerde() {
return keySerde;
}
public void setKeySerde(String keySerde) {
this.keySerde = keySerde;
}
public String getValueSerde() {
return valueSerde;
}
public void setValueSerde(String valueSerde) {
this.valueSerde = valueSerde;
}
}

KStreamConsumerProperties.java

@@ -1,11 +1,11 @@
 /*
- * Copyright 2015 the original author or authors.
+ * Copyright 2017 the original author or authors.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
  *
- * http://www.apache.org/licenses/LICENSE-2.0
+ * https://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,11 @@
  * limitations under the License.
  */
-/**
- * This package contains an implementation of the {@link org.springframework.cloud.stream.binder.Binder} for Kafka.
- * @author Marius Bogoevici
- */
-package org.springframework.cloud.stream.binder.kafka;
+package org.springframework.cloud.stream.binder.kstream.config;
+
+/**
+ * @author Marius Bogoevici
+ */
+public class KStreamConsumerProperties extends KStreamCommonProperties {
+}

KStreamExtendedBindingProperties.java

@@ -0,0 +1,61 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.ExtendedBindingProperties;
/**
* @author Marius Bogoevici
*/
@ConfigurationProperties("spring.cloud.stream.kstream")
public class KStreamExtendedBindingProperties
implements ExtendedBindingProperties<KStreamConsumerProperties, KStreamProducerProperties> {
private Map<String, KStreamBindingProperties> bindings = new HashMap<>();
public Map<String, KStreamBindingProperties> getBindings() {
return this.bindings;
}
public void setBindings(Map<String, KStreamBindingProperties> bindings) {
this.bindings = bindings;
}
@Override
public KStreamConsumerProperties getExtendedConsumerProperties(String binding) {
if (this.bindings.containsKey(binding) && this.bindings.get(binding).getConsumer() != null) {
return this.bindings.get(binding).getConsumer();
}
else {
return new KStreamConsumerProperties();
}
}
@Override
public KStreamProducerProperties getExtendedProducerProperties(String binding) {
if (this.bindings.containsKey(binding) && this.bindings.get(binding).getProducer() != null) {
return this.bindings.get(binding).getProducer();
}
else {
return new KStreamProducerProperties();
}
}
}
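
The two getExtended*Properties methods above fall back to fresh defaults when a binding has no explicit entry, so per-binding serde overrides are optional. An illustrative application.properties for a binding named output (the same keys are exercised as command-line arguments in the tests below):

spring.cloud.stream.kstream.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde
spring.cloud.stream.kstream.bindings.output.producer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde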

KStreamProducerProperties.java

@@ -0,0 +1,24 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
/**
* @author Marius Bogoevici
*/
public class KStreamProducerProperties extends KStreamCommonProperties {
}

META-INF/spring.binders

@@ -0,0 +1,4 @@
kstream:\
org.springframework.cloud.stream.binder.kstream.config.KStreamBinderConfiguration
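
This META-INF/spring.binders entry registers the configuration class under the binder name kstream. An application with several binders on the classpath can then pin a binding to it with the standard Spring Cloud Stream property (the binding name input here is illustrative):

spring.cloud.stream.bindings.input.binder=kstream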

META-INF/spring.factories

@@ -0,0 +1,5 @@
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kstream.config.KStreamBinderSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kstream.config.KStreamApplicationSupportAutoConfiguration

KStreamBinderPojoInputAndPrimitiveTypeOutputTests.java

@@ -0,0 +1,158 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kstream.annotations.KStreamProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
*
* @author Soby Chacko
*/
public class KStreamBinderPojoInputAndPrimitiveTypeOutputTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts-id");
private static Consumer<Integer, Long> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id", "false", embeddedKafka);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<Integer, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamBinderWithPojoInputAndStringOutput() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
app.setWebEnvironment(false);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kstream.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kstream.binder.configuration.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kstream.binder.configuration.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kstream.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kstream.bindings.output.producer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kstream.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kstream.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
receiveAndValidateFoo(context);
context.close();
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("{\"id\":\"123\"}");
ConsumerRecord<Integer, Long> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-id");
assertThat(cr.key()).isEqualTo(123);
assertThat(cr.value()).isEqualTo(1L);
}
@EnableBinding(KStreamProcessor.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@StreamListener("input")
@SendTo("output")
public KStream<Integer, Long> process(KStream<Object, Product> input) {
return input
.filter(new Predicate<Object, Product>() {
@Override
public boolean test(Object key, Product product) {
return product.getId() == 123;
}
})
.map(new KeyValueMapper<Object, Product, KeyValue<Product, Product>>() {
@Override
public KeyValue<Product, Product> apply(Object key, Product value) {
return new KeyValue<>(value, value);
}
})
.groupByKey(new JsonSerde<>(Product.class), new JsonSerde<>(Product.class))
.count(TimeWindows.of(5000), "id-count-store")
.toStream()
.map(new KeyValueMapper<Windowed<Product>, Long, KeyValue<Integer, Long>>() {
@Override
public KeyValue<Integer, Long> apply(Windowed<Product> key, Long value) {
return new KeyValue<>(key.key().id, value);
}
});
}
}
static class Product {
Integer id;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
}
}

KStreamBinderWordCountIntegrationTests.java

@@ -0,0 +1,204 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
import java.util.Arrays;
import java.util.Date;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.ValueMapper;
import org.apache.kafka.streams.kstream.Windowed;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kstream.annotations.KStreamProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
public class KStreamBinderWordCountIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts");
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamWordCountWithStringInputAndPojoOutput() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebEnvironment(false);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kstream.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kstream.binder.configuration.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kstream.binder.configuration.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kstream.timeWindow.length=5000",
"--spring.cloud.stream.kstream.timeWindow.advanceBy=0",
"--spring.cloud.stream.kstream.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kstream.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
receiveAndValidate(context);
context.close();
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts");
assertThat(cr.value().contains("\"word\":\"foobar\",\"count\":1")).isTrue();
}
@EnableBinding(KStreamProcessor.class)
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@Autowired
private TimeWindows timeWindows;
@StreamListener("input")
@SendTo("output")
public KStream<?, WordCount> process(KStream<Object, String> input) {
return input
.flatMapValues(new ValueMapper<String, Iterable<String>>() {
@Override
public List<String> apply(String value) {
return Arrays.asList(value.toLowerCase().split("\\W+"));
}
})
.map(new KeyValueMapper<Object, String, KeyValue<String, String>>() {
@Override
public KeyValue<String, String> apply(Object key, String value) {
return new KeyValue<>(value, value);
}
})
.groupByKey(Serdes.String(), Serdes.String())
.count(timeWindows, "WordCounts")
.toStream()
.map(new KeyValueMapper<Windowed<String>, Long, KeyValue<Object, WordCount>>() {
@Override
public KeyValue<Object, WordCount> apply(Windowed<String> key, Long value) {
return new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end())));
}
});
}
}
static class WordCount {
private String word;
private long count;
private Date start;
private Date end;
WordCount(String word, long count, Date start, Date end) {
this.word = word;
this.count = count;
this.start = start;
this.end = end;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
public Date getStart() {
return start;
}
public void setStart(Date start) {
this.start = start;
}
public Date getEnd() {
return end;
}
public void setEnd(Date end) {
this.end = end;
}
}
}
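
The @Autowired TimeWindows in this test is supplied by KStreamApplicationSupportAutoConfiguration (registered in spring.factories above) and driven by the spring.cloud.stream.kstream.timeWindow.* arguments. A hedged sketch of what such a bean could look like — the property holder KStreamTimeWindowProperties is a hypothetical name, not shown in this diff:

import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class TimeWindowsSketch {

    @Bean
    public TimeWindows timeWindows(KStreamTimeWindowProperties props) { // hypothetical properties holder
        // length maps to spring.cloud.stream.kstream.timeWindow.length (ms)
        TimeWindows windows = TimeWindows.of(props.getLength());
        // advanceBy maps to spring.cloud.stream.kstream.timeWindow.advanceBy; 0 keeps tumbling windows
        return props.getAdvanceBy() > 0 ? windows.advanceBy(props.getAdvanceBy()) : windows;
    }
}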

KStreamInteractiveQueryIntegrationTests.java

@@ -0,0 +1,184 @@
/*
* Copyright 2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.apache.kafka.streams.kstream.Predicate;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kstream.annotations.KStreamProcessor;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KStreamBuilderFactoryBean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.SendTo;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Soby Chacko
*/
public class KStreamInteractiveQueryIntegrationTests {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "counts-id");
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id", "false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
}
@AfterClass
public static void tearDown() {
consumer.close();
}
@Test
public void testKstreamBinderWithPojoInputAndStringOutput() throws Exception {
SpringApplication app = new SpringApplication(ProductCountApplication.class);
app.setWebEnvironment(false);
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.cloud.stream.bindings.input.destination=foos",
"--spring.cloud.stream.bindings.output.destination=counts-id",
"--spring.cloud.stream.kstream.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kstream.binder.configuration.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kstream.binder.configuration.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kstream.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kstream.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
receiveAndValidateFoo(context);
context.close();
}
private void receiveAndValidateFoo(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("{\"id\":\"123\"}");
ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer, "counts-id");
assertThat(cr.value().contains("Count for product with ID 123: 1")).isTrue();
ProductCountApplication.Foo foo = context.getBean(ProductCountApplication.Foo.class);
assertThat(foo.getProductStock(123)).isEqualTo(1L);
}
@EnableBinding(KStreamProcessor.class)
@EnableAutoConfiguration
public static class ProductCountApplication {
@Autowired
private KStreamBuilderFactoryBean kStreamBuilderFactoryBean;
@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<Object, Product> input) {
return input
.filter(new Predicate<Object, Product>() {
@Override
public boolean test(Object key, Product product) {
return product.getId() == 123;
}
})
.map(new KeyValueMapper<Object, Product, KeyValue<Integer, Product>>() {
@Override
public KeyValue<Integer, Product> apply(Object key, Product value) {
return new KeyValue<>(value.id, value);
}
})
.groupByKey(new Serdes.IntegerSerde(), new JsonSerde<>(Product.class))
.count("prod-id-count-store")
.toStream()
.map(new KeyValueMapper<Integer, Long, KeyValue<Object, String>>() {
@Override
public KeyValue<Object, String> apply(Integer key, Long value) {
return new KeyValue<>(null, "Count for product with ID 123: " + value);
}
});
}
@Bean
public Foo foo(KStreamBuilderFactoryBean kStreamBuilderFactoryBean) {
return new Foo(kStreamBuilderFactoryBean);
}
static class Foo {
KStreamBuilderFactoryBean kStreamBuilderFactoryBean;
Foo(KStreamBuilderFactoryBean kStreamBuilderFactoryBean) {
this.kStreamBuilderFactoryBean = kStreamBuilderFactoryBean;
}
public Long getProductStock(Integer id) {
KafkaStreams streams = kStreamBuilderFactoryBean.getKafkaStreams();
ReadOnlyKeyValueStore<Object, Object> keyValueStore =
streams.store("prod-id-count-store", QueryableStoreTypes.keyValueStore());
return (Long)keyValueStore.get(id);
}
}
}
static class Product {
Integer id;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
}
}
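
One caveat worth noting about the interactive-query helper above: in this Kafka Streams API, KafkaStreams.store(...) throws InvalidStateStoreException while the application is still starting or rebalancing. A hedged variant of Foo.getProductStock with a simple bounded retry (the retry policy is an assumption, not part of this diff):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public Long getProductStock(Integer id) {
    KafkaStreams streams = kStreamBuilderFactoryBean.getKafkaStreams();
    for (int attempt = 0; attempt < 10; attempt++) {
        try {
            ReadOnlyKeyValueStore<Object, Object> store =
                    streams.store("prod-id-count-store", QueryableStoreTypes.keyValueStore());
            return (Long) store.get(id);
        }
        catch (InvalidStateStoreException e) {
            // store not queryable yet; back off briefly and retry
            try {
                Thread.sleep(100);
            }
            catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
    }
    return null;
}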

Some files were not shown because too many files have changed in this diff.