Compare commits


172 Commits

Author SHA1 Message Date
Soby Chacko
74a044b0c0 2.1.0.M3 Release 2018-09-21 11:00:17 -04:00
Soby Chacko
70f385553a Application ID resolution for Kafka streams binder (#450)
* Application ID resolution for Kafka Streams binder

Avoid the need to rely on the group property for the application id.
First check the binding-specific application id; if that is not set, look at the defaults.
If neither is set, fall back to the default application id set by the
Boot auto-configuration.
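For illustration, a minimal configuration sketch of this resolution order; the property
paths shown are assumptions (a binding-level applicationId and a default-level applicationId)
and may differ by version:

```
# checked first: binding-specific application id (assumed property path)
spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=order-processor
# checked next: shared default extended consumer property (assumed property path)
spring.cloud.stream.kafka.streams.default.consumer.applicationId=shared-app-id
# if neither is set, the application id auto-configured by Boot is used
```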

Modify tests.
Update docs.

Resolves #448

* Addressing PR review comments

* Minor polishing

* Addressing PR review comments
2018-09-21 10:17:34 -04:00
Soby Chacko
f5739a9c7f Missing beans registration in Kafka Streams binder
There is duplicate code in various binder configurations where we register missing beans.
Consolidate it into a common class that implements ImportBeanDefinitionRegistrar and then
import this class in the binder configurations.

Resolves #445
2018-09-20 11:28:32 -04:00
Soby Chacko
39f5490488 Refactor bootstrap server configuration
There is a common piece of code repeated in both the producer and consumer
configuration where the bootstrap server configuration is populated.
Refactor it into a common method.

Resolves #208
2018-09-19 16:15:19 -04:00
Soby Chacko
d445a2bff9 Docs for GlobalKTable binding
Resolves #443
2018-09-19 15:05:48 -04:00
Oleg Zhurakousky
5446485cbf Updating with changes related to core GH-1484
Resolves #447
2018-09-19 10:45:04 +02:00
Soby Chacko
cc61d916b8 Extended default properties
Allow applications to configure default values for extended producer and
consumer properties across multiple bindings in order to avoid repetition.
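As an illustrative sketch (autoCommitOffset is just one example of an extended consumer property):

```
# default applied to every binding that does not override it
spring.cloud.stream.kafka.default.consumer.autoCommitOffset=false
# a binding-specific value takes precedence over the default above
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=true
```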

Address the changes in both Kafka and Kafka Streams binders.

Add integration test to verify both binding specific and default extended properties.

Resolves #444
Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1477
2018-09-17 20:09:44 -04:00
Soby Chacko
41d9136812 GH-384: Binding support for GlobalKTable
Resolves spring-cloud/spring-cloud-stream-binder-kafka#384

* New binding targets and binder implementation for GlobalKTable within kafka-streams binder
* Refactoring existing structure to accommodate the new binder
* Adding integration test to verify the GlobalKTable behavior

Resolves #384

* Addressing PR review comments

* Update spring-kafka to 2.2.0.M3
Addressing PR review comments
Polishing

* Addressing PR review comments

* Addressing PR review comments
2018-09-14 10:51:15 -04:00
Soby Chacko
71a6a6cf28 Package structure changes for StreamBuilderFactoryBean 2018-09-12 12:05:52 -04:00
Gary Russell
36361d19bf GH-1459: Pollable Consumer and Requeue
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1459

Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1467
2018-09-10 10:22:04 -04:00
Soby Chacko
e869516e3d Handling tombstones in kafka streams binder
Handle tombstones gracefully in the kafka streams binder
Modify tests to verify

Resolves spring-cloud/spring-cloud-stream-binder-kafka#294

* Addressing PR review comments

* Addressing PR review comments
2018-09-05 12:17:54 -04:00
Gary Russell
ea288c62eb GH-435: useNativeEncoding and Transactions
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/435

All common properties should be settable on the transactional producer.
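A hedged configuration sketch, assuming the binder's transaction.producer.* group is where
common producer properties are applied to the transactional producer:

```
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-
# common producer properties, such as useNativeEncoding, applied to the transactional producer
spring.cloud.stream.kafka.binder.transaction.producer.useNativeEncoding=true
```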
2018-09-04 15:56:26 -04:00
Soby Chacko
afa337f718 Remove deprecations in tests
Resolves #433
2018-09-04 15:28:36 -04:00
Soby Chacko
e4b08e888e Next update version: 2.1.0.BUILD-SNAPSHOT 2018-08-28 10:15:08 -04:00
Soby Chacko
de4b5a9443 2.1.0.M2 Release 2018-08-28 10:04:25 -04:00
Soby Chacko
a090614709 Kafka Streams binder configuration fix
(to address core changes made in regard to Boot 2.1)

Ensure that KafkaStreamsBinderSupportAutoConfiguration runs
after BindingServiceConfiguration from core.
2018-08-14 10:33:51 -04:00
Gary Russell
a3f7ca9756 Fix mock to use poll(Duration) 2018-08-07 14:45:33 -04:00
Soby Chacko
82b07a0120 Fixing checkstyle issues 2018-08-07 14:23:47 -04:00
Soby Chacko
0d0cf8dcb7 Update to spring-cloud-build 2.1.0 snapshot
Spring Boot 2.1.0 snapshot
Spring Kafka 2.2.0/3.1.0 snapshots
Apache Kafka client 2.0.0
Fixing tests
Removing deprecations and removals
Polishing

Resolves #424
2018-08-07 14:02:52 -04:00
Soby Chacko
cbfe03be2f Kafka Streams binder test disabling
Temporarily disabling KafkaBinderBootstrapTest in the Kafka Streams binder
because it makes builds take significantly longer.
2018-08-03 08:11:52 -04:00
Soby Chacko
1f0c6cabc6 Revert "Kafka Streams binder test disabling"
This reverts commit 0c61cedc85.
2018-08-03 08:10:59 -04:00
Soby Chacko
0c61cedc85 Kafka Streams binder test disabling
Temporarily disabling KafkaBinderBootstrapTest in the Kafka Streams binder
because it makes builds take significantly longer.
2018-08-02 12:59:02 -04:00
Soby Chacko
44210f1b72 Update Kafka binder metrics docs
Fix the wrong metric name used in the Kafka binder metrics for consumer offset lag.
Update the description.

Resolves #422
2018-08-02 09:57:27 -04:00
Soby Chacko
7007f9494a JAAS initializer regression
Fix the JAAS initializer by setting the missing properties.

Resolves #419

Polishing
2018-07-27 14:34:17 -04:00
Gary Russell
da268bb6dd GH-309: Use actual partition count
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/309

If more partitions exist than those configured, use the actual count.

Resolves #416
2018-07-25 20:31:27 +02:00
Soby Chacko
e03baefbaf Kafka streams binder issues in custom environment
When the Kafka Streams binder is used in the context of a custom environment, possibly
for a multi-binder use case, it is unable to query the outer context because the normal
parent context is absent. This change ensures that the binder context has access
to the outer context so that it can use any beans it needs from it in the KStream or KTable
binder configuration.

Resolves #411
2018-07-25 13:14:47 -04:00
Gary Russell
01396b6573 GH-413: Configure Kafka Streams Cleanup Behavior
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/413
2018-07-24 16:53:57 -04:00
Soby Chacko
3f009a8267 Autoconfigure optimization
Add spring-boot-autoconfigure-processor to the kafka-streams binder for
auto configuration optimization.

Resolves #406
2018-07-24 15:56:48 -04:00
Gary Russell
31bb86b002 GH-404: Synchronize shared consumer
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/404

The fix for issue https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/231
added a shared consumer but the consumer is not thread safe. Add synchronization.

Also, a timeout was added to the `KafkaBinderHealthIndicator` but not to the
`KafkaBinderMetrics` which has a similar shared consumer; add a timeout there.
2018-07-11 13:30:24 -04:00
Soby Chacko
cc2dfd1d08 Upgrade kafka client to 1.1.0 (#405)
* Upgrade kafka client to 1.1.0

Upgrade kafka client to 1.1.0 for both kafka and kafka-streams binders

Resolves #370

* Address review comments
2018-07-09 17:37:58 -04:00
jmaxwell
fc768ba695 GH-402 Add additional data to DLQ message headers 2018-07-02 11:34:46 -04:00
Soby Chacko
38d6deb4d5 Provide programmatic access to KafkaStreams object
Providing access to the underlying StreamBuilderFactoryBean by making the bean name
deterministic. Earlier, the binder was using a UUID to make the stream builder factory
bean names unique in the event of multiple StreamListeners. Switching to use the
method name instead keeps the StreamBuilder factory beans unique while providing
a deterministic way of giving them programmatic access.

Polishing docs

Fixes #396
2018-06-29 16:45:55 -04:00
Soby Chacko
2a0b9015de Next update version: 2.1.0.BUILD-SNAPSHOT 2018-06-27 14:11:40 -04:00
Soby Chacko
321919abc9 2.1.0.M1 2018-06-27 13:55:58 -04:00
Soby Chacko
b09def9ccc Polishing Kafka Streams binder docs
Resolves #390
2018-06-27 11:25:22 -04:00
Soby Chacko
54d7c333d3 Interactive query - polishing
Renaming InteractiveQueryServices to InteractiveQueryService
2018-06-27 11:03:46 -04:00
Soby Chacko
015b1a7fa1 Fix bad link in docs
Resolves #359
2018-06-27 10:14:17 -04:00
Soby Chacko
020821f471 Fix typo in kafka streams docs
Resolves #400
2018-06-27 09:08:21 -04:00
UltimaPhoenix
2e14ba99e3 Fix unit test
Remove unnecessary semicolon

Replace deprecated method with the new one

Test refactoring
2018-06-27 08:56:48 -04:00
Oleg Zhurakousky
bd5cc2f89d GH-398 added support for container customization
Please see https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/139 for more details

Resolves #398

Added test and polishing
2018-06-27 08:45:31 -04:00
Gary Russell
ca2983c881 GH-373: Support multiplexed consumers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/373

When a consumer is multiplexed, configure the container to listen to multiple topics.
Also for the polled consumer.

When using a DLQ, determine the queue name from the topic in the failed record (unless
an explicit DLQ name has been provisioned - in which case, the same DLQ will be used
for all topics).
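For example (the multiplex consumer property shown here is an assumption about the core
property used to enable this behavior):

```
# one consumer listening to both topics
spring.cloud.stream.bindings.input.destination=orders,audit-events
spring.cloud.stream.bindings.input.group=myGroup
spring.cloud.stream.bindings.input.consumer.multiplex=true
```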

Resolves #383
2018-06-12 14:52:51 -04:00
Soby Chacko
1b00179f25 Kafka Streams interactive query enhancements
* When running interactive queries against multiple instances under the same application id,
  ensure that the application can retrieve the proper instance that is hosting the queried state store
* Introduce a new API level service called InteractiveQueryServices
* Perform refactoring to support this enhancement
* Deprecate QueryableStoreRegistry in 2.1.0 in favor of InteractiveQueryServices

Resolves #369
2018-06-08 11:22:31 -04:00
Artem Bilan
f533177d21 GH-381: Remove duplicated SCSt-binder-test dep
Fixes spring-cloud/spring-cloud-stream-binder-kafka#381
2018-05-14 15:25:36 -04:00
Gary Russell
990a5142ce GH-339: Support topic patterns
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/339

- support regex patterns for consumer destinations to consume from multiple topics
- `enableDlq` is not available in this mode

As well as the test case, tested with a boot app...

```
spring.cloud.stream.bindings.input.destination=kbgh339.*
spring.cloud.stream.kafka.bindings.input.consumer.destination-is-pattern=true
```

and

```
2018-04-30 15:12:49.718  : partitions assigned: [kbgh339a-0]
2018-04-30 15:17:46.585  : partitions revoked: [kbgh339a-0]
2018-04-30 15:17:46.655  : partitions assigned: [kbgh339a-0, kbgh339b-0]
```

after adding a new topic matching the pattern.

Doc polishing.
2018-05-10 11:10:49 -04:00
Soby Chacko
13693e8e66 Kafka Streams DLQ related changes
DLQ handling needs to be adjusted in kafka streams binder due to the multiplexing of input topics.
This commit changes it accordingly in KStream and KTable binders.
Add tests to verify.
2018-05-01 12:02:53 -04:00
Soby Chacko
70cd7dc2f9 Upgrade Spring Cloud Stream version to 2.1.0 snapshot
Always set multiplex to true in the Kafka Streams binder
2018-05-01 10:10:42 -04:00
Sarath Shyam
894597309b Fix kafka-streams binder to consume messages from multiple input topics #361
Modified KafkaStreamsStreamListenerSetupMethodOrchestrator#getkStream
so that the KStream object is built from a list of topic names
2018-04-30 15:05:47 -04:00
Thomas Cheyney
74689bf007 Reuse Kafka consumer Metrics
Polishing
2018-04-30 14:19:53 -04:00
Gary Russell
d2012c287a GH-360: Improve Binder Producer/Consumer Config
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/360

`producer-configuration` and `consumer-configuration` improperly appear in content-assist.

These are methods used by the binders to get merged configuration data (boot and binder).

Rename the methods and add `producerProperties` and `consumerProperties` to allow
configuration.
2018-04-27 12:37:48 -04:00
slamhan
109295464f QueryableStore retrieval stops at InvalidStateStoreException
If there are multiple streams, there is a code path that throws
a premature InvalidStateStoreException. Fixing that issue.

Fixes #366

Polishing.
2018-04-24 11:13:25 -04:00
Lei Chen
cf41a8a4eb Allow Kafka Streams state store creation
* Allow Kafka Streams state store creation when using process/transform method in DSL
 * Add unit test for state store
 * Address code review comments
 * Add author and javadocs
 * Integration test fixing for state store
 * Polishing
2018-04-20 15:53:43 -04:00
Soby Chacko
77540a2027 Kafka Streams initializr image for docs 2018-04-12 17:51:40 -04:00
Danish Garg
a2592698c9 Changed occurrences of map calls on Kafka Streams to mapValues
Resolves #357
2018-04-11 15:19:21 -04:00
Oleg Zhurakousky
5543b4ed1e Post-release update to 2.1.0.BUILD-SNAPSHOT 2018-04-06 15:17:12 -04:00
Oleg Zhurakousky
acb8eef43b 2.0.0.RELEASE 2018-04-06 14:17:24 -04:00
Soby Chacko
b3f8cf41ef 2.0 release updates 2018-04-06 13:00:21 -04:00
Gary Russell
cbf693f14e Fix Stub in mock tests to return the group id
Needed for SK 2.1.5.
2018-04-06 12:44:24 -04:00
Gary Russell
39dd048ee5 Fix DLQ and raw/embedded headers
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/351

- DLQ should support embedded headers for backwards compatibility with 1.x apps
- DLQ should support `HeaderMode.none` for when using older brokers with raw data

Forward port of https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/350

Resolves #352
2018-04-06 10:56:15 -04:00
Soby Chacko
0689e87489 Polishing kafka streams docs 2018-04-04 11:25:18 -04:00
Soby Chacko
2c3787faa1 Kafka streams outbound converter changes
Use getMessageConverterForAllRegistered() from CompositeMessageConverterFactory

Resolves #353
2018-04-02 17:19:19 -04:00
Soby Chacko
84f0fb28ae Address IllegalAccessException in Kafka Streams binder
When StreamListener methods are contained in a top level non-public class, Kafka
Streams binder throws an IllegalAccessException. Fixing it by making it accessible.

Resolves #348
2018-04-02 15:32:08 -04:00
Soby Chacko
710ff2c292 Fix NPE in Kafka Streams binder
* Fix NPE in Kafka Streams binder

Fix NPE when user provided consumer properties are missing in kafka streams binder

Resolves #343

* Addressing PR review comments
2018-03-28 13:32:37 -04:00
Oleg Zhurakousky
11a275a299 Merge pull request #342 from bewithvk/patch-1
Fix the broken image link of kafka-binder.
2018-03-24 08:00:28 -04:00
bewithvk
3526a298c8 Fix the broken image link of kafka-binder. 2018-03-21 10:45:43 -05:00
Jay Bryant
e152d0c073 Correcting my own error
I inadvertently added a word (I had two sentences in mind and got parts of both of them).

Resolves #341
2018-03-21 11:01:36 -04:00
Jay Bryant
976b903352 Incorporated feedback from Gary Russell
Gary caught a grammatical goof and pointed out some content that can be removed, because Kafka now has a feature it didn't use to have. That prompted me to rewrite the leader paragraph above that content, too.

Thanks, Gary.
2018-03-20 12:20:15 -05:00
Jay Bryant
9861c80355 Full editing pass for Spring Cloud Stream Kafka Binder
I corrected grammar and spelling and edited for a corporate voice. I also added a link or two.
2018-03-20 09:19:29 -05:00
Soby Chacko
7057e225df Back to 2.0.0.BUILD-SNAPSHOT 2018-03-12 16:31:48 -04:00
Soby Chacko
b8267ea81e 2.0.0.RC3 Release 2018-03-12 15:23:09 -04:00
Gary Russell
2b595b004f GH-337: Add ackEachRecord property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/337
Resolves #338
2018-03-10 11:02:50 -05:00
Gary Russell
de45edc962 Add missing binder headerMapperBeanName to doc. 2018-03-08 11:46:03 -05:00
Gary Russell
2406fe5237 Event Publisher Polishing
Now that the abstract binder makes its event publisher available to subclasses,
use it, if present, instead of the application context.
In most cases, they will be the same object, but the user might override the
publisher.

Resolves #336
2018-03-08 09:23:14 -05:00
Oleg Zhurakousky
b5a0013e1e GH-330 Polishing
Resolves #330
Resolves #334
2018-03-08 09:11:11 -05:00
Gary Russell
c814ad5595 GH-330: Kafka Topic Provisioning Improvements
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/330

- allow override of binder-wide `replicationFactor` for each binding
- allow specific partition/replica configuration
- allow setting `NewTopic.configs()` properties, similar to the consumer and producer
- use a new `AdminClient` for provisioning (and `close()` it) instead of keeping a long-lived connection open.
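A configuration sketch of the per-binding provisioning options; the admin.* property names
are assumptions based on this change and may differ by version:

```
# per-binding override of the binder-wide replicationFactor
spring.cloud.stream.kafka.bindings.output.producer.admin.replication-factor=3
# NewTopic.configs()-style properties for the provisioned topic
spring.cloud.stream.kafka.bindings.output.producer.admin.configuration.retention.ms=604800000
```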
2018-03-08 09:10:27 -05:00
Oleg Zhurakousky
def2c3d0ed SCST-GH-1259 Polishing
- added '@DeprecatedConfigurationProperty'
- minor doc polishing

Resolves #335
2018-03-07 16:04:44 -05:00
Gary Russell
10a44d1e44 SCST-GH-1259: Kafka Binder Doc Polishing
Fixes https://github.com/spring-cloud/spring-cloud-stream/issues/1259

Also deprecate properties that are no longer used.

Missed a save.
2018-03-07 16:03:36 -05:00
Oleg Zhurakousky
0de078ca48 GH-326 added KafkaAutoConfiguration to KafkaBinderConfiguration
- added KafkaAutoConfiguration to the @Import of KafkaBinderConfiguration
- removed 'optional' flag for KafkaProperties from KafkaBinderConfigurationProperties
- fixed KafkaBinderAutoConfigurationPropertiesTest

Resolves #326
Resolves #333
2018-03-06 13:30:41 -05:00
Gary Russell
8035e25359 GH-67: Workaround for SK GH-599
Fixes #67

Spring Kafka currently doesn't support `TPIO.SeekPosition` for initial offsets.
Instead, use 0 and `Long.MAX_VALUE` for `BEGINNING` and `END` respectively.

Resolves #331
2018-03-06 13:16:53 -05:00
Artem Bilan
bcf15ed3be GH-328: Make KafkaBinderMetrics bean conditional
Fixes: spring-cloud/spring-cloud-stream-binder-kafka#328

Since we consider the Micrometer dependency optional, it is better not to
expose beans that depend on that library

* Move `KafkaBinderMetrics` to its own `@Configuration` class with
appropriate conditions on the classpath and beans presence
* Add an `ApplicationContextRunner`-based test-case to achieve a
condition when Micrometer is not in classpath via `FilteredClassLoader`
hook

Resolves #328
Resolves #332
2018-03-06 12:40:16 -05:00
Gary Russell
d37ef750ad GH-67: Reinstate resetOffsets property
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/67

Currently only supported with group management (`autoRebalanceEnabled` - default true).

See https://github.com/spring-projects/spring-kafka/issues/599
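A minimal sketch of the reinstated property used together with startOffset (group management
required, as noted above):

```
spring.cloud.stream.bindings.input.group=myGroup
# seek back to the configured startOffset on every startup
spring.cloud.stream.kafka.bindings.input.consumer.resetOffsets=true
spring.cloud.stream.kafka.bindings.input.consumer.startOffset=earliest
```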

Resolves #67
Resolves #329
2018-03-05 20:50:17 -05:00
Oleg Zhurakousky
d8baca5a66 Polishing
Resolves #325
2018-03-05 20:03:24 -05:00
Jon Schneider
b9d7f1f537 Change KafkaBinderMetrics to a tagged TimeGauge 2018-03-05 19:56:12 -05:00
Oleg Zhurakousky
ad819ece92 GH-322 added @Conditional for 'KafkaBinderHealthIndicator' bean 2018-03-02 09:08:39 -05:00
Soby Chacko
e254968eaf Next version: 2.0.0.BUILD-SNAPSHOT 2018-03-01 09:30:11 -05:00
Soby Chacko
25a64e1d85 2.0.0.RC2 Release 2018-03-01 09:15:56 -05:00
Oleg Zhurakousky
353c89ab63 GH-250 Reworked binding event propagation
- Hooked up to the new `BindingCreatedEvent`

Simple polishing

Fixes spring-cloud/spring-cloud-stream-binder-kafka#250
2018-02-28 15:47:47 -05:00
Artem Bilan
227d8f39f6 GH-318: Fix KafkaBinderMetrics for Micrometer
Fixes spring-cloud/spring-cloud-stream-binder-kafka#318

* Use the `ToDoubleFunction`-based `MeterRegistry.gauge()` variant to really
calculate the `lag` at runtime instead of defining it statically up front
* Add a `KafkaBinderActuatorTests` integration test to demonstrate how
the `Binder` is declared in a separate child context and how
`KafkaBinderMetrics` is not visible from the parent context.
This test also verifies the real `gauge` value for the `consumer lag`
and should be used in the future to verify the `KafkaBinderMetrics`
exposure by removing the code after the TODO in the `KafkaMetricsTestConfig`
2018-02-27 18:51:24 -05:00
Oleg Zhurakousky
555d3ccbd8 GH-319 bumped up SIK and SK versions
Resolves #319
2018-02-27 11:30:36 -05:00
Oleg Zhurakousky
d7c5c9c13b GH-1211 made Actuator optional 2018-02-26 23:03:39 -05:00
Sabby Anandan
e23af0ccc1 Revise Kafka Streams docs (#317)
* Revise Kafka Streams docs

* Remove KStream starter-pom reference

* Remove KStreams from aggregate docs
2018-02-26 19:14:54 -05:00
Soby Chacko
3fd93e7de8 Kafka Streams binder autoconfiguration changes
Make KafkaStreamsBinderSupportAutoConfiguration conditional
on BindingService being present in the BeanFactory.
2018-02-23 16:09:58 -05:00
Soby Chacko
90a778da6a Next version: 2.0.0.BUILD-SNAPSHOT 2018-02-23 12:53:17 -05:00
Soby Chacko
99ff426b92 2.0.0.RC1 Release 2018-02-23 12:26:44 -05:00
Rafal Zukowski
0a59b4c628 requestedAcks property changed to a string to allow setting it to "all"
polishing
2018-02-22 17:26:42 -05:00
Artem Bilan
a654382e99 Upgrade to SK-2.1.3 and SIK-3.0.2 2018-02-22 16:40:56 -05:00
Soby Chacko
9a4e86a750 KafkaBinderConfigurationProperties duplicate bean
The ConfigurationProperties bean provided by the Kafka Streams binder
extends `KafkaBinderConfigurationProperties` used by the Kafka binder.
This creates a conflict when autowiring the bean from the Kafka binder configuration
and prevents an application from having both binders on the classpath.
Change the creation of this ConfigurationProperties bean so that it
avoids creating the bean via EnableConfigurationProperties and autowiring,
and instead creates it directly using `@Bean`. This prevents the conflict.

Resolves #244
Resolves #315
2018-02-22 08:54:00 -05:00
Nikem
addb40bab5 Added spring-boot-configuration-processor to generate @ConfigurationProperties metadata
Resolves #310
2018-02-16 08:46:45 -05:00
Soby Chacko
cdffcd844c Kafka Streams binder docs updates
Refactoring kafka streams docs into a separate module

Resolves #293
2018-02-16 08:35:52 -05:00
Soby Chacko
a5344655cb Kafka Streams binder name changes
- Rename spring-cloud-stream-binder-kstream to spring-cloud-stream-binder-kafka-streams
 - Corresponding changes in maven pom.xml files
 - Rename relevant classes to prefix with KafkaStreams instead of KStream
 - Corresponding package changes from org.springframework.cloud.stream.kstream to
   org.springframework.cloud.stream.kafka.streams
 - Organize all the configuration property classes in a properties package
 - Remove kstream from all the properties exposed by Apache Kafka Streams binder
 - Classes that need not be public are now moved to package access level
 - Test changes
 - More javadocs to classes

Resolves #246
2018-02-14 14:47:34 -05:00
Gary Russell
72e2aeec2a GH-305: Fix producer initiated transactions
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/305
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/308

Requires https://github.com/spring-projects/spring-integration-kafka/pull/196

`ProducerConfigurationMessageHandler` overrode `handleMessageInternal` to support
producer-initiated transactions. Now that the superclass extends `ARPMH`, this
method is no longer overridable.

Spring Integration Kafka's `KafkaProducerMessageHandler` now detects a transactional
`KafkaTemplate` and will automatically start a transaction if needed, so there is no
longer any need to override that method.

No additional tests needed; see `KafkaTransactionTests`.
2018-02-14 10:17:35 -05:00
Soby Chacko
090125fa71 Multiple Input bindings on same StreamListener method
- In Kafka Streams applications, it is essential to have multiple bindings for
    various target types such as KStream, KTable, etc. These changes allow
    more than one type of target binding on a single StreamListener method.
  - Currently support KStream and KTable target types
  - Refactoring the KStream listener orchestrator strategy
  - Input bindings are initiated through a proxy and later on wrapped with the
    real target created from a common Kafka Streams StreamsBuilder
  - Adding tests to verify multiple input bindings for KStream and KTable

  Resolves #298
  Resolves #303
Polishing

Materializing KTables as state stores
Void return types on kafka streams StreamListener methods
Polishing
2018-02-10 12:16:23 -05:00
Oleg Zhurakousky
0fcb972cb6 GH-301 polished POM
Resolves #302
2018-02-10 12:05:28 -05:00
Gary Russell
dbe19776f5 GH-301: Fix Binder Kafka Properties Overrides
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/301

For the `AdminClient`, arbitrary Kafka properties (set via the binder `configuration` property) should
supersede any boot properties. There was already special handling for the bootstrap servers, but
other arbitrary properties were ignored.

Add tests, including a test to verify the proper override of boot's broker list if appropriate.
2018-02-08 16:54:32 -05:00
Soby Chacko
f44923e480 Introducing error handling for Kafka Streams binder
- Add support for KIP-161: streams deserialization exception handlers
 - Provide out of the box LogAndContinue and LogAndFail exception handlers
 - Introduce a new exception handler that sends records in error on
   deserialization to a DLQ
 - Ensure that the exception handlers work in both when native decoding
   is enabled (Kafka is doing deserialization) and when native decoding
   is disabled (Binder is doing deserialization)
 - Enhancements to the native decoding logic in the binder.
 - General refactoring and cleanup
 - Adding more tests

 Resolves #275, #281
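As a configuration sketch, assuming the serdeError binder property selects among the
deserialization exception handlers listed above:

```
# logAndContinue (default), logAndFail, or sendToDlq
spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq
```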

Introducing multiple StreamListener methods in Kafka Streams apps.

 - Multiple stream listener methods now have individual backing
   StreamBuilder and configuration.
 - Register StreamsBuilderFactoryBean programmatically

 Resolves #299, #300
2018-02-07 05:29:47 -06:00
Soby Chacko
b31f902905 Back to 2.0.0.BUILD-SNAPSHOT 2018-02-05 13:56:31 -05:00
Soby Chacko
13cc4e0155 Update for 2.0.0.M4 Release 2018-02-05 13:41:45 -05:00
Soby Chacko
5e0688df72 Doc structure changes
Introduce a top level doc that aggregates various sub docs
2018-02-05 12:59:18 -05:00
Oleg Zhurakousky
abd43cfffa fixed KafkaBinderTest to accommodate DefaultPollableMessageSource's signature change 2018-02-04 13:30:41 -05:00
Oleg Zhurakousky
8057b1f764 GH-295 Fixed tests to comply with double-conversion related changes in core
See https://github.com/spring-cloud/spring-cloud-stream/issues/1130
See https://github.com/spring-cloud/spring-cloud-stream/issues/1071

Resolves #295
2018-02-03 08:06:37 -05:00
Soby Chacko
78ae3d1867 Update dependencies
Spring Kafka to 2.1.2.RELEASE
Spring Integration Kafka to 3.0.1.RELEASE
Test compile fixes from micrometer class changes in KafkaMetrics tests
2018-02-02 13:32:02 -05:00
Soby Chacko
a7378c9132 Addressing PR review comments
Resolves #274
2018-01-24 10:51:37 -05:00
Soby Chacko
9500ccfe46 KStream binder - nativeEncoding and Serde changes 2018-01-24 10:07:31 -05:00
Soby Chacko
3a5aed61c9 Redesigning the branching support in KStream binder
Instead of relying on a property based approach, use proper
multiple output bindings to support the branching feature.
This is accomplished through a combination of using `SendTo`
annotation with multiple outputs and overriding the default
StreamListener method setup orchestration in the binder.
2018-01-24 10:07:31 -05:00
Soby Chacko
86146bfe81 Support for KStream branch and split
Resolves #265

- Ensure that branching can be done and any message conversions
  done on all outbound data to multiple topics
- Refine MessageConversion logic in KStream binder
- Refactor message conversion code into a specific class
- Cleanup and refactoring
- Test to verify branching support
- More refinements in the way nativeEncoding/nativeDecoding logic works in KStream binder
- Doc changes for KStream binder
2018-01-24 10:07:31 -05:00
Gary Russell
f2abb59b19 Fix Polling Consumer Tests
- Add a sleep to the while loop
- Unbind at the end of the test

Resolves #289
2018-01-24 10:01:02 -05:00
Gary Russell
36c39974ad Fix default partition header expression 2018-01-22 13:35:24 -05:00
Gary Russell
16a195a887 GH-277: Add Polled Consumer
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/277

Resolves #282
2018-01-18 12:59:12 -05:00
Oleg Zhurakousky
bc562e3a77 Polished KafkaBinderTests
polished KafkaBinderTests to account for changes in core related to partitioning - 00748985d6

Resolves #272
2018-01-17 15:52:23 -05:00
Gary Russell
39dd68a8ad SCST-GH-1166: Support Producer-initiated tx
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1166

Previously, transactions were only supported if initiated by a consumer.

Also fix races in `KafkaBinderTests.testResume()` (messages sent before subscribing).
2018-01-17 14:27:32 -05:00
Oleg Zhurakousky
8daaed43a9 upgraded to spring-kafka 2.1.1.BUILD-SNAPSHOT 2018-01-12 09:17:29 -05:00
Oleg Zhurakousky
d13d92131c GH-279 added Log4J-1 support 2018-01-09 15:46:26 -05:00
Oleg Zhurakousky
0b4ccdefce Added test dependency on log4j:log4j since it is still required by Kafka
Resolves #262
2018-01-03 19:22:18 -05:00
Gary Russell
ae269e729b GH-261: Support idle container events
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/261

This facilitates pause/resume.
2018-01-03 19:22:18 -05:00
Soby Chacko
9182adcf56 KStream binder outbound keys changes
The serializer can fall back to the default common one if there is not
a more specific one provided.

Resolves #271
2018-01-03 14:42:18 -05:00
Soby Chacko
0b9e211e27 Kafka Streams binder cleanup
Aligning semantics of native encoding with core spring cloud stream.
With this change, if nativeEncoding is set, any message conversion is
skipped by the framework. It is up to the application to ensure that
the data sent on the outbound is of type byte[]; otherwise, it fails.
If the nativeEncoding is false, which is the default, the binder does
the message conversion before sending the data on the outbound.
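Illustrated with the core producer property that controls this behavior:

```
# skip the binder's message conversion; the application (or Kafka Serdes) handles serialization
spring.cloud.stream.bindings.output.producer.useNativeEncoding=true
```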

Corresponding changes in tests

Resolves #267
2017-12-29 16:16:47 -05:00
Gary Russell
a7fe39fef5 Fix Mixed Header Mode Tests
Resolves spring-cloud/spring-cloud-stream#1159

- interceptor was missing

* Fix import order

* Add eclipse import order prefs
2017-12-22 09:48:11 -05:00
Soby Chacko
50b8955dfc Upgrade to Spring Kafka 2.1, Kafka 1.0.0
Resolves #259

- Remove the kafka server dependency from build/binder artifact
- Remove AdminUtilOperation and KafkaAdminUtilOperation
- Rely on AdminClient for provisioning operations
- Update KStream components with the new changes
- Test updates
- Polishing

Add timeout to all the blocking AdminClient calls

Addressing PR review comments

Addressing PR review comments

Addressing PR review

Update SK, SIK to 2.1.0.RELEASE and 3.0.0.RELEASE respectively
Update Kafka Streams class name changes
2017-12-01 11:24:09 -05:00
Oleg Zhurakousky
9a02fa69ac Added Log4J-1 dependency
Added Log4J-1.2.17 dependency since it has been removed by Boot yet it is still required by Kafka
2017-11-28 15:47:36 -05:00
Gary Russell
77cbfe2858 GH-251: Configurable inbound adapter converter
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/251

When using native decoding, the message emitted by the binder has no `id` or `timestamp`
headers. This can cause problems if the first SI endpoint uses a JDBC message store,
for example.

Add the ability to configure whether these headers are populated as well as the ability
to inject a custom message converter.
2017-11-15 15:10:06 -05:00
Gary Russell
5db3ede9c4 SCST-GH-1009: Producer properties dynamic bindings
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1009

Polishing
2017-11-15 15:04:27 -05:00
Soby Chacko
e59cb569a6 Fix failing integ test on CI 2017-11-15 13:50:40 -05:00
Soby Chacko
b0c4f0cfcd Addressing PR review comments
Resolves #258
2017-11-15 06:54:58 -05:00
Soby Chacko
2f6d5df7bc Addressing PR review comment for gh-58
- Add a check for ignoring dlq producer properties set by the user
  if transaction management is enabled
- Polishing
2017-11-14 14:59:28 -05:00
Soby Chacko
66f194dd93 Configure dlq producer properties
Fixes #58
Resolves #257

- add the ability to configure the dlq producer properties
- new property on KafkaConsumerProperties for dlqProducerProperties
- dlq sender refactoring in Kafka binder
- make dlq type raw so that non byte[] key/payloads can be sent
- add new tests for verifying dlq producer properties
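A hedged sketch, assuming the new property exposes a configuration map of native Kafka
producer properties for the DLQ producer (the property key shown is illustrative):

```
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
# native producer properties applied only to the DLQ producer
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.max.request.size=2097152
```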
2017-11-14 08:56:41 -05:00
Soby Chacko
4cb49f9ee4 Fix failing integ test
Fix a test that was failing from an unrelated change made around
content type in the message header
2017-11-13 16:48:04 -05:00
Gary Russell
035cc1a005 GH-228: Enhance DLQ messages with failure info
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/228
2017-11-09 16:15:07 -05:00
Soby Chacko
b3f42fe67d Update to next version: 2.0.0.BUILD-SNAPSHOT
Update spring-cloud-build parent to 2.0.0.BUILD-SNAPSHOT
2017-11-09 09:41:56 -05:00
Soby Chacko
a9f40ac084 Update to release version: 2.0.0.M3
spring-cloud-build parent to 2.0.0.M4
2017-11-09 09:08:31 -05:00
Gary Russell
db02abe531 Kafka 0.11 Binder Properties
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/226

- headerPatterns
- transaction.*
2017-11-08 12:07:56 -05:00
Soby Chacko
f1dc14b5c3 Fixing tests from content type changes in core
Fixes #248

Polishing
2017-11-07 14:21:46 -05:00
Gary Russell
50ce8ca2ba Fix doc typo. 2017-10-31 11:49:37 -04:00
Oleg Zhurakousky
6a312592a4 polishing, fixed style errors and import order
Resolves #237
Resolves #236
2017-10-24 15:25:23 -04:00
Gary Russell
43d786f701 GH-236: Backwards Compatibility
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/236

- Restore the mapped headers configuration for embedding headers
- Configure the header mapper based on the header mode
- On the outbound side, set the mapper to `null` unless the header mode is not null and not `headers`;
  this prevents the channel adapter from setting up headers - for compatibility with < 0.11 brokers
- On the inbound side, set the `BinderHeaders.NATIVE_HEADERS_PRESENT` header if native headers found
- Add a configuration capability allowing the user to provide his/her own header mapper

- Add a test case with 3 producers (embedded, native, and no headers) and a consumer to verify it can
  consume all 3
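A configuration sketch of the compatibility options described above; the values are
illustrative and the header mapper bean name is hypothetical:

```
# embed headers in the payload for 1.x / pre-0.11 broker compatibility
spring.cloud.stream.bindings.output.producer.headerMode=embeddedHeaders
# or supply a custom header mapper bean via the binder property
spring.cloud.stream.kafka.binder.headerMapperBeanName=myKafkaHeaderMapper
```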

Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1107

Polishing - fix else clause when configuring the header mapper.
2017-10-24 15:24:17 -04:00
Laur Aliste
68811cad28 GH-231: reuse Kafka consumer HealthIndicator
- lazily create Consumer in KafkaBinderHealthIndicator.

- polishing
2017-10-19 16:05:28 -04:00
Soby Chacko
4382dab8f8 Resetting spring cloud stream version to 2.0.0.BUILD-SNAPSHOT 2017-10-19 12:43:01 -04:00
Soby Chacko
20a8158a56 Next update version: 2.0.0.BUILD-SNAPSHOT 2017-10-19 10:02:18 -04:00
Soby Chacko
3ad0d7c465 Update release versions
2.0.0.M2

Closes #235
2017-10-19 09:13:09 -04:00
Gary Russell
5b3974c932 GH-224: Documentation for Kafka partitioning
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/224
2017-10-18 18:39:44 -04:00
Soby Chacko
3c7615f7a3 Test structure improvements
Remove the separate test module introduced in 1.x to test different versions of Kafka.
In 2.0, there is a single Kafka version that needs to be tested.
Move all the tests from the test module to the main binder module.
Remove the confluent schema registry integration test from the binder tests as it
will be ported as a sample application. This test currently does the serialization/deserialization twice.

Fix #200
2017-10-18 18:24:03 -04:00
Soby Chacko
8ae0157135 Fix Kafka binder metrics docs
Fixing doc updates missed during rebase
2017-10-16 16:51:48 -04:00
Gary Russell
08658ffa6c GH-223: Deserialization with native encoding
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/223

Add test case; stream fix: https://github.com/spring-cloud/spring-cloud-stream/pull/1095

* Some minor test code polishing

# Conflicts:
#	pom.xml
#	spring-cloud-stream-binder-kafka/src/test/java/org/springframework/cloud/stream/binder/kafka/AbstractKafkaBinderTests.java

* `toString()` for `contentType` header since it is `MimeType` now in SCSt-2.0
2017-10-10 15:25:21 -04:00
Soby Chacko
8d797deaf9 Remove module: spring-cloud-stream-binder-kafka-0.10.2-test 2017-10-05 13:22:31 -04:00
Soby Chacko
561b4b7e73 Checkstyle fixes
Making proxy interface public in KStream binder
General cleanup
2017-10-05 13:16:01 -04:00
Gary Russell
93fdd2ef0f Update to SK 2.0.1.BUILD-SNAPSHOT 2017-10-05 11:54:23 -04:00
Artem Bilan
fd48a1d0eb Fix KafkaMessageChannelBinder errors
https://jenkins.spring.io/blue/organizations/jenkins/spring-cloud-stream-binder-kafka-2.0.x-ci/detail/spring-cloud-stream-binder-kafka-2.0.x-ci/15/pipeline
2017-10-05 11:54:23 -04:00
Soby Chacko
a07a0017bb GH-193: Make 2.0 branch up to date
fixes spring-cloud/spring-cloud-stream-binder-kafka#193

Integrate missed commits and provide some polishing, improvements, and fixes

Remove `resetOffsets` option

Fix #170

Use parent version for spring-cloud-build-tools

Add update version script

Fixes for consumer and producer property propagation

Fix #142 #129 #156 #162

- Remove conditional configuration for Boot 1.4 support
- Filter properties before creating consumer and producer property sets
- Restore `configuration` as Map<String,String> for fixing Boot binding
- Remove 0.9 tests

SCSt-GH-913: Error Handling via ErrorChannel

Relates to spring-cloud/spring-cloud-stream#913

Fixes #162

- configure an ErrorMessageSendingRecoverer to send errors to an error channel, whether or not retry is enabled.

Change Test Binder to use a Fully Wired Integration Context

- logging handler subscribed to errorChannel

Rebase; revert s-k to 1.1.x, Kafka to 0.10.1.1

Remove dependency overrides.

POM structure corrections

- move all intra-project deps to dependency management
- remove redundant overrides of Spring Integration Kafka

Remove reference to deleted module

- `spring-cloud-stream-binder-kafka-test-support` was previously
   removed, but it was still added as an unused dependency to the
   project

Remove duplicate debug statement.

unless you really really want to make sure users see this :)

GH-144: Add Kafka Streams Binder

Fix spring-cloud/spring-cloud-stream-binder-kafka#144

Addressing some PR reviews

Remove Java 8 lambda expressions from KStreamBoundElementFactory

Initial - add support for serdes per binding

Fixing checkstyle issues
test source 1.8

Convert integration tests to use Java 7

Internal refactoring

Remove payload serde code in KStreamBoundElementFactory and reuse it from core

Addressing PR comments

cleanup around payload deserialization

Update to latest serialization logic

Extract common properties class for KStream producer/consumer

Addressing PR review comments

* Remove redundant dependencies for KStream Binder

Documentation for KStream binder

* Documentation for KStream binder

Fix #160

* Addressing PR review comments

* Addressing PR review comments

* Addressing PR review comments

Fixes #181

SCSt-GH-916: Configure Producer Error Channel

Requires: https://github.com/spring-cloud/spring-cloud-stream/pull/1039

Publish send failures to the error channel.

Add docs

Revert to Spring Kafka 1.1.6

GH-62: Remove Tuple Kryo Registrar Wrapper

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/62

No longer needed.

GH-169: Use the Actual Partition Count (Producer)

Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/169

If the configured `partitionCount` is less than the physical partition count on an existing
topic, the binder emits this message:

    The `partitionCount` of the producer for topic partJ.0 is 3, smaller than the actual partition count of 8 of the topic.
    The larger number will be used instead.

However, that is not true; the configured partition count is used.

Override the configured partition count with the actual partition count.

0.11 Binder

Initial Commit

- Transactional Binder

Version Updates

- Headers support

KStreams and 0.11

GH-188: KStream Binder Properties

KStream binder: support class for application level properties

Provide commonly used KStream application properties for convenient access at runtime

Fix #188

Since windowing operations are common in KStream applications, the TimeWindows object is made
available as a first-class bean (using auto-configuration). This bean is only created if the
relevant properties are provided by the user.

Kstream binder: producer default Serde changes

Change the way the default Serde classes are selected for key and value
in producer when only one of those is provided by the user.

Fix #190

KStream binder cleanup,
merge cleanup

re-update kafka version

2.0 related changes

Fix tests
Upgrade Kstream tests

converting anonymous classes to lambda expressions

Renaming Kafka-11 qualifier from test module
Refactoring test class names

cleanup
adding .jdk8 files

Fix KafkaBinderMetrics in 2.0

Fix #199

Addressing PR review comments

Addressing PR review comments
2017-10-05 11:54:23 -04:00
Marius Bogoevici
62b40b852f Set version to 2.0.0.BUILD-SNAPSHOT 2017-10-05 11:19:26 -04:00
Marius Bogoevici
c396c5c756 Release 2.0.0.M1 2017-10-05 11:18:34 -04:00
Marius Bogoevici
b20f4a0e08 Re-add Spring Kafka version 2017-10-05 11:17:55 -04:00
Marius Bogoevici
77f4bc3fb8 Use Spring Boot and dependencies provided by Spring Cloud Build 2017-10-05 11:17:55 -04:00
Marius Bogoevici
2aa8e9eefa Update version to 2.0.0.BUILD-SNAPSHOT
- Update Spring Boot to version 2.0.0.BUILD-SNAPSHOT
- Set Kafka version to 0.10.2
- Remove tests for Kafka 0.9 and 0.10.0
2017-10-05 11:17:55 -04:00
Gary Russell
e3460d6fce Update POMs to 1.3.1.BUILD-SNAPSHOT 2017-09-29 16:25:08 -04:00
Gary Russell
29bb8513c0 Update POMs to 1.3.0.RELEASE 2017-09-29 15:57:53 -04:00
Gary Russell
69227166c7 GH-215: Add timeout to health indicator
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/215

* Shutdown the executor.

* Polishing - PR Comments

* Re-interrupt thread.

* More Polishing
2017-09-29 14:47:45 -04:00
Gary Russell
4ff4507741 GH-206: Close Consumer/Producer in provisioning
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/206

Close the consumer and producer after retrieving the current partition count.

**Cherry pick/back port to 0.11 and 1.2.x, 2.0.x**

* Destroy the Producer Factory
2017-09-28 10:51:15 -04:00
Soby Chacko
f2e1b63460 GH-201: Move metrics doc to the main overview doc
Fixes #201
2017-09-27 12:57:19 -04:00
Soby Chacko
73f1ed9523 Separate each sentence in metrics docs 2017-09-26 20:02:05 -04:00
Soby Chacko
dc7662e17d Add missing documentation for windowing properties
Cleanup test

Fix #196
2017-09-26 16:31:42 -04:00
Gary Russell
b76fff31b8 Back to 1.3.0.BUILD-SNAPSHOT 2017-09-13 11:10:07 -04:00
Gary Russell
1f4f0c3858 Update POMs to 1.3.0.RC1; s-c-build to 1.3.5
Also SIK 2.1.2.RELEASE
2017-09-13 09:39:36 -04:00
Soby Chacko
1aecd02404 Kstream binder: producer default Serde changes
Change the way the default Serde classes are selected for key and value
in producer when only one of those is provided by the user.

Fix #190
2017-09-12 12:40:42 -04:00
Soby Chacko
6485bd2abd GH-188: KStream Binder Properties
KStream binder: support class for application level properties

Provide commonly used KStream application properties for convenient access at runtime

Fix #188

Since windowing operations are common in KStream applications, the TimeWindows object is made
available as a first-class bean (using auto-configuration). This bean is only created if the
relevant properties are provided by the user.
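A sketch under the assumption that the windowing properties live under the KStream binder's
timeWindow group (property names illustrative for this KStream-era binder):

```
# the TimeWindows bean is only created when these are provided (values in milliseconds)
spring.cloud.stream.kstream.timeWindow.length=60000
spring.cloud.stream.kstream.timeWindow.advanceBy=10000
```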
2017-09-12 12:40:00 -04:00
191 changed files with 12788 additions and 5616 deletions

.jdk8 (new file, 0 changes)


@@ -21,7 +21,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<url>http://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -29,7 +29,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<url>http://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -37,7 +37,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
<url>http://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -47,7 +47,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<url>http://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -55,7 +55,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<url>http://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>


@@ -1,6 +1,6 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
@@ -192,7 +192,7 @@
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,

mvnw (vendored, 2 changes)

@@ -8,7 +8,7 @@
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an

mvnw.cmd (vendored, 2 changes)

@@ -7,7 +7,7 @@
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM https://www.apache.org/licenses/LICENSE-2.0
@REM http://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an

pom.xml (174 changes)

@@ -1,42 +1,40 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka11-parent</artifactId>
<version>1.3.1.BUILD-SNAPSHOT</version>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M3</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>1.3.5.RELEASE</version>
<version>2.1.0.M2</version>
<relativePath />
</parent>
<properties>
<java.version>1.7</java.version>
<kafka.version>0.11.0.0</kafka.version>
<spring-kafka.version>1.3.2.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>2.3.0.RELEASE</spring-integration-kafka.version>
<spring-cloud-stream.version>1.3.1.BUILD-SNAPSHOT</spring-cloud-stream.version>
<spring-cloud-build.version>1.3.5.RELEASE</spring-cloud-build.version>
<java.version>1.8</java.version>
<spring-kafka.version>2.2.0.M3</spring-kafka.version>
<spring-integration-kafka.version>3.1.0.M1</spring-integration-kafka.version>
<kafka.version>2.0.0</kafka.version>
<spring-cloud-stream.version>2.1.0.M3</spring-cloud-stream.version>
</properties>
<modules>
<module>spring-cloud-stream-binder-kafka11</module>
<module>spring-cloud-starter-stream-kafka11</module>
<module>spring-cloud-stream-binder-kafka11-docs</module>
<module>spring-cloud-stream-binder-kafka-0.11-test</module>
<module>spring-cloud-stream-binder-kafka11-core</module>
<module>spring-cloud-stream-binder-kstream11</module>
<module>spring-cloud-stream-binder-kafka</module>
<module>spring-cloud-starter-stream-kafka</module>
<module>spring-cloud-stream-binder-kafka-docs</module>
<module>spring-cloud-stream-binder-kafka-core</module>
<module>spring-cloud-stream-binder-kafka-streams</module>
</modules>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11-core</artifactId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11</artifactId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
@@ -44,14 +42,56 @@
<artifactId>spring-cloud-stream</artifactId>
<version>${spring-cloud-stream.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>${spring-integration-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-codec</artifactId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<version>${spring-cloud-stream.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
<scope>test</scope>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
@@ -68,54 +108,25 @@
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>${spring-integration-kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.apache.avro</groupId>
<artifactId>avro-compiler</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<version>${spring-cloud-stream.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<scope>test</scope>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>jline</groupId>
<artifactId>jline</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
@@ -129,18 +140,6 @@
<artifactId>maven-antrun-plugin</artifactId>
<version>1.7</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<version>2.17</version>
<dependencies>
<dependency>
<groupId>com.puppycrawl.tools</groupId>
<artifactId>checkstyle</artifactId>
<version>7.1</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
@@ -164,26 +163,15 @@
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build-tools</artifactId>
<version>${spring-cloud-build.version}</version>
<artifactId>spring-cloud-stream-tools</artifactId>
<version>${spring-cloud-stream.version}</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>checkstyle-validation</id>
<phase>validate</phase>
<configuration>
<configLocation>checkstyle.xml</configLocation>
<encoding>UTF-8</encoding>
<consoleOutput>true</consoleOutput>
<failsOnError>true</failsOnError>
<includeTestSourceDirectory>true</includeTestSourceDirectory>
</configuration>
<goals>
<goal>check</goal>
</goals>
</execution>
</executions>
<configuration>
<configLocation>checkstyle.xml</configLocation>
<headerLocation>checkstyle-header.txt</headerLocation>
<includeTestSourceDirectory>true</includeTestSourceDirectory>
</configuration>
</plugin>
</plugins>
</build>
@@ -195,7 +183,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<url>http://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -206,7 +194,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<url>http://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -214,7 +202,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/release</url>
<url>http://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -224,7 +212,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/libs-snapshot-local</url>
<url>http://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -235,7 +223,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/libs-milestone-local</url>
<url>http://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -243,7 +231,7 @@
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>https://repo.spring.io/libs-release-local</url>
<url>http://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>


@@ -1,17 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11-parent</artifactId>
<version>1.3.1.BUILD-SNAPSHOT</version>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M3</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka11</artifactId>
<description>Spring Cloud Starter Stream Kafka for the 0.11.x.x client</description>
<url>https://projects.spring.io/spring-cloud</url>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>
<url>http://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>https://www.spring.io</url>
<url>http://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>
@@ -19,7 +19,7 @@
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11</artifactId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>


@@ -1,118 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11-parent</artifactId>
<version>1.3.1.BUILD-SNAPSHOT</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-0.11-test</artifactId>
<description>Spring Cloud Stream Kafka Binder 0.11 Tests</description>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>https://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>
<kafka.version>0.11.0.0</kafka.version>
<spring-kafka.version>1.3.0.RELEASE</spring-kafka.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11</artifactId>
<version>${project.version}</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
<version>${spring-cloud-stream.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
<version>3.2.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-schema-registry</artifactId>
<version>3.2.2</version>
<scope>test</scope>
</dependency>
</dependencies>
<repositories>
<repository>
<id>confluent</id>
<url>https://packages.confluent.io/maven/</url>
</repository>
</repositories>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.2</version>
<executions>
<execution>
<goals>
<goal>test-jar</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

View File

@@ -1,298 +0,0 @@
/*
* Copyright 2014-2017 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig;
import io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;
import org.I0Itec.zkclient.ZkClient;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.assertj.core.api.Assertions;
import org.eclipse.jetty.server.Server;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderHeaders;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.Spy;
import org.springframework.cloud.stream.binder.kafka.admin.Kafka10AdminUtilsOperation;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
import static org.junit.Assert.assertTrue;
/**
* Integration tests for the {@link KafkaMessageChannelBinder}.
*
* This test specifically tests for the 0.11.x.x version of Kafka.
*
* @author Eric Bottard
* @author Marius Bogoevici
* @author Mark Fisher
* @author Ilayaperumal Gopinathan
* @author Gary Russell
*/
public class Kafka_0_11_BinderTests extends KafkaBinderTests {
private final String CLASS_UNDER_TEST_NAME = KafkaMessageChannelBinder.class.getSimpleName();
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
private Kafka10TestBinder binder;
private final Kafka10AdminUtilsOperation adminUtilsOperation = new Kafka10AdminUtilsOperation();
@Override
protected void binderBindUnbindLatency() throws InterruptedException {
Thread.sleep(500);
}
@Override
protected Kafka10TestBinder getBinder() {
if (binder == null) {
KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();
binderConfiguration.setHeaders("dlqTestHeader");
binder = new Kafka10TestBinder(binderConfiguration);
}
return binder;
}
@Override
protected KafkaBinderConfigurationProperties createConfigurationProperties() {
KafkaBinderConfigurationProperties binderConfiguration = new KafkaBinderConfigurationProperties();
BrokerAddress[] brokerAddresses = embeddedKafka.getBrokerAddresses();
List<String> bAddresses = new ArrayList<>();
for (BrokerAddress bAddress : brokerAddresses) {
bAddresses.add(bAddress.toString());
}
String[] foo = new String[bAddresses.size()];
binderConfiguration.setBrokers(bAddresses.toArray(foo));
binderConfiguration.setZkNodes(embeddedKafka.getZookeeperConnectionString());
return binderConfiguration;
}
@Override
protected int partitionSize(String topic) {
return consumerFactory().createConsumer().partitionsFor(topic).size();
}
@Override
protected ZkUtils getZkUtils(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {
final ZkClient zkClient = new ZkClient(kafkaBinderConfigurationProperties.getZkConnectionString(),
kafkaBinderConfigurationProperties.getZkSessionTimeout(), kafkaBinderConfigurationProperties.getZkConnectionTimeout(),
ZKStringSerializer$.MODULE$);
return new ZkUtils(zkClient, null, false);
}
@Override
protected void invokeCreateTopic(ZkUtils zkUtils, String topic, int partitions, int replicationFactor, Properties topicConfig) {
adminUtilsOperation.invokeCreateTopic(zkUtils, topic, partitions, replicationFactor, new Properties());
}
@Override
protected int invokePartitionSize(String topic, ZkUtils zkUtils) {
return adminUtilsOperation.partitionSize(topic, zkUtils);
}
@Override
public String getKafkaOffsetHeaderKey() {
return KafkaHeaders.OFFSET;
}
@Override
protected Binder getBinder(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties) {
return new Kafka10TestBinder(kafkaBinderConfigurationProperties);
}
@Before
public void init() {
String multiplier = System.getenv("KAFKA_TIMEOUT_MULTIPLIER");
if (multiplier != null) {
timeoutMultiplier = Double.parseDouble(multiplier);
}
}
@Override
protected boolean usesExplicitRouting() {
return false;
}
@Override
protected String getClassUnderTestName() {
return CLASS_UNDER_TEST_NAME;
}
@Override
public Spy spyOn(final String name) {
throw new UnsupportedOperationException("'spyOn' is not used by Kafka tests");
}
private ConsumerFactory<byte[], byte[]> consumerFactory() {
Map<String, Object> props = new HashMap<>();
KafkaBinderConfigurationProperties configurationProperties = createConfigurationProperties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, configurationProperties.getKafkaConnectionString());
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "TEST-CONSUMER-GROUP");
Deserializer<byte[]> valueDecoder = new ByteArrayDeserializer();
Deserializer<byte[]> keyDecoder = new ByteArrayDeserializer();
return new DefaultKafkaConsumerFactory<>(props, keyDecoder, valueDecoder);
}
@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testTrustedPackages() throws Exception {
Binder binder = getBinder();
BindingProperties producerBindingProperties = createProducerBindingProperties(createProducerProperties());
DirectChannel moduleOutputChannel = createBindableChannel("output", producerBindingProperties);
QueueChannel moduleInputChannel = new QueueChannel();
Binding<MessageChannel> producerBinding = binder.bindProducer("bar.0", moduleOutputChannel,
producerBindingProperties.getProducer());
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.getExtension().setTrustedPackages(new String[]{"org.springframework.util"});
Binding<MessageChannel> consumerBinding = binder.bindConsumer("bar.0",
"testSendAndReceiveNoOriginalContentType", moduleInputChannel, consumerProperties);
binderBindUnbindLatency();
Message<?> message = org.springframework.integration.support.MessageBuilder.withPayload("foo")
.setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.TEXT_PLAIN).setHeader("foo", MimeTypeUtils.TEXT_PLAIN)
.build();
moduleOutputChannel.send(message);
Message<?> inbound = receive(moduleInputChannel);
Assertions.assertThat(inbound).isNotNull();
Assertions.assertThat(inbound.getPayload()).isEqualTo("foo");
Assertions.assertThat(inbound.getHeaders().get(BinderHeaders.BINDER_ORIGINAL_CONTENT_TYPE)).isNull();
Assertions.assertThat(inbound.getHeaders().get(MessageHeaders.CONTENT_TYPE))
.isEqualTo(MimeTypeUtils.TEXT_PLAIN_VALUE);
Assertions.assertThat(inbound.getHeaders().get("foo")).isInstanceOf(MimeType.class);
MimeType actual = (MimeType) inbound.getHeaders().get("foo");
Assertions.assertThat(actual).isEqualTo(MimeTypeUtils.TEXT_PLAIN);
producerBinding.unbind();
consumerBinding.unbind();
}
class Foo{}
@Test
@SuppressWarnings("unchecked")
public void testCustomAvroSerialization() throws Exception {
KafkaBinderConfigurationProperties configurationProperties = createConfigurationProperties();
final ZkClient zkClient = new ZkClient(configurationProperties.getZkConnectionString(),
configurationProperties.getZkSessionTimeout(), configurationProperties.getZkConnectionTimeout(),
ZKStringSerializer$.MODULE$);
final ZkUtils zkUtils = new ZkUtils(zkClient, null, false);
Map<String, Object> schemaRegistryProps = new HashMap<>();
schemaRegistryProps.put("kafkastore.connection.url", configurationProperties.getZkConnectionString());
schemaRegistryProps.put("listeners", "https://0.0.0.0:8082");
schemaRegistryProps.put("port", "8082");
schemaRegistryProps.put("kafkastore.topic", "_schemas");
SchemaRegistryConfig config = new SchemaRegistryConfig(schemaRegistryProps);
SchemaRegistryRestApplication app = new SchemaRegistryRestApplication(config);
Server server = app.createServer();
server.start();
long endTime = System.currentTimeMillis() + 5000;
while(true) {
if (server.isRunning()) {
break;
}
else if (System.currentTimeMillis() > endTime) {
Assertions.fail("Kafka Schema Registry Server failed to start");
}
}
User1 firstOutboundFoo = new User1();
String userName1 = "foo-name" + UUID.randomUUID().toString();
String favColor1 = "foo-color" + UUID.randomUUID().toString();
firstOutboundFoo.setName(userName1);
firstOutboundFoo.setFavoriteColor(favColor1);
Message<?> message = MessageBuilder.withPayload(firstOutboundFoo).build();
SubscribableChannel moduleOutputChannel = new DirectChannel();
String testTopicName = "existing" + System.currentTimeMillis();
invokeCreateTopic(zkUtils, testTopicName, 6, 1, new Properties());
configurationProperties.setAutoAddPartitions(true);
Binder binder = getBinder(configurationProperties);
QueueChannel moduleInputChannel = new QueueChannel();
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.getExtension().getConfiguration().put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
producerProperties.getExtension().getConfiguration().put("schema.registry.url", "http://localhost:8082");
producerProperties.setUseNativeEncoding(true);
Binding<MessageChannel> producerBinding = binder.bindProducer(testTopicName, moduleOutputChannel, producerProperties);
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
consumerProperties.getExtension().getConfiguration().put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
consumerProperties.getExtension().getConfiguration().put("schema.registry.url", "http://localhost:8082");
Binding<MessageChannel> consumerBinding = binder.bindConsumer(testTopicName, "test", moduleInputChannel, consumerProperties);
// Let the consumer actually bind to the producer before sending a msg
binderBindUnbindLatency();
moduleOutputChannel.send(message);
Message<?> inbound = receive(moduleInputChannel);
Assertions.assertThat(inbound).isNotNull();
assertTrue(message.getPayload() instanceof User1);
User1 receivedUser = (User1) message.getPayload();
Assertions.assertThat(receivedUser.getName()).isEqualTo(userName1);
Assertions.assertThat(receivedUser.getFavoriteColor()).isEqualTo(favColor1);
producerBinding.unbind();
consumerBinding.unbind();
}
@Override
public void testSendAndReceiveWithExplicitConsumerGroupWithRawMode() {
// raw mode no longer needed
}
@Override
public void testSendAndReceiveWithRawModeAndStringPayload() {
// raw mode no longer needed
}
}

View File

@@ -1,85 +0,0 @@
/*
* Copyright 2016 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.reflect.Nullable;
import org.apache.avro.specific.SpecificRecordBase;
import org.springframework.core.io.ClassPathResource;
/**
* @author Marius Bogoevici
* @author Ilayaperumal Gopinathan
*/
public class User1 extends SpecificRecordBase {
@Nullable
private String name;
@Nullable
private String favoriteColor;
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public String getFavoriteColor() {
return this.favoriteColor;
}
public void setFavoriteColor(String favoriteColor) {
this.favoriteColor = favoriteColor;
}
@Override
public Schema getSchema() {
try {
return new Schema.Parser().parse(new ClassPathResource("schemas/users_v1.schema").getInputStream());
}
catch (IOException e) {
throw new IllegalStateException(e);
}
}
@Override
public Object get(int i) {
if (i == 0) {
return getName().toString();
}
if (i == 1) {
return getFavoriteColor().toString();
}
return null;
}
@Override
public void put(int i, Object o) {
if (i == 0) {
setName((String) o);
}
if (i == 1) {
setFavoriteColor((String) o);
}
}
}

View File

@@ -0,0 +1,5 @@
eclipse.preferences.version=1
org.eclipse.jdt.ui.ignorelowercasenames=true
org.eclipse.jdt.ui.importorder=java;javax;com;org;org.springframework;ch.qos;\#;
org.eclipse.jdt.ui.ondemandthreshold=99
org.eclipse.jdt.ui.staticondemandthreshold=99

View File

@@ -0,0 +1,49 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M3</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>
<url>http://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
</organization>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -18,6 +18,7 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import org.springframework.util.Assert;

View File

@@ -0,0 +1,62 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Properties for configuring topics.
*
* @author Gary Russell
* @since 2.0
*
*/
public class KafkaAdminProperties {
private Short replicationFactor;
private Map<Integer, List<Integer>> replicasAssignments = new HashMap<>();
private Map<String, String> configuration = new HashMap<>();
public Short getReplicationFactor() {
return this.replicationFactor;
}
public void setReplicationFactor(Short replicationFactor) {
this.replicationFactor = replicationFactor;
}
public Map<Integer, List<Integer>> getReplicasAssignments() {
return this.replicasAssignments;
}
public void setReplicasAssignments(Map<Integer, List<Integer>> replicasAssignments) {
this.replicasAssignments = replicasAssignments;
}
public Map<String, String> getConfiguration() {
return this.configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
}
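The admin properties above are plain holders that the provisioner reads when creating or expanding topics. A minimal sketch of attaching them to a producer binding's extended properties, using only the getters and setters visible in this diff (the wrapper class AdminPropertiesSketch and the sample values are illustrative, not part of the change set):

import org.springframework.cloud.stream.binder.kafka.properties.KafkaAdminProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;

public class AdminPropertiesSketch {
    public static void main(String[] args) {
        KafkaAdminProperties admin = new KafkaAdminProperties();
        // Replication factor and per-topic configuration are consulted when the topic is provisioned.
        admin.setReplicationFactor((short) 3);
        admin.getConfiguration().put("retention.ms", "86400000");
        KafkaProducerProperties producer = new KafkaProducerProperties();
        producer.setAdmin(admin);
        System.out.println(producer.getAdmin().getReplicationFactor()); // 3
    }
}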

View File

@@ -0,0 +1,783 @@
/*
* Copyright 2015-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.beans.BeansException;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;
import org.springframework.cloud.stream.config.MergableProperties;
import org.springframework.expression.Expression;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* @author David Turanski
* @author Ilayaperumal Gopinathan
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
* @author Rafal Zukowski
*/
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.binder")
public class KafkaBinderConfigurationProperties {
private static final String DEFAULT_KAFKA_CONNECTION_STRING = "localhost:9092";
private final Transaction transaction = new Transaction();
private final KafkaProperties kafkaProperties;
private String[] zkNodes = new String[] { "localhost" };
/**
* Arbitrary kafka properties that apply to both producers and consumers.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Arbitrary kafka consumer properties.
*/
private Map<String, String> consumerProperties = new HashMap<>();
/**
* Arbitrary kafka producer properties.
*/
private Map<String, String> producerProperties = new HashMap<>();
private String defaultZkPort = "2181";
private String[] brokers = new String[] { "localhost" };
private String defaultBrokerPort = "9092";
private String[] headers = new String[] {};
private int offsetUpdateTimeWindow = 10000;
private int offsetUpdateCount;
private int offsetUpdateShutdownTimeout = 2000;
private int maxWait = 100;
private boolean autoCreateTopics = true;
private boolean autoAddPartitions;
private int socketBufferSize = 2097152;
/**
* ZK session timeout in milliseconds.
*/
private int zkSessionTimeout = 10000;
/**
* ZK Connection timeout in milliseconds.
*/
private int zkConnectionTimeout = 10000;
private String requiredAcks = "1";
private short replicationFactor = 1;
private int fetchSize = 1024 * 1024;
private int minPartitionCount = 1;
private int queueSize = 8192;
/**
* Time to wait to get partition information in seconds; default 60.
*/
private int healthTimeout = 60;
private JaasLoginModuleConfiguration jaas;
/**
* The bean name of a custom header mapper to use instead of a {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
*/
private String headerMapperBeanName;
public KafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
Assert.notNull(kafkaProperties, "'kafkaProperties' cannot be null");
this.kafkaProperties = kafkaProperties;
}
public KafkaProperties getKafkaProperties() {
return kafkaProperties;
}
public Transaction getTransaction() {
return this.transaction;
}
/**
* No longer used.
* @return the connection String
* @deprecated connection to zookeeper is no longer necessary
*/
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
@Deprecated
public String getZkConnectionString() {
return toConnectionString(this.zkNodes, this.defaultZkPort);
}
public String getKafkaConnectionString() {
return toConnectionString(this.brokers, this.defaultBrokerPort);
}
public String getDefaultKafkaConnectionString() {
return DEFAULT_KAFKA_CONNECTION_STRING;
}
public String[] getHeaders() {
return this.headers;
}
/**
* No longer used.
* @return the window.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateTimeWindow() {
return this.offsetUpdateTimeWindow;
}
/**
* No longer used.
* @return the count.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateCount() {
return this.offsetUpdateCount;
}
/**
* No longer used.
* @return the timeout.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getOffsetUpdateShutdownTimeout() {
return this.offsetUpdateShutdownTimeout;
}
/**
* Zookeeper nodes.
* @return the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public String[] getZkNodes() {
return this.zkNodes;
}
/**
* Zookeeper nodes.
* @param zkNodes the nodes.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkNodes(String... zkNodes) {
this.zkNodes = zkNodes;
}
/**
* Zookeeper port.
* @param defaultZkPort the port.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setDefaultZkPort(String defaultZkPort) {
this.defaultZkPort = defaultZkPort;
}
public String[] getBrokers() {
return this.brokers;
}
public void setBrokers(String... brokers) {
this.brokers = brokers;
}
public void setDefaultBrokerPort(String defaultBrokerPort) {
this.defaultBrokerPort = defaultBrokerPort;
}
public void setHeaders(String... headers) {
this.headers = headers;
}
/**
* No longer used.
* @param offsetUpdateTimeWindow the window.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateTimeWindow(int offsetUpdateTimeWindow) {
this.offsetUpdateTimeWindow = offsetUpdateTimeWindow;
}
/**
* No longer used.
* @param offsetUpdateCount the count.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateCount(int offsetUpdateCount) {
this.offsetUpdateCount = offsetUpdateCount;
}
/**
* No longer used.
* @param offsetUpdateShutdownTimeout the timeout.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setOffsetUpdateShutdownTimeout(int offsetUpdateShutdownTimeout) {
this.offsetUpdateShutdownTimeout = offsetUpdateShutdownTimeout;
}
/**
* Zookeeper session timeout.
* @return the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkSessionTimeout() {
return this.zkSessionTimeout;
}
/**
* Zookeeper session timeout.
* @param zkSessionTimeout the timeout
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkSessionTimeout(int zkSessionTimeout) {
this.zkSessionTimeout = zkSessionTimeout;
}
/**
* Zookeeper connection timeout.
* @return the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public int getZkConnectionTimeout() {
return this.zkConnectionTimeout;
}
/**
* Zookeeper connection timeout.
* @param zkConnectionTimeout the timeout.
* @deprecated connection to zookeeper is no longer necessary
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkConnectionTimeout(int zkConnectionTimeout) {
this.zkConnectionTimeout = zkConnectionTimeout;
}
/**
* Converts an array of host values to a comma-separated String.
*
* It will append the default port value, if not already specified.
*/
private String toConnectionString(String[] hosts, String defaultPort) {
String[] fullyFormattedHosts = new String[hosts.length];
for (int i = 0; i < hosts.length; i++) {
if (hosts[i].contains(":") || StringUtils.isEmpty(defaultPort)) {
fullyFormattedHosts[i] = hosts[i];
}
else {
fullyFormattedHosts[i] = hosts[i] + ":" + defaultPort;
}
}
return StringUtils.arrayToCommaDelimitedString(fullyFormattedHosts);
}
/**
* No longer used.
* @return the wait.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getMaxWait() {
return this.maxWait;
}
/**
* No longer used.
* @param maxWait the wait.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setMaxWait(int maxWait) {
this.maxWait = maxWait;
}
public String getRequiredAcks() {
return this.requiredAcks;
}
public void setRequiredAcks(int requiredAcks) {
this.requiredAcks = String.valueOf(requiredAcks);
}
public void setRequiredAcks(String requiredAcks) {
this.requiredAcks = requiredAcks;
}
public short getReplicationFactor() {
return this.replicationFactor;
}
public void setReplicationFactor(short replicationFactor) {
this.replicationFactor = replicationFactor;
}
/**
* No longer used.
* @return the size.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getFetchSize() {
return this.fetchSize;
}
/**
* No longer used.
* @param fetchSize the size.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setFetchSize(int fetchSize) {
this.fetchSize = fetchSize;
}
public int getMinPartitionCount() {
return this.minPartitionCount;
}
public void setMinPartitionCount(int minPartitionCount) {
this.minPartitionCount = minPartitionCount;
}
public int getHealthTimeout() {
return this.healthTimeout;
}
public void setHealthTimeout(int healthTimeout) {
this.healthTimeout = healthTimeout;
}
/**
* No longer used.
* @return the queue size.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public int getQueueSize() {
return this.queueSize;
}
/**
* No longer used.
* @param queueSize the queue size.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0")
public void setQueueSize(int queueSize) {
this.queueSize = queueSize;
}
public boolean isAutoCreateTopics() {
return this.autoCreateTopics;
}
public void setAutoCreateTopics(boolean autoCreateTopics) {
this.autoCreateTopics = autoCreateTopics;
}
public boolean isAutoAddPartitions() {
return this.autoAddPartitions;
}
public void setAutoAddPartitions(boolean autoAddPartitions) {
this.autoAddPartitions = autoAddPartitions;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @return the size.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public int getSocketBufferSize() {
return this.socketBufferSize;
}
/**
* No longer used; set properties such as this via {@link #getConfiguration()
* configuration}.
* @param socketBufferSize the size.
* @deprecated
*/
@Deprecated
@DeprecatedConfigurationProperty(reason = "Not used since 2.0, set properties such as this via 'configuration'")
public void setSocketBufferSize(int socketBufferSize) {
this.socketBufferSize = socketBufferSize;
}
public Map<String, String> getConfiguration() {
return configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
public Map<String, String> getConsumerProperties() {
return this.consumerProperties;
}
public void setConsumerProperties(Map<String, String> consumerProperties) {
Assert.notNull(consumerProperties, "'consumerProperties' cannot be null");
this.consumerProperties = consumerProperties;
}
public Map<String, String> getProducerProperties() {
return this.producerProperties;
}
public void setProducerProperties(Map<String, String> producerProperties) {
Assert.notNull(producerProperties, "'producerProperties' cannot be null");
this.producerProperties = producerProperties;
}
/**
* Merge boot consumer properties, general properties from
* {@link #setConfiguration(Map)} that apply to consumers, properties from
* {@link #setConsumerProperties(Map)}, in that order.
* @return the merged properties.
*/
public Map<String, Object> mergedConsumerConfiguration() {
Map<String, Object> consumerConfiguration = new HashMap<>();
consumerConfiguration.putAll(this.kafkaProperties.buildConsumerProperties());
// Copy configured binder properties that apply to consumers
for (Map.Entry<String, String> configurationEntry : this.configuration.entrySet()) {
if (ConsumerConfig.configNames().contains(configurationEntry.getKey())) {
consumerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
}
}
consumerConfiguration.putAll(this.consumerProperties);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(consumerConfiguration, ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
/**
* Merge boot producer properties, general properties from
* {@link #setConfiguration(Map)} that apply to producers, properties from
* {@link #setProducerProperties(Map)}, in that order.
* @return the merged properties.
*/
public Map<String, Object> mergedProducerConfiguration() {
Map<String, Object> producerConfiguration = new HashMap<>();
producerConfiguration.putAll(this.kafkaProperties.buildProducerProperties());
// Copy configured binder properties that apply to producers
for (Map.Entry<String, String> configurationEntry : configuration.entrySet()) {
if (ProducerConfig.configNames().contains(configurationEntry.getKey())) {
producerConfiguration.put(configurationEntry.getKey(), configurationEntry.getValue());
}
}
producerConfiguration.putAll(this.producerProperties);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(producerConfiguration, ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
private Map<String, Object> getConfigurationWithBootstrapServer(Map<String, Object> configuration, String bootstrapServersConfig) {
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig))) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
else {
Object boostrapServersConfig = configuration.get(bootstrapServersConfig);
if (boostrapServersConfig instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) configuration
.get(bootstrapServersConfig);
if (bootStrapServers.size() == 1 && bootStrapServers.get(0).equals("localhost:9092")) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
}
}
return Collections.unmodifiableMap(configuration);
}
public JaasLoginModuleConfiguration getJaas() {
return this.jaas;
}
public void setJaas(JaasLoginModuleConfiguration jaas) {
this.jaas = jaas;
}
public String getHeaderMapperBeanName() {
return this.headerMapperBeanName;
}
public void setHeaderMapperBeanName(String headerMapperBeanName) {
this.headerMapperBeanName = headerMapperBeanName;
}
public static class Transaction {
private final CombinedProducerProperties producer = new CombinedProducerProperties();
private String transactionIdPrefix;
public String getTransactionIdPrefix() {
return this.transactionIdPrefix;
}
public void setTransactionIdPrefix(String transactionIdPrefix) {
this.transactionIdPrefix = transactionIdPrefix;
}
public CombinedProducerProperties getProducer() {
return this.producer;
}
}
/**
* A combination of {@link ProducerProperties} and {@link KafkaProducerProperties}
* so that common and kafka-specific properties can be set for the transactional
* producer.
* @since 2.1
*/
public static class CombinedProducerProperties {
private final ProducerProperties producerProperties = new ProducerProperties();
private final KafkaProducerProperties kafkaProducerProperties = new KafkaProducerProperties();
public void merge(MergableProperties mergable) {
this.producerProperties.merge(mergable);
}
public Expression getPartitionKeyExpression() {
return this.producerProperties.getPartitionKeyExpression();
}
public void setPartitionKeyExpression(Expression partitionKeyExpression) {
this.producerProperties.setPartitionKeyExpression(partitionKeyExpression);
}
public boolean isPartitioned() {
return this.producerProperties.isPartitioned();
}
public void copyProperties(Object source, Object target) throws BeansException {
this.producerProperties.copyProperties(source, target);
}
public Expression getPartitionSelectorExpression() {
return this.producerProperties.getPartitionSelectorExpression();
}
public void setPartitionSelectorExpression(Expression partitionSelectorExpression) {
this.producerProperties.setPartitionSelectorExpression(partitionSelectorExpression);
}
public @Min(value = 1, message = "Partition count should be greater than zero.") int getPartitionCount() {
return this.producerProperties.getPartitionCount();
}
public void setPartitionCount(int partitionCount) {
this.producerProperties.setPartitionCount(partitionCount);
}
public String[] getRequiredGroups() {
return this.producerProperties.getRequiredGroups();
}
public void setRequiredGroups(String... requiredGroups) {
this.producerProperties.setRequiredGroups(requiredGroups);
}
public @AssertTrue(message = "Partition key expression and partition key extractor class properties are mutually exclusive.") boolean isValidPartitionKeyProperty() {
return this.producerProperties.isValidPartitionKeyProperty();
}
public @AssertTrue(message = "Partition selector class and partition selector expression properties are mutually exclusive.") boolean isValidPartitionSelectorProperty() {
return this.producerProperties.isValidPartitionSelectorProperty();
}
public HeaderMode getHeaderMode() {
return this.producerProperties.getHeaderMode();
}
public void setHeaderMode(HeaderMode headerMode) {
this.producerProperties.setHeaderMode(headerMode);
}
public boolean isUseNativeEncoding() {
return this.producerProperties.isUseNativeEncoding();
}
public void setUseNativeEncoding(boolean useNativeEncoding) {
this.producerProperties.setUseNativeEncoding(useNativeEncoding);
}
public boolean isErrorChannelEnabled() {
return this.producerProperties.isErrorChannelEnabled();
}
public void setErrorChannelEnabled(boolean errorChannelEnabled) {
this.producerProperties.setErrorChannelEnabled(errorChannelEnabled);
}
public String getPartitionKeyExtractorName() {
return this.producerProperties.getPartitionKeyExtractorName();
}
public void setPartitionKeyExtractorName(String partitionKeyExtractorName) {
this.producerProperties.setPartitionKeyExtractorName(partitionKeyExtractorName);
}
public String getPartitionSelectorName() {
return this.producerProperties.getPartitionSelectorName();
}
public void setPartitionSelectorName(String partitionSelectorName) {
this.producerProperties.setPartitionSelectorName(partitionSelectorName);
}
public int getBufferSize() {
return this.kafkaProducerProperties.getBufferSize();
}
public void setBufferSize(int bufferSize) {
this.kafkaProducerProperties.setBufferSize(bufferSize);
}
public @NotNull CompressionType getCompressionType() {
return this.kafkaProducerProperties.getCompressionType();
}
public void setCompressionType(CompressionType compressionType) {
this.kafkaProducerProperties.setCompressionType(compressionType);
}
public boolean isSync() {
return this.kafkaProducerProperties.isSync();
}
public void setSync(boolean sync) {
this.kafkaProducerProperties.setSync(sync);
}
public int getBatchTimeout() {
return this.kafkaProducerProperties.getBatchTimeout();
}
public void setBatchTimeout(int batchTimeout) {
this.kafkaProducerProperties.setBatchTimeout(batchTimeout);
}
public Expression getMessageKeyExpression() {
return this.kafkaProducerProperties.getMessageKeyExpression();
}
public void setMessageKeyExpression(Expression messageKeyExpression) {
this.kafkaProducerProperties.setMessageKeyExpression(messageKeyExpression);
}
public String[] getHeaderPatterns() {
return this.kafkaProducerProperties.getHeaderPatterns();
}
public void setHeaderPatterns(String[] headerPatterns) {
this.kafkaProducerProperties.setHeaderPatterns(headerPatterns);
}
public Map<String, String> getConfiguration() {
return this.kafkaProducerProperties.getConfiguration();
}
public void setConfiguration(Map<String, String> configuration) {
this.kafkaProducerProperties.setConfiguration(configuration);
}
public KafkaAdminProperties getAdmin() {
return this.kafkaProducerProperties.getAdmin();
}
public void setAdmin(KafkaAdminProperties admin) {
this.kafkaProducerProperties.setAdmin(admin);
}
public KafkaProducerProperties getExtension() {
return this.kafkaProducerProperties;
}
}
}
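The mergedConsumerConfiguration() and mergedProducerConfiguration() methods above layer the Boot-level KafkaProperties, the shared configuration map, and the consumer- or producer-specific maps, in that order, before normalizing the bootstrap servers. A minimal sketch of that precedence, assuming a default Boot KafkaProperties instance (the class MergePrecedenceSketch and the sample property values are illustrative):

import java.util.Collections;
import java.util.Map;

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;

public class MergePrecedenceSketch {
    public static void main(String[] args) {
        // Boot-level KafkaProperties form the lowest-precedence layer.
        KafkaBinderConfigurationProperties binderProperties =
                new KafkaBinderConfigurationProperties(new KafkaProperties());
        // The shared configuration map applies to both producers and consumers.
        binderProperties.setConfiguration(Collections.singletonMap("max.poll.records", "100"));
        // Consumer-specific properties are merged last, so they win.
        binderProperties.setConsumerProperties(Collections.singletonMap("max.poll.records", "50"));
        Map<String, Object> merged = binderProperties.mergedConsumerConfiguration();
        System.out.println(merged.get("max.poll.records")); // 50
        // The default Boot bootstrap servers are replaced with the binder's brokers.
        System.out.println(merged.get("bootstrap.servers")); // localhost:9092
    }
}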

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,10 +16,13 @@
package org.springframework.cloud.stream.binder.kafka.properties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* @author Marius Bogoevici
* @author Oleg Zhurakousky
*/
public class KafkaBindingProperties {
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider {
private KafkaConsumerProperties consumer = new KafkaConsumerProperties();

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2016-2017 the original author or authors.
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -19,6 +19,8 @@ package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.cloud.stream.config.MergableProperties;
/**
* @author Marius Bogoevici
* @author Ilayaperumal Gopinathan
@@ -29,7 +31,7 @@ import java.util.Map;
* Thanks to Laszlo Szabo for providing the initial patch for generic property support.
* </p>
*/
public class KafkaConsumerProperties {
public class KafkaConsumerProperties implements MergableProperties {
public enum StartOffset {
earliest(-2L),
@@ -53,6 +55,8 @@ public class KafkaConsumerProperties {
both
}
private boolean ackEachRecord;
private boolean autoRebalanceEnabled = true;
private boolean autoCommitOffset = true;
@@ -61,10 +65,14 @@ public class KafkaConsumerProperties {
private StartOffset startOffset;
private boolean resetOffsets;
private boolean enableDlq;
private String dlqName;
private KafkaProducerProperties dlqProducerProperties = new KafkaProducerProperties();
private int recoveryInterval = 5000;
private String[] trustedPackages;
@@ -73,8 +81,22 @@ public class KafkaConsumerProperties {
private String converterBeanName;
private long idleEventInterval = 30_000;
private boolean destinationIsPattern;
private Map<String, String> configuration = new HashMap<>();
private KafkaAdminProperties admin = new KafkaAdminProperties();
public boolean isAckEachRecord() {
return this.ackEachRecord;
}
public void setAckEachRecord(boolean ackEachRecord) {
this.ackEachRecord = ackEachRecord;
}
public boolean isAutoCommitOffset() {
return this.autoCommitOffset;
}
@@ -91,6 +113,14 @@ public class KafkaConsumerProperties {
this.startOffset = startOffset;
}
public boolean isResetOffsets() {
return this.resetOffsets;
}
public void setResetOffsets(boolean resetOffsets) {
this.resetOffsets = resetOffsets;
}
public boolean isEnableDlq() {
return this.enableDlq;
}
@@ -107,10 +137,22 @@ public class KafkaConsumerProperties {
this.autoCommitOnError = autoCommitOnError;
}
/**
* No longer used.
* @return the interval.
* @deprecated
*/
@Deprecated
public int getRecoveryInterval() {
return this.recoveryInterval;
}
/**
* No longer used.
* @param recoveryInterval the interval.
* @deprecated
*/
@Deprecated
public void setRecoveryInterval(int recoveryInterval) {
this.recoveryInterval = recoveryInterval;
}
@@ -147,6 +189,13 @@ public class KafkaConsumerProperties {
this.trustedPackages = trustedPackages;
}
public KafkaProducerProperties getDlqProducerProperties() {
return dlqProducerProperties;
}
public void setDlqProducerProperties(KafkaProducerProperties dlqProducerProperties) {
this.dlqProducerProperties = dlqProducerProperties;
}
public StandardHeaders getStandardHeaders() {
return this.standardHeaders;
}
@@ -163,4 +212,28 @@ public class KafkaConsumerProperties {
this.converterBeanName = converterBeanName;
}
public long getIdleEventInterval() {
return this.idleEventInterval;
}
public void setIdleEventInterval(long idleEventInterval) {
this.idleEventInterval = idleEventInterval;
}
public boolean isDestinationIsPattern() {
return this.destinationIsPattern;
}
public void setDestinationIsPattern(boolean destinationIsPattern) {
this.destinationIsPattern = destinationIsPattern;
}
public KafkaAdminProperties getAdmin() {
return this.admin;
}
public void setAdmin(KafkaAdminProperties admin) {
this.admin = admin;
}
}
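The new consumer-side knobs added above (ackEachRecord, resetOffsets, the per-binding DLQ producer overrides and admin properties) are plain JavaBean properties. A minimal sketch using only the accessors shown in this diff (the class ConsumerPropertiesSketch and the chosen values are illustrative):

import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;

public class ConsumerPropertiesSketch {
    public static void main(String[] args) {
        KafkaConsumerProperties consumer = new KafkaConsumerProperties();
        consumer.setAckEachRecord(true);   // acknowledge record-by-record rather than per batch
        consumer.setResetOffsets(true);    // reset the consumer offsets to the configured startOffset
        // Records routed to the DLQ can use their own producer settings.
        KafkaProducerProperties dlqProducer = new KafkaProducerProperties();
        dlqProducer.setCompressionType(CompressionType.gzip);
        consumer.setDlqProducerProperties(dlqProducer);
        System.out.println(consumer.getDlqProducerProperties().getCompressionType()); // gzip
    }
}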

View File

@@ -0,0 +1,99 @@
/*
* Copyright 2016 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.ExtendedBindingProperties;
/**
* @author Marius Bogoevici
* @author Gary Russell
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka")
public class KafkaExtendedBindingProperties
implements ExtendedBindingProperties<KafkaConsumerProperties, KafkaProducerProperties> {
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.default";
private Map<String, KafkaBindingProperties> bindings = new HashMap<>();
public Map<String, KafkaBindingProperties> getBindings() {
return this.bindings;
}
public void setBindings(Map<String, KafkaBindingProperties> bindings) {
this.bindings = bindings;
}
@Override
public synchronized KafkaConsumerProperties getExtendedConsumerProperties(String channelName) {
if (bindings.containsKey(channelName)) {
if (bindings.get(channelName).getConsumer() != null) {
return bindings.get(channelName).getConsumer();
}
else {
KafkaConsumerProperties properties = new KafkaConsumerProperties();
this.bindings.get(channelName).setConsumer(properties);
return properties;
}
}
else {
KafkaConsumerProperties properties = new KafkaConsumerProperties();
KafkaBindingProperties rbp = new KafkaBindingProperties();
rbp.setConsumer(properties);
bindings.put(channelName, rbp);
return properties;
}
}
@Override
public synchronized KafkaProducerProperties getExtendedProducerProperties(String channelName) {
if (bindings.containsKey(channelName)) {
if (bindings.get(channelName).getProducer() != null) {
return bindings.get(channelName).getProducer();
}
else {
KafkaProducerProperties properties = new KafkaProducerProperties();
this.bindings.get(channelName).setProducer(properties);
return properties;
}
}
else {
KafkaProducerProperties properties = new KafkaProducerProperties();
KafkaBindingProperties rbp = new KafkaBindingProperties();
rbp.setProducer(properties);
bindings.put(channelName, rbp);
return properties;
}
}
@Override
public String getDefaultsPrefix() {
return DEFAULTS_PREFIX;
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaBindingProperties.class;
}
}
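getExtendedConsumerProperties() and getExtendedProducerProperties() above create and cache a KafkaBindingProperties entry on first access for an unknown channel, so later lookups observe earlier mutations. A minimal sketch of that lazy-create behavior (the class ExtendedBindingPropertiesSketch and the channel name "input" are illustrative):

import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;

public class ExtendedBindingPropertiesSketch {
    public static void main(String[] args) {
        KafkaExtendedBindingProperties properties = new KafkaExtendedBindingProperties();
        // First access registers a KafkaBindingProperties entry for "input" and caches it.
        KafkaConsumerProperties consumer = properties.getExtendedConsumerProperties("input");
        consumer.setAckEachRecord(true);
        // Subsequent lookups for the same channel return the same cached instance.
        System.out.println(properties.getExtendedConsumerProperties("input").isAckEachRecord()); // true
        System.out.println(properties.getBindings().size()); // 1
    }
}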

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -21,6 +21,7 @@ import java.util.Map;
import javax.validation.constraints.NotNull;
import org.springframework.cloud.stream.config.MergableProperties;
import org.springframework.expression.Expression;
/**
@@ -28,7 +29,7 @@ import org.springframework.expression.Expression;
* @author Henryk Konsek
* @author Gary Russell
*/
public class KafkaProducerProperties {
public class KafkaProducerProperties implements MergableProperties {
private int bufferSize = 16384;
@@ -44,6 +45,8 @@ public class KafkaProducerProperties {
private Map<String, String> configuration = new HashMap<>();
private KafkaAdminProperties admin = new KafkaAdminProperties();
public int getBufferSize() {
return this.bufferSize;
}
@@ -101,6 +104,15 @@ public class KafkaProducerProperties {
this.configuration = configuration;
}
public KafkaAdminProperties getAdmin() {
return this.admin;
}
public void setAdmin(KafkaAdminProperties admin) {
this.admin = admin;
}
public enum CompressionType {
none,
gzip,

View File

@@ -0,0 +1,484 @@
/*
* Copyright 2014-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreatePartitionsResult;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.PartitionInfo;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.BinderException;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaAdminProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
import org.springframework.cloud.stream.provisioning.ProvisioningException;
import org.springframework.cloud.stream.provisioning.ProvisioningProvider;
import org.springframework.retry.RetryOperations;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Kafka implementation for {@link ProvisioningProvider}
*
* @author Soby Chacko
* @author Gary Russell
* @author Ilayaperumal Gopinathan
* @author Simon Flandergan
* @author Oleg Zhurakousky
*/
public class KafkaTopicProvisioner implements ProvisioningProvider<ExtendedConsumerProperties<KafkaConsumerProperties>,
ExtendedProducerProperties<KafkaProducerProperties>>, InitializingBean {
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
private final Log logger = LogFactory.getLog(getClass());
private final KafkaBinderConfigurationProperties configurationProperties;
private final int operationTimeout = DEFAULT_OPERATION_TIMEOUT;
private final Map<String, Object> adminClientProperties;
private RetryOperations metadataRetryOperations;
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
this.adminClientProperties = kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaBinderConfigurationProperties;
normalalizeBootPropsWithBinder(adminClientProperties, kafkaProperties, kafkaBinderConfigurationProperties);
}
/**
* @param metadataRetryOperations the retry configuration
*/
public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
this.metadataRetryOperations = metadataRetryOperations;
}
@Override
public void afterPropertiesSet() throws Exception {
if (this.metadataRetryOperations == null) {
RetryTemplate retryTemplate = new RetryTemplate();
SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
simpleRetryPolicy.setMaxAttempts(10);
retryTemplate.setRetryPolicy(simpleRetryPolicy);
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(100);
backOffPolicy.setMultiplier(2);
backOffPolicy.setMaxInterval(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
this.metadataRetryOperations = retryTemplate;
}
}
@Override
public ProducerDestination provisionProducerDestination(final String name,
ExtendedProducerProperties<KafkaProducerProperties> properties) {
if (this.logger.isInfoEnabled()) {
this.logger.info("Using kafka topic for outbound: " + name);
}
KafkaTopicUtils.validateTopicName(name);
try (AdminClient adminClient = AdminClient.create(this.adminClientProperties)) {
createTopic(adminClient, name, properties.getPartitionCount(), false, properties.getExtension().getAdmin());
int partitions = 0;
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult.all();
Map<String, TopicDescription> topicDescriptions = null;
try {
topicDescriptions = all.get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception e) {
throw new ProvisioningException("Problems encountered with partitions finding", e);
}
TopicDescription topicDescription = topicDescriptions.get(name);
partitions = topicDescription.partitions().size();
}
return new KafkaProducerDestination(name, partitions);
}
}
@Override
public ConsumerDestination provisionConsumerDestination(final String name, final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (!properties.isMultiplex()) {
return doProvisionConsumerDestination(name, group, properties);
}
else {
String[] destinations = StringUtils.commaDelimitedListToStringArray(name);
for (String destination : destinations) {
doProvisionConsumerDestination(destination.trim(), group, properties);
}
return new KafkaConsumerDestination(name);
}
}
private ConsumerDestination doProvisionConsumerDestination(final String name, final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (properties.getExtension().isDestinationIsPattern()) {
Assert.isTrue(!properties.getExtension().isEnableDlq(),
"enableDLQ is not allowed when listening to topic patterns");
if (this.logger.isDebugEnabled()) {
this.logger.debug("Listening to a topic pattern - " + name
+ " - no provisioning performed");
}
return new KafkaConsumerDestination(name);
}
KafkaTopicUtils.validateTopicName(name);
boolean anonymous = !StringUtils.hasText(group);
Assert.isTrue(!anonymous || !properties.getExtension().isEnableDlq(),
"DLQ support is not available for anonymous subscriptions");
if (properties.getInstanceCount() == 0) {
throw new IllegalArgumentException("Instance count cannot be zero");
}
int partitionCount = properties.getInstanceCount() * properties.getConcurrency();
ConsumerDestination consumerDestination = new KafkaConsumerDestination(name);
try (AdminClient adminClient = createAdminClient()) {
createTopic(adminClient, name, partitionCount, properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getAdmin());
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult.all();
try {
Map<String, TopicDescription> topicDescriptions = all.get(operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(name);
int partitions = topicDescription.partitions().size();
consumerDestination = createDlqIfNeedBe(adminClient, name, group, properties, anonymous, partitions);
if (consumerDestination == null) {
consumerDestination = new KafkaConsumerDestination(name, partitions);
}
}
catch (Exception e) {
throw new ProvisioningException("provisioning exception", e);
}
}
}
return consumerDestination;
}
AdminClient createAdminClient() {
return AdminClient.create(this.adminClientProperties);
}
/**
* In general, binder properties supersede boot kafka properties.
* The one exception is the bootstrap servers. In that case, we should only override
* the boot properties if (there is a binder property AND it is a non-default value)
* OR (if there is no boot property); this is needed because the binder property
* never returns a null value.
* @param adminProps the admin properties to normalize.
* @param bootProps the boot kafka properties.
* @param binderProps the binder kafka properties.
*/
private void normalizeBootPropsWithBinder(Map<String, Object> adminProps, KafkaProperties bootProps,
KafkaBinderConfigurationProperties binderProps) {
// First deal with the outlier
String kafkaConnectionString = binderProps.getKafkaConnectionString();
if (ObjectUtils.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
|| !kafkaConnectionString.equals(binderProps.getDefaultKafkaConnectionString())) {
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConnectionString);
}
// Now override any boot values with binder values
Map<String, String> binderProperties = binderProps.getConfiguration();
Set<String> adminConfigNames = AdminClientConfig.configNames();
binderProperties.forEach((key, value) -> {
if (key.equals(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)) {
throw new IllegalStateException(
"Set binder bootstrap servers via the 'brokers' property, not 'configuration'");
}
if (adminConfigNames.contains(key)) {
Object replaced = adminProps.put(key, value);
if (replaced != null && this.logger.isDebugEnabled()) {
logger.debug("Overrode boot property: [" + key + "], from: [" + replaced + "] to: [" + value + "]");
}
}
});
}
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name, String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
if (properties.getExtension().isEnableDlq() && !anonymous) {
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName()) ?
properties.getExtension().getDlqName() : "error." + name + "." + group;
try {
createTopicAndPartitions(adminClient, dlqTopic, partitions,
properties.getExtension().isAutoRebalanceEnabled(), properties.getExtension().getAdmin());
}
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
throw new ProvisioningException("provisioning exception", throwable);
}
}
return new KafkaConsumerDestination(name, partitions, dlqTopic);
}
return null;
}
private void createTopic(AdminClient adminClient, String name, int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaAdminProperties properties) {
try {
createTopicIfNecessary(adminClient, name, partitionCount, tolerateLowerPartitionsOnBroker, properties);
}
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
throw new ProvisioningException("provisioning exception", throwable);
}
}
}
private void createTopicIfNecessary(AdminClient adminClient, final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaAdminProperties properties) throws Throwable {
if (this.configurationProperties.isAutoCreateTopics()) {
createTopicAndPartitions(adminClient, topicName, partitionCount, tolerateLowerPartitionsOnBroker,
properties);
}
else {
this.logger.info("Auto creation of topics is disabled.");
}
}
/**
* Creates a Kafka topic if needed, or tries to increase its partition count to the
* desired number.
* @param adminClient the admin client to use
* @param topicName the name of the topic to create or resize
* @param partitionCount the desired partition count
* @param tolerateLowerPartitionsOnBroker whether to warn (rather than fail) when the broker has fewer partitions
* @param adminProperties the admin properties used when creating the topic
*/
private void createTopicAndPartitions(AdminClient adminClient, final String topicName, final int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaAdminProperties adminProperties) throws Throwable {
ListTopicsResult listTopicsResult = adminClient.listTopics();
KafkaFuture<Set<String>> namesFutures = listTopicsResult.names();
Set<String> names = namesFutures.get(operationTimeout, TimeUnit.SECONDS);
if (names.contains(topicName)) {
// only consider minPartitionCount for resizing if autoAddPartitions is true
int effectivePartitionCount = this.configurationProperties.isAutoAddPartitions()
? Math.max(this.configurationProperties.getMinPartitionCount(), partitionCount)
: partitionCount;
DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(topicName));
KafkaFuture<Map<String, TopicDescription>> topicDescriptionsFuture = describeTopicsResult.all();
Map<String, TopicDescription> topicDescriptions = topicDescriptionsFuture.get(operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(topicName);
int partitionSize = topicDescription.partitions().size();
if (partitionSize < effectivePartitionCount) {
if (this.configurationProperties.isAutoAddPartitions()) {
CreatePartitionsResult partitions = adminClient.createPartitions(
Collections.singletonMap(topicName, NewPartitions.increaseTo(effectivePartitionCount)));
partitions.all().get(operationTimeout, TimeUnit.SECONDS);
}
else if (tolerateLowerPartitionsOnBroker) {
logger.warn("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead. "
+ "There will be " + (effectivePartitionCount - partitionSize) + " idle consumers");
}
else {
throw new ProvisioningException("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead. "
+ "Consider either increasing the partition count of the topic or enabling " +
"`autoAddPartitions`");
}
}
}
else {
// always consider minPartitionCount for topic creation
final int effectivePartitionCount = Math.max(this.configurationProperties.getMinPartitionCount(),
partitionCount);
this.metadataRetryOperations.execute(context -> {
NewTopic newTopic;
Map<Integer, List<Integer>> replicasAssignments = adminProperties.getReplicasAssignments();
if (replicasAssignments != null && replicasAssignments.size() > 0) {
newTopic = new NewTopic(topicName, adminProperties.getReplicasAssignments());
}
else {
newTopic = new NewTopic(topicName, effectivePartitionCount,
adminProperties.getReplicationFactor() != null
? adminProperties.getReplicationFactor()
: configurationProperties.getReplicationFactor());
}
if (adminProperties.getConfiguration().size() > 0) {
newTopic.configs(adminProperties.getConfiguration());
}
CreateTopicsResult createTopicsResult = adminClient.createTopics(Collections.singletonList(newTopic));
try {
createTopicsResult.all().get(operationTimeout, TimeUnit.SECONDS);
}
catch (Exception e) {
if (e instanceof ExecutionException) {
String exceptionMessage = e.getMessage();
if (exceptionMessage.contains("org.apache.kafka.common.errors.TopicExistsException")) {
if (logger.isWarnEnabled()) {
logger.warn("Attempt to create topic: " + topicName + ". Topic already exists.");
}
}
else {
logger.error("Failed to create topics", e.getCause());
throw e.getCause();
}
}
else {
logger.error("Failed to create topics", e.getCause());
throw e.getCause();
}
}
return null;
});
}
}
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable) {
try {
return this.metadataRetryOperations
.execute(context -> {
Collection<PartitionInfo> partitions = callable.call();
// do a sanity check on the partition set
int partitionSize = partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
logger.warn("The number of expected partitions was: " + partitionCount + ", but "
+ partitionSize + (partitionSize > 1 ? " have " : " has ") + "been found instead. "
+ "There will be " + (partitionCount - partitionSize) + " idle consumers");
}
else {
throw new IllegalStateException("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ") + "been found instead");
}
}
return partitions;
});
}
catch (Exception e) {
this.logger.error("Cannot initialize Binder", e);
throw new BinderException("Cannot initialize binder:", e);
}
}
private static final class KafkaProducerDestination implements ProducerDestination {
private final String producerDestinationName;
private final int partitions;
KafkaProducerDestination(String destinationName, Integer partitions) {
this.producerDestinationName = destinationName;
this.partitions = partitions;
}
@Override
public String getName() {
return producerDestinationName;
}
@Override
public String getNameForPartition(int partition) {
return producerDestinationName;
}
@Override
public String toString() {
return "KafkaProducerDestination{" +
"producerDestinationName='" + producerDestinationName + '\'' +
", partitions=" + partitions +
'}';
}
}
private static final class KafkaConsumerDestination implements ConsumerDestination {
private final String consumerDestinationName;
private final int partitions;
private final String dlqName;
KafkaConsumerDestination(String consumerDestinationName) {
this(consumerDestinationName, 0, null);
}
KafkaConsumerDestination(String consumerDestinationName, int partitions) {
this(consumerDestinationName, partitions, null);
}
KafkaConsumerDestination(String consumerDestinationName, Integer partitions, String dlqName) {
this.consumerDestinationName = consumerDestinationName;
this.partitions = partitions;
this.dlqName = dlqName;
}
@Override
public String getName() {
return this.consumerDestinationName;
}
@Override
public String toString() {
return "KafkaConsumerDestination{" +
"consumerDestinationName='" + consumerDestinationName + '\'' +
", partitions=" + partitions +
", dlqName='" + dlqName + '\'' +
'}';
}
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,

View File

@@ -0,0 +1,99 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.network.SslChannelBuilder;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.core.io.ClassPathResource;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.fail;
/**
* @author Gary Russell
* @since 2.0
*
*/
public class KafkaTopicProvisionerTests {
@SuppressWarnings("rawtypes")
@Test
public void bootPropertiesOverriddenExceptServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:1234"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:9092");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig, bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder.configs", Map.class);
assertThat(((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0)).isEqualTo("localhost:1234");
adminClient.close();
}
@SuppressWarnings("rawtypes")
@Test
public void bootPropertiesOverriddenIncludingServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:9092"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:1234");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig, bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient, "client.selector.channelBuilder.configs", Map.class);
assertThat(((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0)).isEqualTo("localhost:1234");
adminClient.close();
}
@Test
public void brokersInvalid() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(bootConfig);
binderConfig.getConfiguration().put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "localhost:1234");
try {
new KafkaTopicProvisioner(binderConfig, bootConfig);
fail("Expected illegal state");
}
catch (IllegalStateException e) {
assertThat(e.getMessage())
.isEqualTo("Set binder bootstrap servers via the 'brokers' property, not 'configuration'");
}
}
}

View File

@@ -1,16 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11-parent</artifactId>
<version>1.3.1.BUILD-SNAPSHOT</version>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M3</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka11-docs</artifactId>
<name>spring-cloud-stream-binder-kafka11-docs</name>
<description>Spring Cloud Stream Kafka Binder Docs for the 0.11.x.x client</description>
<artifactId>spring-cloud-stream-binder-kafka-docs</artifactId>
<name>spring-cloud-stream-binder-kafka-docs</name>
<description>Spring Cloud Stream Kafka Binder Docs</description>
<properties>
<main.basedir>${basedir}/..</main.basedir>
</properties>
@@ -71,8 +71,8 @@
<quiet>true</quiet>
<stylesheetfile>${basedir}/src/main/javadoc/spring-javadoc.css</stylesheetfile>
<links>
<link>https://docs.spring.io/spring-framework/docs/${spring.version}/javadoc-api/</link>
<link>https://docs.spring.io/spring-shell/docs/current/api/</link>
<link>http://docs.spring.io/spring-framework/docs/${spring.version}/javadoc-api/</link>
<link>http://docs.spring.io/spring-shell/docs/current/api/</link>
</links>
</configuration>
</execution>

View File

@@ -34,7 +34,7 @@ source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
https://compose.docker.io/[Docker Compose] to run the middleware servers
http://compose.docker.io/[Docker Compose] to run the middleware servers
in Docker containers.
=== Documentation
@@ -43,13 +43,13 @@ There is a "full" profile that will generate documentation.
=== Working with the code
If you don't have an IDE preference we would recommend that you use
https://www.springsource.com/developer/sts[Spring Tools Suite] or
https://eclipse.org[Eclipse] when working with the code. We use the
https://eclipse.org/m2e/[m2eclipe] eclipse plugin for maven support. Other IDEs and tools
http://www.springsource.com/developer/sts[Spring Tools Suite] or
http://eclipse.org[Eclipse] when working with the code. We use the
http://eclipse.org/m2e/[m2eclipe] eclipse plugin for maven support. Other IDEs and tools
should also work without issue.
==== Importing into eclipse with m2eclipse
We recommend the https://eclipse.org/m2e/[m2eclipe] eclipse plugin when working with
We recommend the http://eclipse.org/m2e/[m2eclipe] eclipse plugin when working with
eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
marketplace".

View File

@@ -24,7 +24,7 @@ added after the original pull request but before a merge.
`eclipse-code-formatter.xml` file from the
https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
Cloud Build] project. If using IntelliJ, you can use the
https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
http://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
Plugin] to import the same file.
* Make sure all new `.java` files to have a simple Javadoc class comment with at least an
`@author` tag identifying you, and preferably at least a paragraph on what the class is
@@ -37,6 +37,6 @@ added after the original pull request but before a merge.
* A few unit tests would help a lot as well -- someone has to do it.
* If no-one else is using your branch, please rebase it against the current master (or
other target branch in the main project).
* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
* When writing a commit message please follow http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).

View File

@@ -1,24 +1,22 @@
[[kafka-dlq-processing]]
== Dead-Letter Topic Processing
Because it can't be anticipated how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The following `spring-boot` application is an example of how to route those messages back to the original topic, but moves them to a third "parking lot" topic after three attempts.
The application is simply another spring-cloud-stream application that reads from the dead-letter topic.
The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a "`parking lot`" topic after three attempts.
The application is another spring-cloud-stream application that reads from the dead-letter topic.
It terminates when no messages are received for 5 seconds.
The examples assume the original destination is `so8400out` and the consumer group is `so8400`.
There are several considerations.
There are a couple of strategies to consider:
- Consider only running the rerouting when the main application is not running.
Otherwise, the retries for transient errors will be used up very quickly.
- Alternatively, use a two-stage approach - use this application to route to a third topic, and another to route from there back to the main topic.
- Since this technique uses a message header to keep track of retries, it won't work with `headerMode=raw`.
In that case, consider adding some data to the payload (that can be ignored by the main application).
- `x-retries` has to be added to the `headers` property `spring.cloud.stream.kafka.binder.headers=x-retries` on both this, and the main application so that the header is transported between the applications.
- Since kafka is publish/subscribe, replayed messages will be sent to each consumer group, even those that successfully processed a message the first time around.
* Consider running the rerouting only when the main application is not running.
Otherwise, the retries for transient errors are used up very quickly.
* Alternatively, use a two-stage approach: Use this application to route to a third topic and another to route from there back to the main topic.
The following code listings show the sample application:
.application.properties
[source]

Binary file not shown.


View File

@@ -1,6 +1,6 @@
[[spring-cloud-stream-binder-kafka-reference]]
= Spring Cloud Stream Kafka Binder Reference Guide
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek, Gary Russell
:doctype: book
:toc:
:toclevels: 4
@@ -11,13 +11,13 @@ Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinat
:spring-cloud-stream-binder-kafka-repo: snapshot
:github-tag: master
:spring-cloud-stream-binder-kafka-docs-version: current
:spring-cloud-stream-binder-kafka-docs: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:spring-cloud-stream-binder-kafka-docs: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: https://raw.github.com/{github-repo}/{github-tag}
:github-code: https://github.com/{github-repo}/tree/{github-tag}
:github-wiki: https://github.com/{github-repo}/wiki
:github-master-code: https://github.com/{github-repo}/tree/master
:github-raw: http://raw.github.com/{github-repo}/{github-tag}
:github-code: http://github.com/{github-repo}/tree/{github-tag}
:github-wiki: http://github.com/{github-repo}/wiki
:github-master-code: http://github.com/{github-repo}/tree/master
:sc-ext: java
// ======================================================================================
@@ -26,7 +26,7 @@ include::overview.adoc[]
include::dlq.adoc[]
include::metrics.adoc[]
include::partitions.adoc[]
= Appendices
[appendix]

View File

@@ -0,0 +1,685 @@
== Usage
To use the Kafka Streams binder, add it to your Spring Cloud Stream application, using the following
Maven coordinates:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
----
== Kafka Streams Binder Overview
Spring Cloud Stream's Apache Kafka support also includes a binder implementation designed explicitly for Apache Kafka
Streams binding. With this native integration, a Spring Cloud Stream "processor" application can directly use the
https://kafka.apache.org/documentation/streams/developer-guide[Apache Kafka Streams] APIs in the core business logic.
The Kafka Streams binder implementation builds on the foundation provided by the http://docs.spring.io/spring-kafka/reference/html/_reference.html#kafka-streams[Kafka Streams in Spring Kafka]
project.
The Kafka Streams binder provides binding capabilities for the three major types in Kafka Streams: KStream, KTable, and GlobalKTable.
As part of this native integration, the high-level https://docs.confluent.io/current/streams/developer-guide/dsl-api.html[Streams DSL]
provided by the Kafka Streams API is available for use in the business logic.
An early version of the https://docs.confluent.io/current/streams/developer-guide/processor-api.html[Processor API]
support is available as well.
As noted earlier, Kafka Streams support in Spring Cloud Stream is strictly available for use in the Processor model: messages are read
from an inbound topic, business logic is applied, and the transformed messages are written to an outbound topic.
It can also be used in Processor applications with no outbound destination.
=== Streams DSL
The following application consumes data from a Kafka topic (e.g., `words`), computes the word count for each unique word in a 5-second
time window, and sends the computed results to a downstream topic (e.g., `counts`) for further processing.
[source]
----
@SpringBootApplication
@EnableBinding(KStreamProcessor.class)
public class WordCountProcessorApplication {
@StreamListener("input")
@SendTo("output")
public KStream<?, WordCount> process(KStream<?, String> input) {
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("WordCounts-multi"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));
}
public static void main(String[] args) {
SpringApplication.run(WordCountProcessorApplication.class, args);
}
}
----
Once built as an uber-jar (e.g., `wordcount-processor.jar`), you can run the above example as follows.
[source]
----
java -jar wordcount-processor.jar --spring.cloud.stream.bindings.input.destination=words --spring.cloud.stream.bindings.output.destination=counts
----
This application consumes messages from the Kafka topic `words`, and the computed results are published to the output
topic `counts`.
Spring Cloud Stream ensures that the messages from both the incoming and outgoing topics are automatically bound as
KStream objects. As a developer, you can focus exclusively on the business aspects of the code, that is, writing the logic
required in the processor. Setting up the Streams DSL specific configuration required by the Kafka Streams infrastructure
is automatically handled by the framework.
== Configuration Options
This section contains the configuration options used by the Kafka Streams binder.
For common configuration options and properties pertaining to binder, refer to the <<binding-properties,core documentation>>.
=== Kafka Streams Properties
The following properties are available at the binder level and must be prefixed with `spring.cloud.stream.kafka.streams.binder.`.
configuration::
Map with a key/value pair containing properties pertaining to Apache Kafka Streams API.
This property must be prefixed with `spring.cloud.stream.kafka.streams.binder.`.
The following are some examples of using this property:
[source]
----
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
----
For more information about all the properties that may go into the streams configuration, see the `StreamsConfig` Javadocs in the
Apache Kafka Streams documentation.
brokers::
Broker URL
+
Default: `localhost`
zkNodes::
Zookeeper URL
+
Default: `localhost`
serdeError::
Deserialization error handler type.
Possible values are - `logAndContinue`, `logAndFail` or `sendToDlq`
+
Default: `logAndFail`
applicationId::
Convenient way to set the application.id for the Kafka Streams application globally at the binder level.
If the application contains multiple `StreamListener` methods, then application.id should be set at the binding level per input binding.
+
Default: `none`
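For example (the binding name `input` and the id values are illustrative), the application id can be set either for the whole binder or per input binding:
[source]
----
spring.cloud.stream.kafka.streams.binder.applicationId=my-word-count-app
spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=my-word-count-app
----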
The following properties are _only_ available for Kafka Streams producers and must be prefixed with `spring.cloud.stream.kafka.streams.bindings.<binding name>.producer.`.
For convenience, if there are multiple output bindings and they all require a common value, that can be configured by using the prefix `spring.cloud.stream.kafka.streams.default.producer.`.
keySerde::
key serde to use
+
Default: `none`.
valueSerde::
value serde to use
+
Default: `none`.
useNativeEncoding::
flag to enable native encoding
+
Default: `false`.
The following properties are _only_ available for Kafka Streams consumers and must be prefixed with `spring.cloud.stream.kafka.streams.bindings.<binding name>.consumer.`.
For convenience, if there are multiple input bindings and they all require a common value, that can be configured by using the prefix `spring.cloud.stream.kafka.streams.default.consumer.` (see the example following this list).
applicationId::
Setting application.id per input binding.
+
Default: `none`
keySerde::
key serde to use
+
Default: `none`.
valueSerde::
value serde to use
+
Default: `none`.
materializedAs::
state store to materialize when using incoming KTable types
+
Default: `none`.
useNativeDecoding::
flag to enable native decoding
+
Default: `false`.
dlqName::
DLQ topic name.
+
Default: `none`.
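For example (the binding name `input` is illustrative), a default value Serde can be declared once and overridden for a single binding:
[source]
----
spring.cloud.stream.kafka.streams.default.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde
----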
=== TimeWindow properties:
Windowing is an important concept in stream processing applications. The following properties are available to configure
time-window computations.
spring.cloud.stream.kafka.streams.timeWindow.length::
When this property is given, you can autowire a `TimeWindows` bean into the application.
The value is expressed in milliseconds.
+
Default: `none`.
spring.cloud.stream.kafka.streams.timeWindow.advanceBy::
Value is given in milliseconds.
+
Default: `none`.
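For instance, the following settings (values are illustrative) define a 30-second window that advances every 5 seconds; a matching `TimeWindows` bean can then be autowired, as shown in the branching example later in this section:
[source]
----
spring.cloud.stream.kafka.streams.timeWindow.length=30000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=5000
----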
== Multiple Input Bindings
For use cases that require multiple incoming KStream objects or a combination of KStream and KTable objects, the Kafka
Streams binder provides support for multiple input bindings.
Let's see it in action.
=== Multiple Input Bindings as a Sink
[source]
----
@EnableBinding(KStreamKTableBinding.class)
.....
.....
@StreamListener
public void process(@Input("inputStream") KStream<String, PlayEvent> playEvents,
@Input("inputTable") KTable<Long, Song> songTable) {
....
....
}
interface KStreamKTableBinding {
@Input("inputStream")
KStream<?, ?> inputStream();
@Input("inputTable")
KTable<?, ?> inputTable();
}
----
In the above example, the application is written as a sink, that is, there are no output bindings, and the application has to
decide what to do with the results of its processing. When you write applications in this style, you might want to send the information
downstream or store it in a state store (see below for Queryable State Stores).
In the case of an incoming KTable, if you want to materialize the computations to a state store, you have to express it
through the following property:
[source]
----
spring.cloud.stream.kafka.streams.bindings.inputTable.consumer.materializedAs: all-songs
----
The above example shows the use of KTable as an input binding.
The binder also supports input bindings for GlobalKTable.
A GlobalKTable binding is useful when you have to ensure that all instances of your application have access to the data updates from the topic.
KTable and GlobalKTable bindings are only available on the input.
The binder supports both input and output bindings for KStream.
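A minimal sketch of a binding interface that declares a GlobalKTable input (the binding names are illustrative) might look like this:
[source]
----
interface KStreamGlobalKTableBinding {

    @Input("inputStream")
    KStream<?, ?> inputStream();

    @Input("inputGlobalTable")
    GlobalKTable<?, ?> inputGlobalTable();
}
----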
=== Multiple Input Bindings as a Processor
[source]
----
@EnableBinding(KStreamKTableBinding.class)
....
....
@StreamListener
@SendTo("output")
public KStream<String, Long> process(@Input("input") KStream<String, Long> userClicksStream,
@Input("inputTable") KTable<String, String> userRegionsTable) {
....
....
}
interface KStreamKTableBinding extends KafkaStreamsProcessor {
@Input("inputX")
KTable<?, ?> inputTable();
}
----
== Multiple Output Bindings (aka Branching)
Kafka Streams allows outbound data to be split into multiple topics based on some predicates. The Kafka Streams binder provides
support for this feature without compromising the programming model exposed through `StreamListener` in the end user application.
You can write the application in the usual way, as demonstrated above in the word count example. However, when using the
branching feature, you are required to do a few things. First, you need to make sure that your return type is `KStream[]`
instead of a regular `KStream`. Second, you need to use the `SendTo` annotation containing all the output bindings, in order
(see the example below). For each of these output bindings, you need to configure the destination, content type, and so on, complying with
the standard Spring Cloud Stream expectations.
Here is an example:
[source]
----
@EnableBinding(KStreamProcessorWithBranches.class)
@EnableAutoConfiguration
public static class WordCountProcessorApplication {
@Autowired
private TimeWindows timeWindows;
@StreamListener("input")
@SendTo({"output1","output2","output3})
public KStream<?, WordCount>[] process(KStream<Object, String> input) {
Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");
Predicate<Object, WordCount> isFrench = (k, v) -> v.word.equals("french");
Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.groupBy((key, value) -> value)
.windowedBy(timeWindows)
.count(Materialized.as("WordCounts-1"))
.toStream()
.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))))
.branch(isEnglish, isFrench, isSpanish);
}
interface KStreamProcessorWithBranches {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
}
----
Properties:
[source]
----
spring.cloud.stream.bindings.output1.contentType: application/json
spring.cloud.stream.bindings.output2.contentType: application/json
spring.cloud.stream.bindings.output3.contentType: application/json
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms: 1000
spring.cloud.stream.kafka.streams.binder.configuration:
default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.bindings.output1:
destination: foo
producer:
headerMode: raw
spring.cloud.stream.bindings.output2:
destination: bar
producer:
headerMode: raw
spring.cloud.stream.bindings.output3:
destination: fox
producer:
headerMode: raw
spring.cloud.stream.bindings.input:
destination: words
consumer:
headerMode: raw
----
== Message Conversion
Similar to message-channel based binder applications, the Kafka Streams binder adapts to the out-of-the-box content-type
conversions without any compromise.
It is typical for Kafka Streams operations to know the type of SerDes used to transform the key and value correctly.
Therefore, it may be more natural to rely on the SerDe facilities provided by the Apache Kafka Streams library itself at
the inbound and outbound conversions rather than using the content-type conversions offered by the framework.
On the other hand, you might already be familiar with the content-type conversion patterns provided by the framework and
want to continue using them for inbound and outbound conversions.
Both options are supported in the Kafka Streams binder implementation.
==== Outbound serialization
If native encoding is disabled (which is the default), the framework converts the message using the contentType
set by the user (otherwise, the default `application/json` is applied). In this case, it ignores any SerDe set on the outbound
for outbound serialization.
Here is the property to set the contentType on the outbound:
[source]
----
spring.cloud.stream.bindings.output.contentType: application/json
----
Here is the property to enable native encoding.
[source]
----
spring.cloud.stream.bindings.output.nativeEncoding: true
----
If native encoding is enabled on the output binding (the user has to enable it explicitly, as shown above), the framework
skips any form of automatic message conversion on the outbound and instead uses the Serde set by the user.
The `valueSerde` property set on the actual output binding is used. Here is an example:
[source]
----
spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde
----
If this property is not set, then it will use the "default" SerDe: `spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde`.
It is worth mentioning that the Kafka Streams binder does not serialize the keys on the outbound; it simply relies on Kafka itself.
Therefore, you either have to specify the `keySerde` property on the binding, or it defaults to the application-wide common
`keySerde`.
Binding level key serde:
[source]
----
spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde
----
Common Key serde:
[source]
----
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde
----
If branching is used, then you need to use multiple output bindings. For example,
[source]
----
interface KStreamProcessorWithBranches {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
----
If `nativeEncoding` is set, you can set different SerDes on individual output bindings, as shown below.
[source]
----
spring.cloud.stream.kafka.streams.bindings.output1.producer.valueSerde=IntegerSerde
spring.cloud.stream.kafka.streams.bindings.output2.producer.valueSerde=StringSerde
spring.cloud.stream.kafka.streams.bindings.output3.producer.valueSerde=JsonSerde
----
Then, if you have `SendTo` like this, `@SendTo({"output1", "output2", "output3"})`, the `KStream[]` from the branches is
applied with the proper SerDe objects as defined above. If you do not enable `nativeEncoding`, you can instead set different
contentType values on the output bindings, as shown below. In that case, the framework uses the appropriate message converter
to convert the messages before sending them to Kafka.
[source]
----
spring.cloud.stream.bindings.output1.contentType: application/json
spring.cloud.stream.bindings.output2.contentType: application/java-serialized-object
spring.cloud.stream.bindings.output3.contentType: application/octet-stream
----
==== Inbound Deserialization
Similar rules apply to data deserialization on the inbound.
If native decoding is disabled (which is the default), the framework converts the message using the contentType
set by the user (otherwise, the default `application/json` is applied). In this case, it ignores any SerDe set on the inbound
for inbound deserialization.
Here is the property to set the contentType on the inbound:
[source]
----
spring.cloud.stream.bindings.input.contentType: application/json
----
Here is the property to enable native decoding.
[source]
----
spring.cloud.stream.bindings.input.nativeDecoding: true
----
If native decoding is enabled on the input binding (the user has to enable it explicitly, as shown above), the framework
skips message conversion on the inbound and instead uses the SerDe set by the user. The `valueSerde`
property set on the actual input binding is used. Here is an example:
[source]
----
spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde
----
If this property is not set, it will use the default SerDe: `spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde`.
It is worth mentioning that the Kafka Streams binder does not deserialize the keys on the inbound; it simply relies on Kafka itself.
Therefore, you either have to specify the `keySerde` property on the binding, or it defaults to the application-wide common
`keySerde`.
Binding level key serde:
[source]
----
spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde
----
Common Key serde:
[source]
----
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde
----
As in the case of KStream branching on the outbound, the benefit of setting value SerDes per binding is that, if you have
multiple input bindings (multiple KStream objects) and they all require separate value SerDes, you can configure
them individually. If you use the common configuration approach, this feature is not applicable.
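For example, with two input bindings (the binding names `input1` and `input2` are illustrative), each one can carry its own value Serde:
[source]
----
spring.cloud.stream.kafka.streams.bindings.input1.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.bindings.input2.consumer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde
----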
== Error Handling
Apache Kafka Streams provides the capability for natively handling exceptions from deserialization errors.
For details on this support, see https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers[KIP-161].
Out of the box, Apache Kafka Streams provides two kinds of deserialization exception handlers: `logAndContinue` and `logAndFail`.
As the names indicate, the former logs the error and continues processing the next records, while the latter logs the
error and fails. `logAndFail` is the default deserialization exception handler.
=== Handling Deserialization Exceptions
Kafka Streams binder supports a selection of exception handlers through the following properties.
[source]
----
spring.cloud.stream.kafka.streams.binder.serdeError: logAndContinue
----
In addition to the above two deserialization exception handlers, the binder also provides a third one for sending the erroneous
records (poison pills) to a DLQ topic. Here is how you enable this DLQ exception handler.
[source]
----
spring.cloud.stream.kafka.streams.binder.serdeError: sendToDlq
----
When the above property is set, all the deserialization error records are automatically sent to the DLQ topic.
[source]
----
spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName: foo-dlq
----
If this is set, the error records are sent to the topic `foo-dlq`. If this is not set, a DLQ
topic is created with the name `error.<input-topic-name>.<group-name>` (for example, `error.words.my-group` for an input topic named `words` and a group named `my-group`).
Keep a couple of things in mind when using the exception handling feature in the Kafka Streams binder:
* The property `spring.cloud.stream.kafka.streams.binder.serdeError` is applicable for the entire application. This implies
that if there are multiple `StreamListener` methods in the same application, this property is applied to all of them.
* The exception handling for deserialization works consistently with native deserialization and framework provided message
conversion.
=== Handling Non-Deserialization Exceptions
For general error handling in the Kafka Streams binder, it is up to the end user application to handle application-level errors.
As a side effect of providing a DLQ for deserialization exception handlers, the Kafka Streams binder provides a way to get
access to the DLQ-sending bean directly from your application.
Once you get access to that bean, you can programmatically send any exception records from your application to the DLQ.
Robust error handling remains difficult with the high-level DSL, because Kafka Streams does not yet natively support error
handling.
However, when you use the low-level Processor API in your application, there are options to control this behavior. See
below.
[source]
----
@Autowired
private SendToDlqAndContinue dlqHandler;
@StreamListener("input")
@SendTo("output")
public KStream<?, WordCount> process(KStream<Object, String> input) {
input.process(() -> new Processor() {
ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public void process(Object o, Object o2) {
try {
.....
.....
}
catch(Exception e) {
//explicitly provide the kafka topic corresponding to the input binding as the first argument.
//DLQ handler will correctly map to the dlq topic from the actual incoming destination.
dlqHandler.sendToDlq("topic-name", (byte[]) o, (byte[]) o2, context.partition());
}
}
.....
.....
});
}
----
== State Store
A state store is created automatically by Kafka Streams when the DSL is used.
When the processor API is used, you need to register a state store manually. To do so, you can use the `KafkaStreamsStateStore` annotation.
You can specify the name and type of the store, as well as flags to control logging and disable caching.
Once the store is created by the binder during the bootstrapping phase, you can access it through the processor API.
Below are some primitives for doing this.
Creating a state store:
[source]
----
@KafkaStreamsStateStore(name="mystate", type= KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs=300000)
public void process(KStream<Object, Product> input) {
...
}
----
Accessing the state store:
[source]
----
Processor<Object, Product>() {
WindowStore<Object, String> state;
@Override
public void init(ProcessorContext processorContext) {
state = (WindowStore)processorContext.getStateStore("mystate");
}
...
}
----
== Interactive Queries
As part of the public Kafka Streams binder API, we expose a class called `InteractiveQueryService`.
You can access this as a Spring bean in your application. An easy way to get access to this bean from your application is to "autowire" the bean.
[source]
----
@Autowired
private InteractiveQueryService interactiveQueryService;
----
Once you gain access to this bean, you can query the particular state store that you are interested in. See below.
[source]
----
ReadOnlyKeyValueStore<Object, Object> keyValueStore =
interactiveQueryService.getQueryableStore("my-store", QueryableStoreTypes.keyValueStore());
----
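Once you have the store, you can query it like any read-only key-value store; the key used below is illustrative:
[source]
----
Object value = keyValueStore.get("12345");
----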
If there are multiple instances of the Kafka Streams application running, then before you can query them interactively, you need to identify which application instance hosts the key.
The `InteractiveQueryService` API provides methods for identifying the host information.
In order for this to work, you must configure the `application.server` property, as shown below:
[source]
----
spring.cloud.stream.kafka.streams.binder.configuration.application.server: <server>:<port>
----
Here are some code snippets:
[source]
----
org.apache.kafka.streams.state.HostInfo hostInfo = interactiveQueryService.getHostInfo("store-name",
key, keySerializer);
if (interactiveQueryService.getCurrentHostInfo().equals(hostInfo)) {
//query from the store that is locally available
}
else {
//query from the remote host
}
----
== Accessing the underlying KafkaStreams object
The `StreamsBuilderFactoryBean` from spring-kafka, which is responsible for constructing the `KafkaStreams` object, can be accessed programmatically.
Each `StreamsBuilderFactoryBean` is registered as `stream-builder`, appended with the `StreamListener` method name.
If your `StreamListener` method is named `process`, for example, the stream builder bean is named `stream-builder-process`.
Since this is a factory bean, it should be accessed by prepending an ampersand (`&`) when accessing it programmatically.
The following example assumes that the `StreamListener` method is named `process`:
[source]
----
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
----
== State Cleanup
By default, the `KafkaStreams.cleanUp()` method is called when the binding is stopped.
See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
To modify this behavior, simply add a single `CleanupConfig` `@Bean` (configured to clean up on start, stop, or neither) to the application context; the bean is detected and wired into the factory bean.
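A minimal sketch of such a bean, assuming you want local state cleaned up on start but not on stop, might look like this:
[source]
----
@Bean
public CleanupConfig cleanupConfig() {
    // clean up local state on start, but not on stop
    return new CleanupConfig(true, false);
}
----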

View File

@@ -0,0 +1,493 @@
[partintro]
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Spring Cloud Stream concepts map onto Apache Kafka specific constructs.
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Usage
To use the Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----
== Apache Kafka Binder Overview
The following image shows a simplified diagram of how the Apache Kafka binder operates:
.Kafka Binder
image::images/kafka-binder.png[width=300,scaledwidth="50%"]
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions.
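For example, the following (illustrative) settings cause an input binding to consume from the Kafka topic `orders` as part of the `order-service` consumer group:
[source]
----
spring.cloud.stream.bindings.input.destination=orders
spring.cloud.stream.bindings.input.group=order-service
----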
The binder currently uses the Apache Kafka `kafka-clients` 1.0.0 jar and is designed to be used with a broker of at least that version.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
=== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (for example, `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.
spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (both producers and consumer) passed to all clients created by the binder.
Because these properties are used by both producers and consumers, usage should be restricted to common properties -- for example, security settings.
Properties here supersede any properties set in boot.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.consumerProperties::
Key/Value map of arbitrary Kafka client consumer properties.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.headers::
The list of custom headers that are transported by the binder.
Only required when communicating with older applications (<= 1.3.x) with a `kafka-clients` version < 0.11.0.0. Newer versions support headers natively.
+
Default: empty.
spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.
spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
See the Kafka documentation for the producer `acks` property.
+
Default: `1`.
spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder configures on topics on which it produces or consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount * concurrency` settings of the producer (if either is larger).
+
Default: `1`.
spring.cloud.stream.kafka.binder.producerProperties::
Key/Value map of arbitrary Kafka client producer properties.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
Default: `1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.topic.create.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder creates new partitions if required.
If set to `false`, the binder relies on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder fails to start.
+
Default: `false`.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix::
Enables transactions in the binder. See `transaction.id` in the Kafka documentation and https://docs.spring.io/spring-kafka/reference/html/_reference.html#transactions[Transactions] in the `spring-kafka` documentation.
When transactions are enabled, individual `producer` properties are ignored and all producers use the `spring.cloud.stream.kafka.binder.transaction.producer.*` properties.
+
Default: `null` (no transactions).
spring.cloud.stream.kafka.binder.transaction.producer.*::
Global producer properties for producers in a transactional binder.
See `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` and <<kafka-producer-properties>> and the general producer properties supported by all binders.
+
Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `DefaultKafkaHeaderMapper` that uses JSON deserialization for the headers.
+
Default: none.
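The following sketch shows how several of these binder-level properties might be combined; the broker addresses, values, and the `myHeaderMapper` bean name are placeholders chosen only for illustration:
[source]
----
spring.cloud.stream.kafka.binder.brokers=kafka1:9092,kafka2:9092
spring.cloud.stream.kafka.binder.minPartitionCount=4
spring.cloud.stream.kafka.binder.autoAddPartitions=true
spring.cloud.stream.kafka.binder.replicationFactor=2
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
spring.cloud.stream.kafka.binder.headerMapperBeanName=myHeaderMapper
----
If you set `headerMapperBeanName`, a bean with that name could be declared in a `@Configuration` class, as in the following sketch, which assumes a hypothetical trusted package for JSON header deserialization:
[source, java]
----
@Bean("myHeaderMapper")
public KafkaHeaderMapper myHeaderMapper() {
    DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
    // Hypothetical package containing the custom header types used by the application
    mapper.addTrustedPackages("com.example.headers");
    return mapper;
}
----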
[[kafka-consumer-properties]]
=== Kafka Consumer Properties
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
admin.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
admin.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
When `false`, each consumer is assigned a fixed set of partitions based on `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex`.
This requires both the `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex` properties to be set appropriately on each launched instance.
The value of the `spring.cloud.stream.instanceCount` property must typically be greater than 1 in this case.
+
Default: `true`.
ackEachRecord::
When `autoCommitOffset` is `true`, this setting dictates whether to commit the offset after each record is processed.
By default, offsets are committed after all records in the batch of records returned by `consumer.poll()` have been processed.
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also see the binder `requiredAcks` property, which affects the performance of committing offsets.
+
Default: `false`.
autoCommitOffset::
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` of type `org.springframework.kafka.support.Acknowledgment` is present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, the Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL`, and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
+
Default: `true`.
autoCommitOnError::
Effective only if `autoCommitOffset` is set to `true`.
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
If set to `true`, it always auto-commits (if auto-commit is enabled).
If not set (the default), it effectively has the same value as `enableDlq`, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
+
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by `startOffset`.
+
Default: `false`.
startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer binding (through `spring.cloud.stream.bindings.<channelName>.group`), `startOffset` is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
Also see `resetOffsets` (earlier in this list).
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to `true`, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name is configurable by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
configuration::
Map with key/value pairs containing generic Kafka consumer properties.
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through Kafka producer properties can be set through this property.
+
Default: Default Kafka producer properties.
standardHeaders::
Indicates which standard headers are populated by the inbound channel adapter.
Allowed values: `none`, `id`, `timestamp`, or `both`.
Useful if using native deserialization and the first component to receive a message needs an `id` (such as an aggregator that is configured to use a JDBC message store).
+
Default: `none`
converterBeanName::
The name of a bean that implements `RecordMessageConverter`. Used in the inbound channel adapter to replace the default `MessagingMessageConverter`.
+
Default: `null`
idleEventInterval::
The interval, in milliseconds, between events indicating that no messages have recently been received.
Use an `ApplicationListener<ListenerContainerIdleEvent>` to receive these events.
See <<pause-resume>> for a usage example.
+
Default: `30000`
destinationIsPattern::
When `true`, the destination is treated as a regular expression `Pattern` used to match topic names by the broker.
When `true`, topics are not provisioned, and `enableDlq` is not allowed, because the binder does not know the topic names during the provisioning phase.
Note, the time taken to detect new topics that match the pattern is controlled by the consumer property `metadata.max.age.ms`, which (at the time of writing) defaults to 300,000ms (5 minutes).
This can be configured using the `configuration` property above.
+
Default: `false`
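As a quick illustration, the following hypothetical configuration combines several of the preceding consumer properties for a binding named `input` (the destination, group, and values are placeholders only):
[source]
----
spring.cloud.stream.bindings.input.destination=orders
spring.cloud.stream.bindings.input.group=orderGroup
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=orders-dlq
spring.cloud.stream.kafka.bindings.input.consumer.startOffset=earliest
spring.cloud.stream.kafka.bindings.input.consumer.configuration.max.poll.records=100
----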
[[kafka-producer-properties]]
=== Kafka Producer Properties
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.admin.configuration.message.format.version=0.9.0.0`
+
Default: none.
admin.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See `NewTopic` javadocs in the `kafka-clients` jar.
+
Default: none.
admin.replication-factor::
The replication factor to use when provisioning new topics. Overrides the binder-wide setting.
Ignored if `replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
+
Default: `16384`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
+
Default: `none`.
headerPatterns::
A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka `Headers` in the `ProducerRecord`.
Patterns can begin or end with the wildcard character (asterisk).
Patterns can be negated by prefixing with `!`.
Matching stops after the first match (positive or negative).
For example, `!ask,as*` passes `ash` but not `ask`.
`id` and `timestamp` are never mapped.
+
Default: `*` (all headers except the `id` and `timestamp`).
configuration::
Map with key/value pairs containing generic Kafka producer properties.
+
Default: Empty map.
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with `minPartitionCount`, the larger of the two being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), the binder fails to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
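The following hypothetical snippet shows how a few of the preceding producer properties might be set for a binding named `output`; the key expression, header pattern, and values are placeholders only:
[source]
----
spring.cloud.stream.bindings.output.destination=orders
spring.cloud.stream.kafka.bindings.output.producer.messageKeyExpression=headers['orderId']
spring.cloud.stream.kafka.bindings.output.producer.headerPatterns=!internal*,*
spring.cloud.stream.kafka.bindings.output.producer.batchTimeout=50
spring.cloud.stream.kafka.bindings.output.producer.configuration.compression.type=snappy
----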
=== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
==== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
Use the corresponding input channel name for your example.
[source]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
----
==== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between clients and brokers.
To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the http://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file or by using Spring Boot properties.
===== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
[source,bash]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
--spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
===== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
The following properties can be used to configure the login context of the Kafka client:
spring.cloud.stream.kafka.binder.jaas.loginModule::
The login module name. It is not necessary to set this in normal cases.
+
Default: `com.sun.security.auth.module.Krb5LoginModule`.
spring.cloud.stream.kafka.binder.jaas.controlFlag::
The control flag of the login module.
+
Default: `required`.
spring.cloud.stream.kafka.binder.jaas.options::
Map with key/value pairs containing the login module options.
+
Default: Empty map.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:
[source,bash]
----
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.autoCreateTopics=false \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
--spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
--spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
--spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
--spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----
The preceding example represents the equivalent of the following JAAS file:
[source]
----
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client-1@EXAMPLE.COM";
};
----
If the required topics already exist on the broker or will be created by an administrator, auto-creation can be turned off and only the client JAAS properties need to be set.
NOTE: Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the `-Djava.security.auth.login.config` system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
NOTE: Be careful when using the `autoCreateTopics` and `autoAddPartitions` properties with Kerberos.
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
==== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.
The following simple application shows how to pause and resume:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }
}
----
[[kafka-error-channels]]
== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
* `failedMessage`: The Spring Messaging `Message<?>` that failed to be sent.
* `record`: The raw `ProducerRecord` that was created from the `failedMessage`.
There is no automatic handling of producer exceptions (such as sending to a <<kafka-dlq-processing, Dead-Letter queue>>).
You can consume these exceptions with your own Spring Integration flow.
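As a minimal sketch, a Spring Integration `@ServiceActivator` subscribed to the global `errorChannel` could log the failed record; this assumes the producer error channel is enabled (see the core `errorChannelEnabled` producer property) and that the failure is routed to the global error channel:
[source, java]
----
@ServiceActivator(inputChannel = "errorChannel")
public void handleSendFailure(ErrorMessage errorMessage) {
    Throwable payload = errorMessage.getPayload();
    if (payload instanceof KafkaSendFailureException) {
        KafkaSendFailureException failure = (KafkaSendFailureException) payload;
        // Both the original Spring Messaging message and the raw ProducerRecord are available
        System.out.println("Failed message: " + failure.getFailedMessage());
        System.out.println("Failed record: " + failure.getRecord());
    }
}
----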
[[kafka-metrics]]
== Kafka Metrics
The Kafka binder module exposes the following metric:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer metrics library. The metric contains the consumer group information, the topic, and the actual lag of the committed offset behind the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
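A minimal sketch of reading this metric through Micrometer follows; it assumes the metric is registered as a gauge, and the `group` and `topic` tag names and values are placeholders:
[source, java]
----
@Autowired
private MeterRegistry meterRegistry;

public double currentLag() {
    // Look up the binder's offset-lag gauge for a given consumer group and topic
    Gauge gauge = meterRegistry.find("spring.cloud.stream.binder.kafka.offset")
            .tag("group", "myGroup")
            .tag("topic", "myTopic")
            .gauge();
    return gauge != null ? gauge.value() : 0.0;
}
----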


@@ -0,0 +1,104 @@
== Partitioning with the Kafka Binder
Apache Kafka supports topic partitioning natively.
Sometimes it is advantageous to send data to specific partitions -- for example, when you want to strictly order message processing (all messages for a particular customer should go to the same partition).
The following example shows how to configure the producer and consumer side:
[source, java]
----
@SpringBootApplication
@EnableBinding(Source.class)
public class KafkaPartitionProducerApplication {

    private static final Random RANDOM = new Random(System.currentTimeMillis());

    private static final String[] data = new String[] {
            "foo1", "bar1", "qux1",
            "foo2", "bar2", "qux2",
            "foo3", "bar3", "qux3",
            "foo4", "bar4", "qux4",
    };

    public static void main(String[] args) {
        new SpringApplicationBuilder(KafkaPartitionProducerApplication.class)
            .web(false)
            .run(args);
    }

    @InboundChannelAdapter(channel = Source.OUTPUT, poller = @Poller(fixedRate = "5000"))
    public Message<?> generate() {
        String value = data[RANDOM.nextInt(data.length)];
        System.out.println("Sending: " + value);
        return MessageBuilder.withPayload(value)
                .setHeader("partitionKey", value)
                .build();
    }
}
----
.application.yml
[source, yaml]
----
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: partitioned.topic
          producer:
            partitioned: true
            partition-key-expression: headers['partitionKey']
            partition-count: 12
----
IMPORTANT: The topic must be provisioned to have enough partitions to achieve the desired concurrency for all consumer groups.
The preceding configuration supports up to 12 consumer instances (6 if their `concurrency` is 2, 4 if their `concurrency` is 3, and so on).
It is generally best to "`over-provision`" the partitions to allow for future increases in consumers or concurrency.
NOTE: The preceding configuration uses the default partitioning (`key.hashCode() % partitionCount`).
This may or may not provide a suitably balanced algorithm, depending on the key values.
You can override this default by using the `partitionSelectorExpression` or `partitionSelectorClass` properties.
Since partitions are natively handled by Kafka, no special configuration is needed on the consumer side.
Kafka allocates partitions across the instances.
The following Spring Boot application listens to a Kafka stream and prints (to the console) the partition ID to which each message goes:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class KafkaPartitionConsumerApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(KafkaPartitionConsumerApplication.class)
            .web(false)
            .run(args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(@Payload String in, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
        System.out.println(in + " received from partition " + partition);
    }
}
----
.application.yml
[source, yaml]
----
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: partitioned.topic
          group: myGroup
----
You can add instances as needed.
Kafka rebalances the partition allocations.
If the instance count (or `instanceCount * concurrency`) exceeds the number of partitions, some consumers are idle.
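If you prefer static partition assignment instead of rebalancing (see the `autoRebalanceEnabled` consumer property), each instance can be launched with its own index. The following sketch is hypothetical; the jar name and binding name are placeholders:
[source,bash]
----
java -jar consumer.jar --spring.cloud.stream.instanceCount=2 --spring.cloud.stream.instanceIndex=0 \
    --spring.cloud.stream.kafka.bindings.input.consumer.autoRebalanceEnabled=false
java -jar consumer.jar --spring.cloud.stream.instanceCount=2 --spring.cloud.stream.instanceIndex=1 \
    --spring.cloud.stream.kafka.bindings.input.consumer.autoRebalanceEnabled=false
----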


@@ -0,0 +1,3 @@
include::overview.adoc[leveloffset=+1]
include::dlq.adoc[leveloffset=+1]
include::partitions.adoc[leveloffset=+1]


@@ -9,7 +9,7 @@
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -20,7 +20,7 @@
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl d"
version='1.0'>


@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -20,7 +20,7 @@ under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl d"
version='1.0'>


@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an


@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an


@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -20,7 +20,7 @@ under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl"
version='1.0'>


@@ -9,7 +9,7 @@ to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
@@ -22,7 +22,7 @@ under the License.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns:fo="http://www.w3.org/1999/XSL/Format"
xmlns:xslthl="http://xslthl.sourceforge.net/"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:xlink='http://www.w3.org/1999/xlink'
xmlns:exsl="http://exslt.org/common"
exclude-result-prefixes="exsl xslthl d xlink"


@@ -19,5 +19,5 @@
<highlighter id="properties" file="./xslthl/properties-hl.xml" />
<highlighter id="json" file="./xslthl/json-hl.xml" />
<highlighter id="yaml" file="./xslthl/yaml-hl.xml" />
<namespace prefix="xslthl" uri="http://xslthl.sourceforge.net/" />
<namespace prefix="xslthl" uri="http://xslthl.sf.net" />
</xslthl-config>


@@ -4,7 +4,7 @@
Syntax highlighting definition for SH
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2010 Mathieu Malaterre
This software is provided 'as-is', without any express or implied


@@ -3,7 +3,7 @@
Syntax highlighting definition for C
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for C++
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for C#
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for CSS files
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2011-2012 Martin Hujer, Michiel Hendriks
This software is provided 'as-is', without any express or implied
@@ -26,7 +26,7 @@ freely, subject to the following restrictions:
Martin Hujer <mhujer at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
Reference: https://www.w3.org/TR/CSS21/propidx.html
Reference: http://www.w3.org/TR/CSS21/propidx.html
-->
<highlighters>


@@ -7,7 +7,7 @@
myxml-hl.xml - konfigurace zvyraznovace XML, ktera zvlast zvyrazni
HTML elementy a XSL elementy
This file has been customized for the Asciidoctor project (https://asciidoctor.org).
This file has been customized for the Asciidoctor project (http://asciidoctor.org).
-->
<highlighters>
<highlighter type="xml">


@@ -4,7 +4,7 @@
Syntax highlighting definition for ini files
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for Java
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for JavaScript
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for Perl
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for PHP
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for Java
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for Python
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for Ruby
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied


@@ -4,7 +4,7 @@
Syntax highlighting definition for SQL:1999
xslthl - XSLT Syntax Highlighting
https://sourceforge.net/projects/xslthl/
http://sourceforge.net/projects/xslthl/
Copyright (C) 2012 Michiel Hendriks, Martin Hujer, k42b3
This software is provided 'as-is', without any express or implied


@@ -1,41 +1,33 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kstream11</artifactId>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kstream</name>
<name>spring-cloud-stream-binder-kafka-streams</name>
<description>Kafka Streams Binder Implementation</description>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11-parent</artifactId>
<version>1.3.1.BUILD-SNAPSHOT</version>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.1.0.M3</version>
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka11-core</artifactId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-codec</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-autoconfigure</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
@@ -52,12 +44,12 @@
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Added back since Kafka still depends on it, but it has been removed by Boot due to EOL -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>test</scope>
</dependency>
<dependency>
@@ -65,5 +57,24 @@
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-autoconfigure-processor</artifactId>
<optional>true</optional>
</dependency>
<!-- Following dependencies are needed to support Kafka 1.1.0 client-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
</dependencies>
</project>
</project>


@@ -0,0 +1,104 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.util.StringUtils;
/**
* An {@link AbstractBinder} implementation for {@link GlobalKTable}.
*
* Provides only consumer binding for the bound {@link GlobalKTable}.
* Output bindings are not allowed on this binder.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBinder extends
AbstractBinder<GlobalKTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<GlobalKTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
public GlobalKTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<GlobalKTable<Object, Object>> doBindConsumer(String name, String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, inputTarget,
getApplicationContext(),
kafkaTopicProvisioner,
kafkaStreamsBindingInformationCatalogue,
binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<GlobalKTable<Object, Object>> doBindProducer(String name, GlobalKTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException("No producer level binding is allowed for GlobalKTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
throw new UnsupportedOperationException("No producer binding is allowed and therefore no properties");
}
@Override
public String getDefaultsPrefix() {
return this.kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
}
}


@@ -0,0 +1,48 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* @author Soby Chacko
* @since 2.1.0
*/
@Configuration
@Import(KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar.class)
public class GlobalKTableBinderConfiguration {
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public GlobalKTableBinder GlobalKTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new GlobalKTableBinder(binderConfigurationProperties, kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue);
}
}


@@ -0,0 +1,93 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link GlobalKTable}
*
* Only input bindings are created, since output bindings on GlobalKTable are not allowed.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBoundElementFactory extends AbstractBindingTargetFactory<GlobalKTable> {
private final BindingServiceProperties bindingServiceProperties;
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
super(GlobalKTable.class);
this.bindingServiceProperties = bindingServiceProperties;
}
@Override
public GlobalKTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler wrapper= new GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class, GlobalKTable.class);
proxyFactory.addAdvice(wrapper);
return (GlobalKTable) proxyFactory.getProxy();
}
@Override
public GlobalKTable createOutput(String name) {
throw new UnsupportedOperationException("Outbound operations are not allowed on target type GlobalKTable");
}
public interface GlobalKTableWrapper {
void wrap(GlobalKTable<Object, Object> delegate);
}
private static class GlobalKTableWrapperHandler implements GlobalKTableBoundElementFactory.GlobalKTableWrapper, MethodInterceptor {
private GlobalKTable<Object, Object> delegate;
public void wrap(GlobalKTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(GlobalKTable.class)) {
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only GlobalKTable method invocations are permitted");
}
}
}
}


@@ -0,0 +1,124 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import java.util.Optional;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.StreamsMetadata;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.util.StringUtils;
/**
* Services pertinent to the interactive query capabilities of Kafka Streams. This class provides
* services such as querying for a particular store, which instance is hosting a particular store etc.
* This is part of the public API of the kafka streams binder and the users can inject this service in their
* applications to make use of it.
*
* @author Soby Chacko
* @author Renwei Han
* @since 2.1.0
*/
public class InteractiveQueryService {
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
/**
*
* @param kafkaStreamsRegistry holding {@link KafkaStreamsRegistry}
* @param binderConfigurationProperties Kafka Streams binder configuration properties
*/
public InteractiveQueryService(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.binderConfigurationProperties = binderConfigurationProperties;
}
/**
* Retrieve and return a queryable store by name created in the application.
*
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
*/
public <T> T getQueryableStore(String storeName, QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try{
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
}
}
catch (InvalidStateStoreException ignored) {
//pass through
}
}
return null;
}
/**
* Gets the current {@link HostInfo} that the calling kafka streams application is running on.
*
* Note that the end user applications must provide `application.server` as a configuration property
* when calling this method. If this is not available, then null is returned.
*
* @return the current {@link HostInfo}
*/
public HostInfo getCurrentHostInfo() {
Map<String, String> configuration = this.binderConfigurationProperties.getConfiguration();
if (configuration.containsKey("application.server")) {
String applicationServer = configuration.get("application.server");
String[] splits = StringUtils.split(applicationServer, ":");
return new HostInfo(splits[0], Integer.valueOf(splits[1]));
}
return null;
}
/**
* Gets the {@link HostInfo} where the provided store and key are hosted on. This may not be the
* current host that is running the application. Kafka Streams will look through all the consumer instances
* under the same application id and retrieves the proper host.
*
* Note that the end user applications must provide `application.server` as a configuration property
* for all the application instances when calling this method. If this is not available, then null maybe returned.
*
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return the {@link HostInfo} where the key for the provided store is hosted currently
*/
public <K> HostInfo getHostInfo(String store, K key, Serializer<K> serializer) {
StreamsMetadata streamsMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
.stream()
.map(k -> Optional.ofNullable(k.metadataForKey(store, key, serializer)))
.filter(Optional::isPresent)
.map(Optional::get)
.findFirst()
.orElse(null);
return streamsMetadata != null ? streamsMetadata.hostInfo() : null;
}
}


@@ -0,0 +1,146 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.util.StringUtils;
/**
* {@link org.springframework.cloud.stream.binder.Binder} implementation for {@link KStream}.
* This implementation extends from the {@link AbstractBinder} directly.
* <p>
* Provides both producer and consumer bindings for the bound KStream.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamBinder extends
AbstractBinder<KStream<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<KStream<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
private final static Log LOG = LogFactory.getLog(KStreamBinder.class);
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KeyValueSerdeResolver keyValueSerdeResolver;
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
this.keyValueSerdeResolver = keyValueSerdeResolver;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(inputTarget, properties.getExtension());
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, inputTarget,
getApplicationContext(),
kafkaTopicProvisioner,
kafkaStreamsBindingInformationCatalogue,
binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindProducer(String name, KStream<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
this.kafkaTopicProvisioner.provisionProducerDestination(name, extendedProducerProperties);
Serde<?> keySerde = this.keyValueSerdeResolver.getOuboundKeySerde(properties.getExtension());
Serde<?> valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties, properties.getExtension());
to(properties.isUseNativeEncoding(), name, outboundBindTarget, (Serde<Object>) keySerde, (Serde<Object>) valueSerde);
return new DefaultBinding<>(name, null, outboundBindTarget, null);
}
@SuppressWarnings("unchecked")
private void to(boolean isNativeEncoding, String name, KStream<Object, Object> outboundBindTarget,
Serde<Object> keySerde, Serde<Object> valueSerde) {
if (!isNativeEncoding) {
LOG.info("Native encoding is disabled for " + name + ". Outbound message conversion done by Spring Cloud Stream.");
kafkaStreamsMessageConversionDelegate.serializeOnOutbound(outboundBindTarget)
.to(name, Produced.with(keySerde, valueSerde));
} else {
LOG.info("Native encoding is enabled for " + name + ". Outbound serialization done at the broker.");
outboundBindTarget.to(name, Produced.with(keySerde, valueSerde));
}
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties(channelName);
}
public void setKafkaStreamsExtendedBindingProperties(KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
@Override
public String getDefaultsPrefix() {
return this.kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
}
}
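A note on the to(...) switch above: whether an outbound KStream goes through the framework's message conversion or native Kafka Streams serialization is decided per binding. A minimal, hedged configuration sketch; the binding name "output" and the Serde classes are assumptions for illustration, and the exact extended-property keys should be checked against the binder docs:
# With useNativeEncoding=false (the default), KafkaStreamsMessageConversionDelegate serializes based on contentType
spring.cloud.stream.bindings.output.producer.useNativeEncoding=true
spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.apache.kafka.common.serialization.Serdes$StringSerde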

View File

@@ -0,0 +1,101 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.beans.factory.config.MethodInvokingFactoryBean;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.core.type.AnnotationMetadata;
/**
* @author Marius Bogoevici
* @author Gary Russell
* @author Soby Chacko
*/
@Configuration
@Import({KafkaAutoConfiguration.class, KStreamBinderConfiguration.KStreamMissingBeansRegistrar.class})
public class KStreamBinderConfiguration {
static class KStreamMissingBeansRegistrar extends KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar {
private static final String BEAN_NAME = "outerContext";
@Override
public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata,
BeanDefinitionRegistry registry) {
super.registerBeanDefinitions(importingClassMetadata, registry);
if (registry.containsBeanDefinition(BEAN_NAME)) {
AbstractBeanDefinition conversionDelegateBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsMessageConversionDelegate.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsMessageConversionDelegate.class.getSimpleName(), conversionDelegateBean);
AbstractBeanDefinition keyValueSerdeResolverBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KeyValueSerdeResolver.class)
.getBeanDefinition();
registry.registerBeanDefinition(KeyValueSerdeResolver.class.getSimpleName(), keyValueSerdeResolverBean);
AbstractBeanDefinition kafkaStreamsExtendedBindingPropertiesBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsExtendedBindingProperties.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsExtendedBindingProperties.class.getSimpleName(), kafkaStreamsExtendedBindingPropertiesBean);
}
}
}
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public KStreamBinder kStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate KafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
KStreamBinder kStreamBinder = new KStreamBinder(binderConfigurationProperties, kafkaTopicProvisioner,
KafkaStreamsMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue,
keyValueSerdeResolver);
kStreamBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
return kStreamBinder;
}
}
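For context, each MethodInvokingFactoryBean definition registered above simply exposes a bean fetched from the parent ("outer") application context inside the binder's child context. A rough, illustrative equivalent, not binder code, assuming the sketch lives in the same package as KafkaStreamsMessageConversionDelegate:
import org.springframework.context.ApplicationContext;

final class OuterContextLookupSketch {

    // Roughly what the "KafkaStreamsMessageConversionDelegate" bean definition above resolves to:
    // a getBean(...) call against the parent ("outer") application context.
    static KafkaStreamsMessageConversionDelegate conversionDelegate(ApplicationContext outerContext) {
        return outerContext.getBean(KafkaStreamsMessageConversionDelegate.class);
    }
}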

View File

@@ -0,0 +1,110 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link KStream}.
*
* The implementation creates proxies for both input and output bindings.
* The actual target is created downstream, later in the binding process.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
KStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
super(KStream.class);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
}
@Override
public KStream createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
return createProxyForKStream(name);
}
@Override
@SuppressWarnings("unchecked")
public KStream createOutput(final String name) {
return createProxyForKStream(name);
}
private KStream createProxyForKStream(String name) {
KStreamWrapperHandler wrapper = new KStreamWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(KStreamWrapper.class, KStream.class);
proxyFactory.addAdvice(wrapper);
KStream proxy = (KStream) proxyFactory.getProxy();
//Add the binding properties to the catalogue for later retrieval during further binding steps downstream.
BindingProperties bindingProperties = bindingServiceProperties.getBindingProperties(name);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(proxy, bindingProperties);
return proxy;
}
public interface KStreamWrapper {
void wrap(KStream<Object, Object> delegate);
}
private static class KStreamWrapperHandler implements KStreamWrapper, MethodInterceptor {
private KStream<Object, Object> delegate;
public void wrap(KStream<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KStream.class)) {
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(KStreamWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KStream method invocations are permitted");
}
}
}
}
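The proxy returned by createInput/createOutput implements both KStream and KStreamWrapper; it only becomes usable once the downstream binding process attaches the real stream via wrap(...). A small illustrative sketch of that lifecycle; the helper class, method, and binding name are assumptions, and the sketch assumes it sits in the same package as the (package-private) factory:
import org.apache.kafka.streams.kstream.KStream;

final class KStreamProxyLifecycleSketch {

    // Illustrative only: the proxy from createInput()/createOutput() is inert until wrapped.
    static KStream<Object, Object> attach(KStreamBoundElementFactory factory,
            KStream<Object, Object> realStream) {
        KStream<Object, Object> proxy = factory.createInput("input"); // "input" is an assumed binding name
        // Before wrap(...) any KStream call on the proxy fails with "no delegate has been set".
        ((KStreamBoundElementFactory.KStreamWrapper) proxy).wrap(realStream);
        return proxy; // KStream calls now forward to realStream
    }
}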

View File

@@ -1,11 +1,11 @@
/*
* Copyright 2017 the original author or authors.
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,7 +14,7 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
@@ -23,20 +23,20 @@ import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.springframework.cloud.stream.binding.StreamListenerParameterAdapter;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
import org.springframework.messaging.Message;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
/**
* @author Marius Bogoevici
* @author Soby Chacko
*/
public class KStreamListenerParameterAdapter implements StreamListenerParameterAdapter<KStream<?,?>, KStream<?, ?>> {
class KStreamStreamListenerParameterAdapter implements StreamListenerParameterAdapter<KStream<?,?>, KStream<?, ?>> {
private final MessageConverter messageConverter;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue;
public KStreamListenerParameterAdapter(MessageConverter messageConverter) {
this.messageConverter = messageConverter;
KStreamStreamListenerParameterAdapter(KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.KafkaStreamsBindingInformationCatalogue = KafkaStreamsBindingInformationCatalogue;
}
@Override
@@ -51,25 +51,11 @@ public class KStreamListenerParameterAdapter implements StreamListenerParameterA
ResolvableType resolvableType = ResolvableType.forMethodParameter(parameter);
final Class<?> valueClass = (resolvableType.getGeneric(1).getRawClass() != null)
? (resolvableType.getGeneric(1).getRawClass()) : Object.class;
return bindingTarget.map(new KeyValueMapper() {
@Override
public Object apply(Object o, Object o2) {
if (valueClass.isAssignableFrom(o2.getClass())) {
return new KeyValue<>(o, o2);
}
else if (o2 instanceof Message) {
return new KeyValue<>(o, messageConverter.fromMessage((Message) o2, valueClass));
}
else if(o2 instanceof String || o2 instanceof byte[]) {
Message<Object> message = MessageBuilder.withPayload(o2).build();
return new KeyValue<>(o, messageConverter.fromMessage(message, valueClass));
}
else {
return new KeyValue<>(o, o2);
}
}
});
if (this.KafkaStreamsBindingInformationCatalogue.isUseNativeDecoding(bindingTarget)) {
return bindingTarget.map((KeyValueMapper) KeyValue::new);
}
else {
return kafkaStreamsMessageConversionDelegate.deserializeOnInbound(valueClass, bindingTarget);
}
}
}
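In practice this adapter is what lets a @StreamListener method receive a typed KStream parameter: with native decoding the records pass straight through as KeyValues, otherwise the conversion delegate deserializes values to the declared type. A minimal, hedged example; the binding interface, binding names, and class name are assumptions for illustration:
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(UppercaseProcessor.KStreamProcessor.class)
public class UppercaseProcessor {

    // Assumed binding interface declared just for this sketch.
    interface KStreamProcessor {
        @Input("input")
        KStream<?, ?> input();

        @Output("output")
        KStream<?, ?> output();
    }

    @StreamListener("input")
    @SendTo("output")
    public KStream<String, String> process(KStream<String, String> words) {
        // With native decoding the stream is passed through as-is; otherwise the parameter
        // adapter routes values through KafkaStreamsMessageConversionDelegate first.
        return words.mapValues(String::toUpperCase);
    }
}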

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,23 +14,21 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream;
package org.springframework.cloud.stream.binder.kafka.streams;
import java.io.Closeable;
import java.io.IOException;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
/**
* @author Marius Bogoevici
* @author Soby Chacko
*/
public class KStreamStreamListenerResultAdapter implements StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
class KStreamStreamListenerResultAdapter implements StreamListenerResultAdapter<KStream, KStreamBoundElementFactory.KStreamWrapper> {
@Override
public boolean supports(Class<?> resultType, Class<?> boundElement) {
@@ -40,17 +38,7 @@ public class KStreamStreamListenerResultAdapter implements StreamListenerResultA
@Override
@SuppressWarnings("unchecked")
public Closeable adapt(KStream streamListenerResult, KStreamBoundElementFactory.KStreamWrapper boundElement) {
boundElement.wrap(streamListenerResult.map(new KeyValueMapper() {
@Override
public Object apply(Object k, Object v) {
if (v instanceof Message<?>) {
return new KeyValue<>(k, v);
}
else {
return new KeyValue<>(k, MessageBuilder.withPayload(v).build());
}
}
}));
boundElement.wrap(streamListenerResult.map(KeyValue::new));
return new NoOpCloseable();
}

View File

@@ -0,0 +1,102 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.util.StringUtils;
/**
* {@link org.springframework.cloud.stream.binder.Binder} implementation for {@link KTable}.
* This implementation extends {@link AbstractBinder} directly.
*
* Provides only a consumer binding for the bound KTable, as output bindings are not allowed on it.
*
* @author Soby Chacko
*/
class KTableBinder extends
AbstractBinder<KTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements ExtendedPropertiesBinder<KTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
KTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties, KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<KTable<Object, Object>> doBindConsumer(String name, String group, KTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = binderConfigurationProperties.getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group, inputTarget,
getApplicationContext(),
kafkaTopicProvisioner,
kafkaStreamsBindingInformationCatalogue,
binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<KTable<Object, Object>> doBindProducer(String name, KTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException("No producer level binding is allowed for KTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(String channelName) {
return this.kafkaStreamsExtendedBindingProperties.getExtendedProducerProperties(channelName);
}
@Override
public String getDefaultsPrefix() {
return this.kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties.getExtendedPropertiesEntryClass();
}
}
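Since this binder rejects producer bindings, KTables appear only as inputs. A hedged sketch of a listener consuming a KStream together with a KTable; the binding interface, binding names, and join logic are assumptions for illustration:
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;

@EnableBinding(EnrichmentProcessor.Bindings.class)
public class EnrichmentProcessor {

    // Assumed binding interface for this sketch; inputs only, since KTable outputs are rejected.
    interface Bindings {
        @Input("stream-in")
        KStream<?, ?> streamIn();

        @Input("table-in")
        KTable<?, ?> tableIn();
    }

    @StreamListener
    public void process(@Input("stream-in") KStream<String, String> clicks,
            @Input("table-in") KTable<String, String> users) {
        // Enrich the stream with the table; the KTable side is consume-only.
        clicks.leftJoin(users, (click, user) -> user == null ? click : click + " by " + user)
              .foreach((key, value) -> System.out.println(key + " -> " + value));
    }
}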

View File

@@ -0,0 +1,64 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* @author Soby Chacko
*/
@SuppressWarnings("ALL")
@Configuration
@Import(KafkaStreamsBinderUtils.KafkaStreamsMissingBeansRegistrar.class)
public class KTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
ApplicationContext outerContext = (ApplicationContext) beanFactory.getBean("outerContext");
beanFactory.registerSingleton(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), outerContext
.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
@Bean
public KafkaTopicProvisioner provisioningProvider(KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public KTableBinder kTableBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
KTableBinder kTableBinder = new KTableBinder(binderConfigurationProperties, kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue);
return kTableBinder;
}
}

View File

@@ -0,0 +1,93 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for {@link KTable}
*
* Only input bindings are created, as output bindings are not allowed on KTable.
*
* @author Soby Chacko
*/
class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
private final BindingServiceProperties bindingServiceProperties;
KTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
super(KTable.class);
this.bindingServiceProperties = bindingServiceProperties;
}
@Override
public KTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
//Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
KTableBoundElementFactory.KTableWrapperHandler wrapper = new KTableBoundElementFactory.KTableWrapperHandler();
ProxyFactory proxyFactory = new ProxyFactory(KTableBoundElementFactory.KTableWrapper.class, KTable.class);
proxyFactory.addAdvice(wrapper);
return (KTable) proxyFactory.getProxy();
}
@Override
@SuppressWarnings("unchecked")
public KTable createOutput(final String name) {
throw new UnsupportedOperationException("Outbound operations are not allowed on target type KTable");
}
public interface KTableWrapper {
void wrap(KTable<Object, Object> delegate);
}
private static class KTableWrapperHandler implements KTableBoundElementFactory.KTableWrapper, MethodInterceptor {
private KTable<Object, Object> delegate;
public void wrap(KTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
Assert.isNull(this.delegate, "delegate already set to " + this.delegate);
this.delegate = delegate;
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass().equals(KTable.class)) {
Assert.notNull(delegate, "Trying to prepareConsumerBinding " + methodInvocation
.getMethod() + " but no delegate has been set.");
return methodInvocation.getMethod().invoke(delegate, methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass().equals(KTableBoundElementFactory.KTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this, methodInvocation.getArguments());
}
else {
throw new IllegalStateException("Only KTable method invocations are permitted");
}
}
}
}

View File

@@ -5,7 +5,7 @@
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,12 +14,13 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kstream.config;
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -27,12 +28,12 @@ import org.springframework.context.annotation.Configuration;
* @author Soby Chacko
*/
@Configuration
@EnableConfigurationProperties(KStreamApplicationSupportProperties.class)
public class KStreamApplicationSupportAutoConfiguration {
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public class KafkaStreamsApplicationSupportAutoConfiguration {
@Bean
@ConditionalOnProperty("spring.cloud.stream.kstream.timeWindow.length")
public TimeWindows configuredTimeWindow(KStreamApplicationSupportProperties processorProperties) {
@ConditionalOnProperty("spring.cloud.stream.kafka.streams.timeWindow.length")
public TimeWindows configuredTimeWindow(KafkaStreamsApplicationSupportProperties processorProperties) {
return processorProperties.getTimeWindow().getAdvanceBy() > 0
? TimeWindows.of(processorProperties.getTimeWindow().getLength()).advanceBy(processorProperties.getTimeWindow().getAdvanceBy())
: TimeWindows.of(processorProperties.getTimeWindow().getLength());
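The configuredTimeWindow bean above is driven entirely by configuration. A hedged example of the two properties it reads; values are illustrative, the advanceBy key is inferred from the getter shown, and omitting it yields tumbling windows:
# Hopping windows of 30s advancing every 5s (milliseconds)
spring.cloud.stream.kafka.streams.timeWindow.length=30000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=5000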

View File

@@ -0,0 +1,220 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Collection;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.core.env.Environment;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
*/
@EnableConfigurationProperties(KafkaStreamsExtendedBindingProperties.class)
@ConditionalOnBean(BindingService.class)
@AutoConfigureAfter(BindingServiceConfiguration.class)
public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
public KafkaStreamsBinderConfigurationProperties binderConfigurationProperties(KafkaProperties kafkaProperties) {
return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
@Bean
public KafkaStreamsConfiguration kafkaStreamsConfiguration(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
Environment environment) {
KafkaProperties kafkaProperties = binderConfigurationProperties.getKafkaProperties();
Map<String, Object> streamsProperties = kafkaProperties.buildStreamsProperties();
if (kafkaProperties.getStreams().getApplicationId() == null) {
String applicationName = environment.getProperty("spring.application.name");
if (applicationName != null) {
streamsProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationName);
}
}
return new KafkaStreamsConfiguration(streamsProperties);
}
@Bean("streamConfigGlobalProperties")
public Map<String, Object> streamConfigGlobalProperties(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
Properties properties = kafkaStreamsConfiguration.asProperties();
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
if (ObjectUtils.isEmpty(properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG))) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
}
else {
Object bootstrapServerConfig = properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootstrapServerConfig instanceof String) {
@SuppressWarnings("unchecked")
String bootStrapServers = (String) properties
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, binderConfigurationProperties.getKafkaConnectionString());
}
}
}
String binderProvidedApplicationId = binderConfigurationProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
properties.put(StreamsConfig.APPLICATION_ID_CONFIG, binderProvidedApplicationId);
}
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class.getName());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
} else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
} else if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
properties.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
SendToDlqAndContinue.class);
}
if (!ObjectUtils.isEmpty(binderConfigurationProperties.getConfiguration())) {
properties.putAll(binderConfigurationProperties.getConfiguration());
}
return properties.entrySet().stream().collect(
Collectors.toMap(e -> String.valueOf(e.getKey()), Map.Entry::getValue));
}
@Bean
public KStreamStreamListenerResultAdapter kstreamStreamListenerResultAdapter() {
return new KStreamStreamListenerResultAdapter();
}
@Bean
public KStreamStreamListenerParameterAdapter kstreamStreamListenerParameterAdapter(
KafkaStreamsMessageConversionDelegate kstreamBoundMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamStreamListenerParameterAdapter(kstreamBoundMessageConversionDelegate, KafkaStreamsBindingInformationCatalogue);
}
@Bean
public KafkaStreamsStreamListenerSetupMethodOrchestrator kafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KStreamStreamListenerParameterAdapter kafkaStreamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> streamListenerResultAdapters,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ObjectProvider<CleanupConfig> cleanupConfig) {
return new KafkaStreamsStreamListenerSetupMethodOrchestrator(bindingServiceProperties,
kafkaStreamsExtendedBindingProperties, keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue,
kafkaStreamListenerParameterAdapter, streamListenerResultAdapters, binderConfigurationProperties,
cleanupConfig.getIfUnique());
}
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverterFactory, sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue, binderConfigurationProperties);
}
@Bean
public KStreamBoundElementFactory kStreamBoundElementFactory(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue) {
return new KStreamBoundElementFactory(bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue);
}
@Bean
public KTableBoundElementFactory kTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
return new KTableBoundElementFactory(bindingServiceProperties);
}
@Bean
public GlobalKTableBoundElementFactory globalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties) {
return new GlobalKTableBoundElementFactory(bindingServiceProperties);
}
@Bean
public SendToDlqAndContinue sendToDlqAndContinue() {
return new SendToDlqAndContinue();
}
@Bean
public KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue() {
return new KafkaStreamsBindingInformationCatalogue();
}
@Bean
@SuppressWarnings("unchecked")
public KeyValueSerdeResolver keyValueSerdeResolver(@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties) {
return new KeyValueSerdeResolver((Map<String, Object>) streamConfigGlobalProperties, kafkaStreamsBinderConfigurationProperties);
}
@Bean
public QueryableStoreRegistry queryableStoreTypeRegistry(KafkaStreamsRegistry kafkaStreamsRegistry) {
return new QueryableStoreRegistry(kafkaStreamsRegistry);
}
@Bean
public InteractiveQueryService interactiveQueryServices(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new InteractiveQueryService(kafkaStreamsRegistry, binderConfigurationProperties);
}
@Bean
public KafkaStreamsRegistry kafkaStreamsRegistry() {
return new KafkaStreamsRegistry();
}
@Bean
public StreamsBuilderFactoryManager streamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new StreamsBuilderFactoryManager(kafkaStreamsBindingInformationCatalogue, kafkaStreamsRegistry);
}
}
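The streamConfigGlobalProperties bean above merges Spring Boot's Kafka Streams settings with binder-level configuration: application id, brokers, the serde error strategy, and arbitrary StreamsConfig entries under the configuration map. A hedged properties example; values are illustrative and the brokers key follows the common Kafka binder convention:
spring.cloud.stream.kafka.streams.binder.applicationId=my-streams-app
spring.cloud.stream.kafka.streams.binder.brokers=localhost:9092
spring.cloud.stream.kafka.streams.binder.serdeError=logAndContinue
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000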

View File

@@ -0,0 +1,109 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.springframework.beans.factory.config.MethodInvokingFactoryBean;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.ImportBeanDefinitionRegistrar;
import org.springframework.core.type.AnnotationMetadata;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
*/
class KafkaStreamsBinderUtils {
static void prepareConsumerBinding(String name, String group, Object inputTarget,
ApplicationContext context,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties = new ExtendedConsumerProperties<>(
properties.getExtension());
if (binderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.sendToDlq) {
extendedConsumerProperties.getExtension().setEnableDlq(true);
}
String[] inputTopics = StringUtils.commaDelimitedListToStringArray(name);
for (String inputTopic : inputTopics) {
kafkaTopicProvisioner.provisionConsumerDestination(inputTopic, group, extendedConsumerProperties);
}
if (extendedConsumerProperties.getExtension().isEnableDlq()) {
StreamsConfig streamsConfig = kafkaStreamsBindingInformationCatalogue.getStreamsConfig(inputTarget);
KafkaStreamsDlqDispatch kafkaStreamsDlqDispatch = !StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName()) ?
new KafkaStreamsDlqDispatch(extendedConsumerProperties.getExtension().getDlqName(), binderConfigurationProperties,
extendedConsumerProperties.getExtension()) : null;
for (String inputTopic : inputTopics) {
if (StringUtils.isEmpty(extendedConsumerProperties.getExtension().getDlqName())) {
String dlqName = "error." + inputTopic + "." + group;
kafkaStreamsDlqDispatch = new KafkaStreamsDlqDispatch(dlqName, binderConfigurationProperties,
extendedConsumerProperties.getExtension());
}
SendToDlqAndContinue sendToDlqAndContinue = context.getBean(SendToDlqAndContinue.class);
sendToDlqAndContinue.addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
DeserializationExceptionHandler deserializationExceptionHandler = streamsConfig.defaultDeserializationExceptionHandler();
if (deserializationExceptionHandler instanceof SendToDlqAndContinue) {
((SendToDlqAndContinue) deserializationExceptionHandler).addKStreamDlqDispatch(inputTopic, kafkaStreamsDlqDispatch);
}
}
}
}
static class KafkaStreamsMissingBeansRegistrar implements ImportBeanDefinitionRegistrar {
private static final String BEAN_NAME = "outerContext";
@Override
public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata,
BeanDefinitionRegistry registry) {
if (registry.containsBeanDefinition(BEAN_NAME)) {
AbstractBeanDefinition configBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsBinderConfigurationProperties.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsBinderConfigurationProperties.class.getSimpleName(), configBean);
AbstractBeanDefinition catalogueBean = BeanDefinitionBuilder.genericBeanDefinition(MethodInvokingFactoryBean.class)
.addPropertyReference("targetObject", BEAN_NAME)
.addPropertyValue("targetMethod", "getBean")
.addPropertyValue("arguments", KafkaStreamsBindingInformationCatalogue.class)
.getBeanDefinition();
registry.registerBeanDefinition(KafkaStreamsBindingInformationCatalogue.class.getSimpleName(), catalogueBean);
}
}
}
}
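When the serde error strategy is sendToDlq, the helper above enables DLQ handling per input topic and, if no dlqName is configured on the consumer binding, derives the topic name as "error.<topic>.<group>". A hedged configuration sketch; the binding name "input", the override value, and the exact extended-property path are assumptions for illustration:
spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq
# Optional override; without it the DLQ topic defaults to error.<input topic>.<group>
spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName=my-custom-dlq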

View File

@@ -0,0 +1,143 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.kafka.core.StreamsBuilderFactoryBean;
/**
* A catalogue that provides binding information for Kafka Streams target types such as KStream.
* It also keeps a catalogue for the underlying {@link StreamsBuilderFactoryBean} and
* {@link StreamsConfig} associated with various {@link org.springframework.cloud.stream.annotation.StreamListener}
* methods in the {@link org.springframework.context.ApplicationContext}.
*
* @author Soby Chacko
*/
class KafkaStreamsBindingInformationCatalogue {
private final Map<KStream<?, ?>, BindingProperties> bindingProperties = new ConcurrentHashMap<>();
private final Map<KStream<?, ?>, KafkaStreamsConsumerProperties> consumerProperties = new ConcurrentHashMap<>();
private final Map<Object, StreamsConfig> streamsConfigs = new ConcurrentHashMap<>();
private final Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = new HashSet<>();
/**
* For a given bound {@link KStream}, retrieve its corresponding destination
* on the broker.
*
* @param bindingTarget KStream binding target
* @return destination topic on Kafka
*/
String getDestination(KStream<?,?> bindingTarget) {
BindingProperties bindingProperties = this.bindingProperties.get(bindingTarget);
return bindingProperties.getDestination();
}
/**
* Checks whether native decoding is enabled on this {@link KStream}.
*
* @param bindingTarget KStream binding target
* @return true if native decoding is enabled, false otherwise.
*/
boolean isUseNativeDecoding(KStream<?,?> bindingTarget) {
BindingProperties bindingProperties = this.bindingProperties.get(bindingTarget);
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
return bindingProperties.getConsumer().isUseNativeDecoding();
}
/**
* Checks whether DLQ is enabled for this {@link KStream}.
*
* @param bindingTarget KStream binding target
* @return true if DLQ is enabled, false otherwise.
*/
boolean isDlqEnabled(KStream<?,?> bindingTarget) {
return consumerProperties.get(bindingTarget).isEnableDlq();
}
/**
* Retrieve the content type associated with a given {@link KStream}.
*
* @param bindingTarget KStream binding target
* @return the associated content type.
*/
String getContentType(KStream<?,?> bindingTarget) {
BindingProperties bindingProperties = this.bindingProperties.get(bindingTarget);
return bindingProperties.getContentType();
}
/**
* Retrieve the registered {@link StreamsConfig} for the given binding target.
*
* @param bindingTarget binding target (for example, a bound KStream)
* @return corresponding {@link StreamsConfig}
*/
StreamsConfig getStreamsConfig(Object bindingTarget) {
return streamsConfigs.get(bindingTarget);
}
/**
* Register the {@link BindingProperties} for a bound KStream.
*
* @param bindingTarget KStream binding target
* @param bindingProperties {@link BindingProperties} for this KStream
*/
void registerBindingProperties(KStream<?,?> bindingTarget, BindingProperties bindingProperties) {
this.bindingProperties.put(bindingTarget, bindingProperties);
}
/**
* Register the {@link KafkaStreamsConsumerProperties} for a bound KStream.
*
* @param bindingTarget KStream binding target
* @param kafkaStreamsConsumerProperties Consumer properties for this KStream
*/
void registerConsumerProperties(KStream<?,?> bindingTarget, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
this.consumerProperties.put(bindingTarget, kafkaStreamsConsumerProperties);
}
/**
* Adds a {@link StreamsBuilderFactoryBean} to the catalogue.
*
* @param streamsBuilderFactoryBean the {@link StreamsBuilderFactoryBean} to register
*/
void addStreamBuilderFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
this.streamsBuilderFactoryBeans.add(streamsBuilderFactoryBean);
}
void addStreamsConfigs(Object bindingTarget, StreamsConfig streamsConfig) {
this.streamsConfigs.put(bindingTarget, streamsConfig);
}
Set<StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeans() {
return streamsBuilderFactoryBeans;
}
}
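The catalogue is essentially a set of identity-keyed lookups consulted later in the binding process. A small illustrative sketch of the register-then-query pattern; not binder code, and assumed to live in the same package as the (package-private) catalogue:
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.config.BindingProperties;

final class CatalogueUsageSketch {

    // Illustrative only: register the binding properties for a bound KStream, then query them.
    static boolean usesNativeDecoding(KafkaStreamsBindingInformationCatalogue catalogue,
            KStream<?, ?> boundStream, BindingProperties bindingProperties) {
        catalogue.registerBindingProperties(boundStream, bindingProperties);
        return catalogue.isUseNativeDecoding(boundStream);
    }
}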

View File

@@ -0,0 +1,144 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
/**
* @author Soby Chacko
* @author Rafal Zukowski
* @author Gary Russell
*/
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.ObjectUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
class KafkaStreamsDlqDispatch {
private final Log logger = LogFactory.getLog(getClass());
private final KafkaTemplate<byte[],byte[]> kafkaTemplate;
private final String dlqName;
KafkaStreamsDlqDispatch(String dlqName,
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaConsumerProperties kafkaConsumerProperties) {
ProducerFactory<byte[],byte[]> producerFactory = getProducerFactory(
new ExtendedProducerProperties<>(kafkaConsumerProperties.getDlqProducerProperties()),
kafkaBinderConfigurationProperties);
this.kafkaTemplate = new KafkaTemplate<>(producerFactory);
this.dlqName = dlqName;
}
@SuppressWarnings("unchecked")
public void sendToDlq(byte[] key, byte[] value, int partition) {
ProducerRecord<byte[],byte[]> producerRecord = new ProducerRecord<>(this.dlqName, partition,
key, value, null);
StringBuilder sb = new StringBuilder().append(" a message with key='")
.append(toDisplayString(ObjectUtils.nullSafeToString(key))).append("'")
.append(" and payload='")
.append(toDisplayString(ObjectUtils.nullSafeToString(value)))
.append("'").append(" received from ")
.append(partition);
ListenableFuture<SendResult<byte[],byte[]>> sentDlq = null;
try {
sentDlq = this.kafkaTemplate.send(producerRecord);
sentDlq.addCallback(new ListenableFutureCallback<SendResult<byte[],byte[]>>() {
@Override
public void onFailure(Throwable ex) {
KafkaStreamsDlqDispatch.this.logger.error(
"Error sending to DLQ " + sb.toString(), ex);
}
@Override
public void onSuccess(SendResult<byte[],byte[]> result) {
if (KafkaStreamsDlqDispatch.this.logger.isDebugEnabled()) {
KafkaStreamsDlqDispatch.this.logger.debug(
"Sent to DLQ " + sb.toString());
}
}
});
}
catch (Exception ex) {
if (sentDlq == null) {
KafkaStreamsDlqDispatch.this.logger.error(
"Error sending to DLQ " + sb.toString(), ex);
}
}
}
private DefaultKafkaProducerFactory<byte[],byte[]> getProducerFactory(ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
KafkaBinderConfigurationProperties configurationProperties) {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.ACKS_CONFIG, configurationProperties.getRequiredAcks());
Map<String, Object> mergedConfig = configurationProperties.mergedProducerConfiguration();
if (!ObjectUtils.isEmpty(mergedConfig)) {
props.putAll(mergedConfig);
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))) {
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configurationProperties.getKafkaConnectionString());
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.BATCH_SIZE_CONFIG))) {
props.put(ProducerConfig.BATCH_SIZE_CONFIG,
String.valueOf(producerProperties.getExtension().getBufferSize()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.LINGER_MS_CONFIG))) {
props.put(ProducerConfig.LINGER_MS_CONFIG,
String.valueOf(producerProperties.getExtension().getBatchTimeout()));
}
if (ObjectUtils.isEmpty(props.get(ProducerConfig.COMPRESSION_TYPE_CONFIG))) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,
producerProperties.getExtension().getCompressionType().toString());
}
if (!ObjectUtils.isEmpty(producerProperties.getExtension().getConfiguration())) {
props.putAll(producerProperties.getExtension().getConfiguration());
}
//Always send as byte[] on dlq (the same byte[] that the consumer received)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
private String toDisplayString(String original) {
if (original.length() <= 50) {
return original;
}
return original.substring(0, 50) + "...";
}
}

View File

@@ -0,0 +1,197 @@
/*
* Copyright 2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.StringUtils;
/**
* Delegate for handling all framework level message conversions inbound and outbound on {@link KStream}.
* If native encoding is not enabled, then serialization will be performed on outbound messages based
* on a contentType. Similarly, if native decoding is not enabled, deserialization will be performed on
* inbound messages based on a contentType. Based on the contentType, a {@link MessageConverter} will
* be resolved.
*
* @author Soby Chacko
*/
public class KafkaStreamsMessageConversionDelegate {
private final static Log LOG = LogFactory.getLog(KafkaStreamsMessageConversionDelegate.class);
private static final ThreadLocal<KeyValue<Object, Object>> keyValueThreadLocal = new ThreadLocal<>();
private final CompositeMessageConverterFactory compositeMessageConverterFactory;
private final SendToDlqAndContinue sendToDlqAndContinue;
private final KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue;
private final KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties;
KafkaStreamsMessageConversionDelegate(CompositeMessageConverterFactory compositeMessageConverterFactory,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
this.compositeMessageConverterFactory = compositeMessageConverterFactory;
this.sendToDlqAndContinue = sendToDlqAndContinue;
this.kstreamBindingInformationCatalogue = kstreamBindingInformationCatalogue;
this.kstreamBinderConfigurationProperties = kstreamBinderConfigurationProperties;
}
/**
* Serialize {@link KStream} records on outbound based on contentType.
*
* @param outboundBindTarget outbound KStream target
* @return serialized KStream
*/
public KStream serializeOnOutbound(KStream<?, ?> outboundBindTarget) {
String contentType = this.kstreamBindingInformationCatalogue.getContentType(outboundBindTarget);
MessageConverter messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
return outboundBindTarget.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v :
MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
if (!StringUtils.isEmpty(contentType)) {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
return messageConverter.toMessage(message.getPayload(), messageHeaders).getPayload();
});
}
/**
* Deserialize incoming {@link KStream} based on contentType.
*
* @param valueClass expected type of the KStream value
* @param bindingTarget inbound KStream target
* @return deserialized KStream
*/
@SuppressWarnings("unchecked")
public KStream deserializeOnInbound(Class<?> valueClass, KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = compositeMessageConverterFactory.getMessageConverterForAllRegistered();
//Deserialize using a branching strategy
KStream<?, ?>[] branch = bindingTarget.branch(
//First filter: convert the record and return true if the conversion succeeds, false otherwise.
(o, o2) -> {
boolean isValidRecord = false;
try {
//if the record is a tombstone, ignore and exit from processing further.
if (o2 != null) {
if (valueClass.isAssignableFrom(o2.getClass())) {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
} else if (o2 instanceof Message) {
if (valueClass.isAssignableFrom(((Message) o2).getPayload().getClass())) {
keyValueThreadLocal.set(new KeyValue<>(o, ((Message) o2).getPayload()));
} else {
convertAndSetMessage(o, valueClass, messageConverter, (Message) o2);
}
} else if (o2 instanceof String || o2 instanceof byte[]) {
Message<?> message = MessageBuilder.withPayload(o2).build();
convertAndSetMessage(o, valueClass, messageConverter, message);
} else {
keyValueThreadLocal.set(new KeyValue<>(o, o2));
}
isValidRecord = true;
}
else {
LOG.info("Received a tombstone record. This will be skipped from further processing.");
}
}
catch (Exception ignored) {
//Conversion failed; leave isValidRecord false so the record is routed to the error branch below.
}
return isValidRecord;
},
//Second filter: catches any records for which an exception was thrown in the first filter above.
(k, v) -> true
);
//process errors from the second filter in the branch above.
processErrorFromDeserialization(bindingTarget, branch[1]);
//The first branch contains the successfully converted records; let those continue through further processing.
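//Note: the converted value is handed off from the branch predicate above to this mapValues call via keyValueThreadLocal;
//this works because the same Kafka Streams processing thread evaluates the predicate and the downstream mapValues for a given record.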
return branch[0].mapValues((o2) -> {
Object objectValue = keyValueThreadLocal.get().value;
keyValueThreadLocal.remove();
return objectValue;
});
}
private void convertAndSetMessage(Object o, Class<?> valueClass, MessageConverter messageConverter, Message<?> msg) {
Object messageConverted = messageConverter.fromMessage(msg, valueClass);
if (messageConverted == null) {
throw new IllegalStateException("Inbound data conversion failed.");
}
keyValueThreadLocal.set(new KeyValue<>(o, messageConverted));
}
@SuppressWarnings("unchecked")
private void processErrorFromDeserialization(KStream<?, ?> bindingTarget, KStream<?, ?> branch) {
branch.process(() -> new Processor() {
ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public void process(Object o, Object o2) {
//Only continue if the record was not a tombstone.
if (o2 != null) {
if (kstreamBindingInformationCatalogue.isDlqEnabled(bindingTarget)) {
String destination = context.topic();
if (o2 instanceof Message) {
Message message = (Message) o2;
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) message.getPayload(), context.partition());
} else {
sendToDlqAndContinue.sendToDlq(destination, (byte[]) o, (byte[]) o2, context.partition());
}
} else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndFail) {
throw new IllegalStateException("Inbound deserialization failed.");
} else if (kstreamBinderConfigurationProperties.getSerdeError() == KafkaStreamsBinderConfigurationProperties.SerdeError.logAndContinue) {
//Quietly pass through; no action needed, which amounts to log-and-continue behavior.
}
}
}
@Override
public void close() {
}
});
}
}
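For orientation, the following is a minimal, hypothetical sketch of how the binder conceptually applies this delegate around an application's stream. The delegate instance, topic name, input types, and processing step are illustrative only; in practice the delegate is created and wired by the binder infrastructure, not by application code.

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsMessageConversionDelegate;

//Hypothetical wiring sketch; 'delegate' is assumed to be provided by the binder.
class ConversionFlowSketch {
    @SuppressWarnings("unchecked")
    static void wire(KafkaStreamsMessageConversionDelegate delegate,
            KStream<byte[], byte[]> rawInput, String outputTopic) {
        //Inbound: convert raw record values to the application's expected type (String here, for illustration).
        KStream<byte[], String> typed =
                (KStream<byte[], String>) delegate.deserializeOnInbound(String.class, rawInput);
        //Application logic operates on the converted values.
        KStream<byte[], String> processed = typed.mapValues(v -> v.toUpperCase());
        //Outbound: convert back based on the binding's contentType before writing to the output topic.
        delegate.serializeOnOutbound(processed).to(outputTopic);
    }
}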

Some files were not shown because too many files have changed in this diff