Compare commits

..

606 Commits

Author SHA1 Message Date
buildmaster
e7487b9ada Update SNAPSHOT to 3.1.0.M1 2020-04-07 17:18:40 +00:00
Oleg Zhurakousky
064f5b6611 Updated spring-kafka version to 2.4.4 2020-04-07 19:05:46 +02:00
Oleg Zhurakousky
ff859c5859 update maven wrapper 2020-04-07 18:45:35 +02:00
Gary Russell
445eabc59a Transactional Producer Doc Polishing
- explain txId requirements for producer-initiated transactions
2020-04-07 11:16:14 -04:00
Soby Chacko
cc5d1b1aa6 GH:851 Kafka Streams topology visualization
Add Boot actuator endpoints for topology visualization.

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/851
2020-03-25 13:02:57 +01:00
Soby Chacko
db896532e6 Offset commit when DLQ is enabled and manual ack (#871)
* Offset commit when DLQ is enabled and manual ack

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/870

When an error occurs, if the application uses manual acknowledgment
(i.e. autoCommitOffset is false) and DLQ is enabled, then after
publishing to the DLQ the offset is currently not committed.
Address this issue by manually committing after publishing to the DLQ.

* Address PR review comments

* Addressing PR review comments - #2
2020-03-24 16:28:39 -04:00
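A minimal sketch of the configuration this commit addresses, assuming a binding named `input` (the actual binding name is application-specific):

```properties
# Manual acknowledgment: the binder does not auto-commit offsets
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
# Route failed records to a dead-letter topic
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
# With this fix, the offset is committed after the record is published to the DLQ
```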
Gary Russell
07a740a5b5 GH-861: Add transaction manager bean override
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/861

Add docs
2020-03-19 17:03:36 -04:00
Serhii Siryi
65f8cc5660 InteractiveQueryService API changes
* Add InteractiveQueryService.getAllHostsInfo to fetch metadata
   of all instances that handle a specific store.
 * Test to verify this new API (in the existing InteractiveQueryService tests).
2020-03-05 13:24:32 -05:00
Soby Chacko
90a47a675f Update Kafka Streams docs
Fix missing property prefix in StreamListener based applicationId settings.
2020-03-05 11:40:10 -05:00
Soby Chacko
9d212024f8 Updates for 3.1.0
spring-cloud-build to 3.0.0 snapshot
spring kafka to 2.4.x snapshot
SIK 3.2.1

Remove a test whose behavior is inconsistent with new changes in Spring Kafka 2.4,
where all error handlers have isAckAfterHandle() true by default. The test for
auto commit offset on error without DLQ expected this acknowledgment not to occur.
If applications need the ack turned off on error, they should provide a container
customizer that sets the ack to false. Since this is not a binder concern, the
test testDefaultAutoCommitOnErrorWithoutDlq is removed.

Cleaning up Kafka Streams binder tests.
2020-03-04 18:37:08 -05:00
Gary Russell
d594bab4cf GH-853: Don't propagate out "internal" headers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/853
2020-02-27 13:37:58 -05:00
Oleg Zhurakousky
e46bd1f844 Update POMs for Ivyland 2020-02-14 11:28:28 +01:00
buildmaster
5a8aad502f Bumping versions to 3.0.3.BUILD-SNAPSHOT after release 2020-02-13 08:31:42 +00:00
buildmaster
b84a0fdfc6 Going back to snapshots 2020-02-13 08:31:41 +00:00
buildmaster
e3fcddeab6 Update SNAPSHOT to 3.0.2.RELEASE 2020-02-13 08:30:55 +00:00
Soby Chacko
1b6f9f5806 KafkaStreamsBinderMetrics throws ClassCastException
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/814

Fixing an erroneous cast.
2020-02-12 11:53:19 -05:00
Oleg Zhurakousky
f397f7c313 Merge pull request #845 from sobychacko/gh-844
Kafka streams concurrency with multiple bindings
2020-02-12 16:38:53 +01:00
Oleg Zhurakousky
7e0b923c05 Merge pull request #843 from sobychacko/gh-842
Kafka Streams binder health indicator issues
2020-02-12 16:37:11 +01:00
Soby Chacko
0dc5196cb2 Fix typo in a class name 2020-02-11 17:55:50 -05:00
Soby Chacko
1cc50c1a40 Kafka streams concurrency with multiple bindings
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844

Fixing an issue where the default concurrency settings are overridden when
there are multiple bindings present with non-default concurrency settings.
2020-02-11 13:21:58 -05:00
Adriano Scheffer
0b3d508fe2 Always call value serde configure method
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836

Remove redundant call to serde configure

Closes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/836
2020-02-10 19:57:37 -05:00
Soby Chacko
2acce00c74 Kafka Streams binder health indicator issues
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/842

When a health indicator is run against a topic with multiple partitions,
the Kafka Streams binder overwrites the information. Addressing this issue.
2020-02-10 19:42:27 -05:00
Łukasz Kamiński
ce0376ad86 GH-830: Enable usage of authorizationExceptionRetryInterval
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/830

Enable binder configuration of authorizationExceptionRetryInterval through properties
2020-01-17 11:18:02 -05:00
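A hedged sketch of the binder property this commit enables; the exact property name and placement are assumed from the commit message, not verified against the release:

```properties
# Retry interval (ms) after an authorization exception, passed through
# to the underlying listener container (property name assumed)
spring.cloud.stream.kafka.binder.authorizationExceptionRetryInterval=5000
```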
Soby Chacko
25ac3b75e3 Remove log4j from Kafka Streams binder 2020-01-07 17:05:00 -05:00
Gary Russell
6091a1de51 Provisioner: Fix compiler warnings
- logger is static
- missing javadocs
2020-01-06 13:52:15 -05:00
Soby Chacko
5cfce42d2e Kafka Streams multi binder configuration
In the multi binder scenario, make KafkaBinderConfigurationProperties conditional
so that such a bean is only created in the corresponding multi binder context.
For normal cases, the KafkaBinderConfigurationProperties bean is created by the main context.
2019-12-23 18:59:16 -05:00
buildmaster
349fc85b4b Bumping versions to 3.0.2.BUILD-SNAPSHOT after release 2019-12-18 18:12:34 +00:00
buildmaster
a4762ad28b Going back to snapshots 2019-12-18 18:12:34 +00:00
buildmaster
bdd4b3e28b Update SNAPSHOT to 3.0.1.RELEASE 2019-12-18 18:11:50 +00:00
Soby Chacko
3ff99da014 Update parent s-c-build to 2.2.1.RELEASE 2019-12-18 12:07:39 -05:00
Soby Chacko
c7c05fe3c2 Docs for topic patterns in Kafka Streams binder 2019-12-17 18:43:45 -05:00
문찬용
4b50f7c2f5 Fix typo 2019-12-17 18:41:13 -05:00
stoetti
88c5aa0969 Supporting topic pattern in Kafka Streams binder
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/820

Fix checkstyle issues
2019-12-17 18:38:36 -05:00
Oleg Zhurakousky
d5a72a29d0 GH-815 polishing
Resolves #816
2019-12-17 10:55:47 +01:00
Soby Chacko
8c3cb8230b Addressing multi binder issues with Kafka Streams
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/815

There was an issue with Kafka Streams multi binders in which the properties were not scanned
properly. The last configuration always won, wiping out any previous environment properties.
Addressing this issue by properly keeping KafkaBinderConfigurationProperties per multi binder
environment and explicitly invoking Boot properties binding on them.

Adding test to verify.
2019-12-12 18:08:53 -05:00
Soby Chacko
70c1ce8970 Update image for spring.io kafka streams blog 2019-12-02 11:22:39 -05:00
Soby Chacko
1d8a4e67d2 Update image for spring.io kafka streams blog 2019-12-02 11:16:29 -05:00
buildmaster
bbfc776395 Bumping versions to 3.0.1.BUILD-SNAPSHOT after release 2019-11-22 14:51:01 +00:00
buildmaster
4e9ed30948 Going back to snapshots 2019-11-22 14:51:01 +00:00
buildmaster
34fd7a6a7a Update SNAPSHOT to 3.0.0.RELEASE 2019-11-22 14:49:08 +00:00
Oleg Zhurakousky
b23b42d874 Ignoring intermittently failing test 2019-11-22 15:36:23 +01:00
Soby Chacko
cf59cfcf12 Kafka Streams docs cleanup 2019-11-20 18:25:12 -05:00
Soby Chacko
02a4fcb144 AdminClient caching in KafkaStreams health check 2019-11-20 17:36:14 -05:00
Oleksii Mukas
6ac9c0ed23 Closing adminClient prematurely
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/803
2019-11-20 17:34:09 -05:00
Soby Chacko
78ff4f1a70 KafkaBindingProperties has no documentation (#802)
* KafkaBindingProperties has no documentation

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/739

No popup help in IDE due to missing javadocs in KafkaBindingProperties,
KafkaConsumerProperties and KafkaProducerProperties.

Some IDEs currently only show javadocs on getter methods from POJOs
used inside a map. Therefore, some javadocs are duplicated between
fields and getter methods.

* Addressing PR review comments

* Addressing PR review comments
2019-11-14 12:50:15 -05:00
Soby Chacko
637ec2e55d Kafka Streams binder docs improvements
Adding docs for production exception handlers.
Updating configuration options section.
2019-11-13 15:19:41 -05:00
Soby Chacko
6effd58406 Adding docs for global state stores 2019-11-13 11:19:36 -05:00
Soby Chacko
7b8f0dcab7 Kafka Streams - DLQ control per consumer binding (#801)
* Kafka Streams - DLQ control per consumer binding

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/800

* Fine-grained DLQ control and deserialization exception handlers per input binding

* Deprecate KafkaStreamsBinderConfigurationProperties.SerdeError in preference to the
  new enum `KafkaStreamsBinderConfigurationProperties.DeserializationExceptionHandler`
  based properties

* Add tests, modifying docs

* Addressing PR review comments
2019-11-13 09:41:02 -05:00
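The per-input-binding DLQ control described above might look like the following sketch; the binding names and handler values are assumed from later releases of the binder:

```properties
# Choose a deserialization exception handler per input binding
spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.deserializationExceptionHandler=sendToDlq
spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.deserializationExceptionHandler=logAndContinue
```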
Gary Russell
88912b8d6b GH-715: Add retry, dlq for transactional binders
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/715

When using transactions, binder retry, dlq cannot be used because the retry
runs within the transaction, which is undesirable if there is another resource
involved; also publishing to the DLQ could be rolled back.

Use the retry properties to configure an `AfterRollbackProcessor` to perform the
retry and DLQ publishing after the transaction has rolled back.

Add docs; move the container customizer invocation to the end.
2019-11-12 14:53:12 -05:00
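A sketch of a transactional binding with retry and DLQ as described above (binding name and values are illustrative):

```properties
# Enable transactions in the binder
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
# Retry and DLQ publishing now run in an AfterRollbackProcessor,
# after the transaction has rolled back
spring.cloud.stream.bindings.input.consumer.maxAttempts=3
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
```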
Soby Chacko
34b0945d43 Changing the order of calling customizer
In the Kafka Streams binder, the StreamsBuilderFactoryBean customizer was being called
prematurely, before the object is created. Fixing this issue.

Add a test to verify
2019-11-11 17:21:46 -05:00
Gary Russell
278ba795d0 Configure consumer customizer
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1841

* Add test
2019-11-11 11:34:41 -05:00
buildmaster
bf30ecdea1 Going back to snapshots 2019-11-08 16:29:53 +00:00
buildmaster
464ce685bb Update SNAPSHOT to 3.0.0.RC2 2019-11-08 16:29:19 +00:00
Oleg Zhurakousky
54fa9a638d Added flatten plug-in 2019-11-08 16:50:10 +01:00
Soby Chacko
fefd9a3bd6 Cleanup in Kafka Streams docs 2019-11-07 18:07:27 -05:00
Soby Chacko
8e26d5e170 Fix typos in the previous PR commit 2019-11-06 21:23:19 -05:00
Soby Chacko
ac6bdc976e Reintroduce BinderHeaderMapper (#797)
* Reintroduce BinderHeaderMapper

Provide a custom header mapper that is identical to the DefaultKafkaHeaderMapper in Spring Kafka.
This is to address some interoperability issues between Spring Cloud Stream 3.0.x and 2.x apps,
where mime types in the header are not de-serialized properly. This custom BinderHeaderMapper
will be eventually deprecated and removed once the fixes are in Spring Kafka.

Resolves #796

* Addressing review

* polishing
2019-11-06 21:14:20 -05:00
Ramesh Sharma
65386f6967 Fixed log message to print as value vs key serde 2019-11-06 09:43:08 -05:00
Oleg Zhurakousky
81c453086a Merge pull request #794 from sobychacko/gh-752
Revise docs
2019-11-06 09:11:16 +01:00
Soby Chacko
0ddd9f8f64 Revise docs
Update kafka-clients version
Revise Kafka Streams docs

Resolves #752
2019-11-05 19:49:07 -05:00
Gary Russell
d0fe596a9e GH-790: Upgrade SIK to 3.2.1
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/790
2019-11-05 11:41:40 -05:00
Soby Chacko
062bbc1cc3 Add null check for KafkaStreamsBinderMetrics 2019-11-01 18:42:30 -04:00
Soby Chacko
bc1936eb28 KafkaStreams binder metrics - duplicate entries
When the same metric name is repeated, some registry implementations, such as the
Micrometer Prometheus registry, fail to register the duplicate entry. Fixing this issue by
preventing duplicate metric names from being registered.

Also, address an issue with multiple processors and metrics in the same application
by prepending the application ID of the Kafka Streams processor in the metric name itself.

Resolves #788
2019-11-01 15:55:30 -04:00
Soby Chacko
e2f1092173 Kafka Streams binder health indicator improvements
Caching AdminClient in Kafka Streams binder health indicator.

Resolves #791
2019-10-31 16:15:56 -04:00
Soby Chacko
06e5739fbd Addressing bugs reported by Sonarqube
Resolves #747
2019-10-25 16:42:02 -04:00
Soby Chacko
ad8e67fdc5 Ignore a test where there is a race condition 2019-10-24 12:00:22 -04:00
Soby Chacko
a3fb4cc3b3 Fix state store registration issue
Fixing the issue where a state store is registered for each input binding
when multiple input bindings are present.

Resolves #785
2019-10-24 11:33:31 -04:00
Soby Chacko
7f09baf72d Enable customization on StreamsBuilderFactoryBean
Spring Kafka provides a StreamsBuilderFactoryBeanCustomizer. Use this in the binder so that
applications can plug in such a bean to further customize the StreamsBuilderFactoryBean and KafkaStreams.

Resolves #784
2019-10-24 09:35:37 -04:00
Soby Chacko
28a02cda4f Multiple functions and definition property
In order to make Kafka Streams binder based function apps more consistent
with the wider functional support in ScSt, it should require the property
spring.cloud.stream.function.definition to signal which functions to activate.

Resolves #783
2019-10-24 01:01:34 -04:00
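With multiple functions present, activation would be signaled as in this sketch (the function names are hypothetical and must match the function bean names):

```properties
# Activate two Kafka Streams functions explicitly
spring.cloud.stream.function.definition=process;anotherProcess
```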
Soby Chacko
f96a9f884c Custom partitioner for Kafka Streams producer
* Allow the ability to plug in custom StreamPartitioner on the Kafka Streams producer.
* Fix a bug where the overriding of native encoding/decoding settings by the binder was not
  working properly. This fix is done by providing a custom ConfigurationPropertiesBindHandlerAdvisor.
* Add test to verify

Resolves #782
2019-10-23 20:12:08 -04:00
Soby Chacko
a9020368e5 Remove the usage of BindingProvider 2019-10-23 16:32:07 -04:00
Gary Russell
ca9296dbd2 GH-628: Add dlqPartitions property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/628

Allow users to specify the number of partitions in the dead letter topic.

If the property is set to 1 and the `minPartitionCount` binder property is 1,
override the default behavior and always publish to partition 0.
2019-10-23 15:33:20 -04:00
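A sketch of the property combination the commit describes (the binding name is illustrative):

```properties
# Create the dead-letter topic with a single partition...
spring.cloud.stream.kafka.bindings.input.consumer.dlqPartitions=1
# ...and, with minPartitionCount also 1, always publish DLQ records to partition 0
spring.cloud.stream.kafka.binder.minPartitionCount=1
```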
Soby Chacko
e8d202404b Custom timestamp extractor per binding
Currently there is no way to pass a custom timestamp extractor
per consumer binding in the Kafka Streams binder. Adding this ability.

Resolves #640
2019-10-23 15:31:17 -04:00
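The per-binding extractor might be wired as in this sketch; the bean-name property and bean name are assumed from the binder's naming conventions:

```properties
# Reference a TimestampExtractor bean by name for one consumer binding
spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.timestampExtractorBeanName=myExtractor
```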
Artem Bilan
5794fb983c Add ProducerMessageHandlerCustomizer support
Related to https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/265
and https://github.com/spring-cloud/spring-cloud-stream/pull/1828
2019-10-23 14:30:20 -04:00
Oleg Zhurakousky
1ce1d7918f Merge pull request #777 from sobychacko/gh-774
Null value in outbound KStream
2019-10-23 16:54:21 +02:00
Oleg Zhurakousky
1264434871 Merge pull request #773 from sobychacko/gh-552
Kafka/Streams binder health indicator improvements
2019-10-23 16:53:37 +02:00
Massimiliano Poggi
e4fed0a52d Added qualifiers to CompositeMessageConverterBean injections
When more than one CompositeMessageConverter bean was defined in the same ApplicationContext,
not having the qualifiers on the injection points was causing the application to fail during
instantiation due to bean conflicts being raised. The injection points for CompositeMessageConverter
have been marked with the appropriate qualifier to inject the Spring Cloud CompositeMessageConverter.

Resolves #775
2019-10-22 17:17:03 -04:00
Soby Chacko
05e2918bc0 Addressing PR review comments 2019-10-22 17:07:51 -04:00
Gary Russell
6dadf0c104 GH-763: Add DlqPartitionFunction
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/763

Allow users to select the DLQ partition based on

* consumer group
* consumer record (which includes the original topic/partition)
* exception

Kafka Streams DLQ docs changes
2019-10-22 16:10:29 -04:00
Soby Chacko
f4dcf5100c Null value in outbound KStream
When native encoding is disabled, the conversion on the outbound fails if the record
value is null. Handle this scenario more gracefully by skipping the conversion
and allowing the record to be sent downstream.

Resolves #774
2019-10-22 12:45:14 -04:00
Soby Chacko
b8eb41cb87 Kafka/Streams binder health indicator improvements
When both binders are present, there were ambiguities in the way the binders
were reporting health status. If one binder does not have any bindings, the
total health status was reported as down. Fixing these ambiguities as follows.

If both binders have bindings present and the Kafka broker is reachable, report
the status as UP with the associated details. If one of the binders does not
have bindings but the Kafka broker can be reached, then that particular binder's
status will be marked as UNKNOWN and the overall status is reported as UP.
If the Kafka broker is down, then both binders are reported as DOWN and
the overall status is marked as DOWN.

Resolves #552
2019-10-21 21:59:11 -04:00
Soby Chacko
82cfd6d176 Making KafkaStreamsMetrics object Nullable 2019-10-18 21:41:16 -04:00
Tenzin Chemi
6866eef8b0 Update overview.adoc 2019-10-18 15:25:32 -04:00
Soby Chacko
b833a9f371 Fix Kafka Streams binder health indicator issues
When there are multiple Kafka Streams processors present, the health indicator
overwrites the previous processor's health info. Addressing this issue.

Resolves #771
2019-10-17 19:26:41 -04:00
Soby Chacko
0283359d4a Add a delay before the metrics assertion 2019-10-15 15:11:05 -04:00
Soby Chacko
cc2f8f6137 Assertion was not commented out in the previous commit 2019-10-10 10:38:46 -04:00
Soby Chacko
855334aaa3 Disable kafka streams metrics test temporarily 2019-10-10 01:32:51 -04:00
Soby Chacko
9d708f836a Kafka streams default single input/output binding
Address an issue in which the binder still defaults to the binding names "input" and "output"
in the case of a single function.
2019-10-09 23:58:14 -04:00
Soby Chacko
ecc8715b0c Kafka Streams binder metrics
Export Kafka Streams metrics available through KafkaStreams#metrics into a Micrometer MeterRegistry.
Add documentation for how to access metrics.
Modify test to verify metrics.

Resolves #543
2019-10-09 00:32:16 -04:00
Soby Chacko
98431ed8a0 Fix spurious warnings from InteractiveQueryService
Set applicationId properly in functions with multiple inputs
2019-10-09 00:29:03 -04:00
Oleg Zhurakousky
c7fa1ce275 Fix how stream function properties for bindings are used 2019-10-08 06:05:50 -05:00
Soby Chacko
cc8c645c5a Address checkstyle issues 2019-10-04 09:00:15 -04:00
Gary Russell
21fe9c75c5 Fix race in KafkaBinderTests
`testDlqWithNativeSerializationEnabledOnDlqProducer`
2019-10-03 10:35:05 -04:00
Soby Chacko
65dd706a6a Kafka Streams DLQ enhancements
Use DeadLetterPublishingRecoverer from Spring Kafka instead of custom DLQ components in the binder.
Remove code that is no longer needed for DLQ purposes.
In Kafka Streams, always set group to application id if the user doesn't set an explicit group.

Upgrade Spring Kafka to 2.3.0 and SIK to 3.2.0

Resolves #761
2019-10-02 22:46:51 -04:00
Soby Chacko
a02308a5a3 Allow binding names to be reused in Kafka Streams.
Allow same binding names to be reused from multiple StreamListener methods in Kafka Streams binder.

Resolves #760
2019-10-01 13:58:46 -04:00
Oleg Zhurakousky
ac75f8fecf Merge pull request #758 from sobychacko/gh-757
Function level binding properties
2019-09-30 12:42:45 -04:00
Soby Chacko
bf6a227f32 Function level binding properties
If there are multiple functions in a Kafka Streams application, and they want to have
a separate set of configuration for each, it should be possible to set that at the function
level, e.g. spring.cloud.stream.kafka.streams.binder.functions....

Resolves #757
2019-09-30 11:47:21 -04:00
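For illustration, a function-level property might look like the following sketch; the function name `process` and the trailing key are hypothetical examples, not taken from the commit:

```properties
# Per-function binder configuration; keys after the function name
# follow the usual binder property names (example key assumed)
spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=process-app-id
```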
Oleg Zhurakousky
01daa4c0dd Move function constants to core
Resolves #756
2019-09-30 11:33:18 -04:00
Soby Chacko
021943ec41 Using the dot (.) character in function bindings.
In Kafka Streams functions, binding names need to use the dot character instead of
underscores as the delimiter.

Resolves #755
2019-09-30 11:02:50 -04:00
Phillip Verheyden
daf4b47d1c Update the docs with the correct default channel properties locations
Spring Cloud Stream Kafka uses `spring.cloud.stream.kafka.default` for setting default properties across all channels.

Resolves #748
2019-09-30 11:01:28 -04:00
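The default-properties prefix mentioned above, sketched with illustrative keys and values:

```properties
# Defaults applied to all Kafka bindings unless overridden per binding
spring.cloud.stream.kafka.default.consumer.autoCommitOffset=true
spring.cloud.stream.kafka.default.producer.batchTimeout=100
```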
Soby Chacko
2b1be3754d Remove deprecations
Remove deprecated fields, methods and classes in preparation for the 3.0 GA Release,
both in Kafka and Kafka Streams binders.

Resolves #746
2019-09-27 15:46:00 -04:00
Soby Chacko
db5a303431 Check HeaderMapper bean with a well known name (#753)
* Check HeaderMapper bean with a well known name

* If a custom bean name for header mapper is not provided through the binder property headerMapperBeanName,
  then look for a header mapper bean with the name kafkaHeaderMapper.
* Add tests to verify

Resolves #749

* Addressing PR review comments
2019-09-27 11:08:15 -04:00
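A sketch of the two ways to supply a header mapper bean after this change (bean names per the commit message; the custom name is illustrative):

```properties
# Explicitly name the header mapper bean...
spring.cloud.stream.kafka.binder.headerMapperBeanName=myHeaderMapper
# ...or omit this property and declare a bean named "kafkaHeaderMapper",
# which the binder now looks up by that well-known name
```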
Soby Chacko
e549090787 Restoring the avro tests for Kafka Streams 2019-09-23 17:12:31 -04:00
buildmaster
7ff64098a3 Going back to snapshots 2019-09-23 18:12:19 +00:00
buildmaster
cea40bc969 Update SNAPSHOT to 3.0.0.M4 2019-09-23 18:11:47 +00:00
buildmaster
717022e274 Going back to snapshots 2019-09-23 17:49:28 +00:00
buildmaster
d0f1c8d703 Update SNAPSHOT to 3.0.0.M4 2019-09-23 17:48:54 +00:00
Soby Chacko
7a532b2bbd Temporarily disable avro tests due to Schema Registry dependency issues.
Will address these post M4 release
2019-09-23 13:37:57 -04:00
Soby Chacko
3773fa2c05 Update SIK and SK
SIK -> 3.2.0.RC1
    SK - 2.3.0.RC1
2019-09-23 13:03:12 -04:00
buildmaster
5734ce689b Bumping versions 2019-09-18 15:35:31 +00:00
Olga Maciaszek-Sharma
608086ff4d Remove spring.provides. 2019-09-18 13:34:34 +02:00
Soby Chacko
16713b3b4f Rename Serde for message converters
Rename CompositeNonNativeSerde to MessageConverterDelegateSerde
Deprecate CompositeNonNativeSerde
2019-09-16 21:27:04 -04:00
Soby Chacko
2432a7309b Remove an unnecessary mapValue call
In the Kafka Streams binder, with native decoding being the default in 3.0 and going forward,
the topology depth of Kafka Streams processors is much reduced compared to the topology
generated when using non-native decoding. There was still an extra unnecessary processor in the
topology even when the deserialization is done natively. Addressing that issue.

With this change, the topology generated is equivalent to the native Kafka Streams applications.

Resolves #682
2019-09-13 15:14:59 -04:00
Soby Chacko
6fdb0001d6 Update SIK to 3.2.0 snapshot
Resolves #744

* Address deprecations in the pollable consumer
* Introduce a property for the pollable timeout in KafkaConsumerProperties
* Fix tests
* Add docs
* Addressing PR review comments
2019-09-12 15:31:13 -04:00
Gary Russell
f2ab4a07c6 GH-70: Support Batch Listeners
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/70
2019-09-11 15:31:18 -04:00
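Batch listening as described in GH-70 might be enabled like this; the property name is assumed from core Spring Cloud Stream consumer properties:

```properties
# Deliver records to the listener as a batch instead of one at a time
spring.cloud.stream.bindings.input.consumer.batch-mode=true
```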
Soby Chacko
584115580b Adding isPresent around optionals 2019-09-11 14:54:11 -04:00
Oleg Zhurakousky
752b509374 Minor cleanup and polishing
Resolves #740
2019-09-10 13:34:30 +02:00
Soby Chacko
96062b23f2 State store beans registered with StreamListener
When StreamListener based processors are used in Kafka Streams
applications, it is not possible to register state store beans
using StateStoreBuilder. This is allowed in the functional model.
This method of providing custom state stores is desirable, as it
gives the user more flexibility in configuring Serdes and other
properties on the state store. Adding this feature to StreamListener
based Kafka Streams processors.

Adding test to verify.

Adding docs.

Resolves #676
2019-09-10 13:34:30 +02:00
Oleg Zhurakousky
da6145f382 Merge pull request #738 from sobychacko/gh-681
Multi binder issues for Kafka Streams table types
2019-09-10 12:45:04 +02:00
Soby Chacko
39a5b25b9c Topic properties not applied on kstream binding
On KStream producer binding, topic properties are not applied
by the provisioner. This is because the extended properties object
that is passed to producer binding is erroneously overwritten by a
default instance. Addressing this issue.

Resolves #684
2019-09-06 16:28:15 -04:00
Soby Chacko
4e250b34cf Multi binder issues for Kafka Streams table types
When KTable or GlobalKTable binders are used in a multi binder
environment, it has difficulty finding certain beans/properties.
Addressing this issue.

Adding tests.

Resolves #681
2019-09-04 18:10:04 -04:00
Oleg Zhurakousky
10de84f4fe Simplify isSerdeFromStandardDefaults in KeyValueSerdeResolver
Resolves #737
2019-09-04 16:33:54 +02:00
Soby Chacko
309f588325 Addressing Kafka Streams multiple functions issues
Fixing an issue that causes a race condition when multiple functions
are present in a Kafka Streams application by isolating the responsible
proxy factory per function and not shared.

When multiple Kafka Streams functions are present in an application,
it should be possible to set the application id per function.

When an application provides a bean of type Serde, then the binder
should try to introspect that bean to see if it can be matched for
any inbound or outbound serialization.

Adding tests to verify the changes.

Adding docs.

Resolves #734, #735, #736
2019-09-03 19:48:21 -04:00
Oleg Zhurakousky
ca8e086d4c Merge pull request #728 from sobychacko/gh-706
Introducing retry for finding state stores.
2019-08-28 13:37:06 +02:00
Oleg Zhurakousky
cce392aa94 Merge pull request #730 from sobychacko/gh-657
Additional docs for dlq producer properties
2019-08-28 13:36:30 +02:00
Soby Chacko
e53b0f0de9 Fixing Kafka streams binder health indicator tests
Resolves #731
2019-08-28 13:34:34 +02:00
Soby Chacko
413cc4d2b5 Addressing PR review 2019-08-27 17:37:33 -04:00
Soby Chacko
94f76b903f Additional docs for dlq producer properties
Resolves #657
2019-08-26 17:08:01 -04:00
Soby Chacko
8d5f794461 Remove the sleep added in the previous commit 2019-08-26 13:55:52 -04:00
Soby Chacko
00c13c0035 Fixing checkstyle issues 2019-08-26 12:20:39 -04:00
Soby Chacko
e1e46226d5 Fixing a test race condition
The test testDlqWithNativeSerializationEnabledOnDlqProducer fails on CI occasionally.
Adding a sleep of 1 second to give the retrying mechanism enough time to complete.
2019-08-26 12:17:11 -04:00
Walliee
46cbeb1804 Add hook to specify sendTimeoutExpression
resolves #724
2019-08-26 10:47:27 -04:00
Soby Chacko
3e4af104b8 Introducing retry for finding state stores.
During application startup, there are cases when state stores
might take a longer time to initialize. The call to find the
state store in the binder (InteractiveQueryService) will fail in
those situations. Provide a basic retry mechanism that applications
can opt in to.

Adding properties for retrying state store at the binder level.
Adding docs.
Polishing.

Resolves #706
2019-08-23 18:57:36 -04:00
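The opt-in retry could be configured roughly as follows; the property names are assumed from the commit message and not verified against the release:

```properties
# Retry attempts and backoff for locating a state store at startup
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.maxAttempts=3
spring.cloud.stream.kafka.streams.binder.stateStoreRetry.backoffPeriod=2000
```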
Soby Chacko
16bb3e2f62 Testing topic provisioning properties for KStream
Resolves #684
2019-08-23 18:53:35 -04:00
Soby Chacko
8145ab19fb Fix checkstyle issues 2019-08-21 17:26:31 -04:00
Gary Russell
930d33aeba Fix ConsumerProducerTransactionTests with SK snap 2019-08-21 09:46:17 -04:00
Gary Russell
fe2a398b8b Fix mock producer.close() for latest SK snapshot 2019-08-21 09:38:25 -04:00
Soby Chacko
24e1fc9722 Kafka Streams functional support and autowiring
There are issues when a bean declared as a function in the Kafka Streams
application tries to autowire a bean through method parameter injection.
Addressing these concerns.

Resolves #726
2019-08-20 18:40:04 -04:00
Soby Chacko
245a43c1d8 Update Spring Kafka to 2.3.0 snapshot
Ignore two tests temporarily
Fixing Kafka Streams tests with co-partitioning issues
2019-08-20 17:40:07 -04:00
Soby Chacko
4d1fed63ee Fix checkstyle issues 2019-08-20 15:34:05 -04:00
Oleg Zhurakousky
786bd3ec62 Merge pull request #723 from sobychacko/kstream-fn-detector-changes
Function detector condition Kafka Streams
2019-08-16 18:44:39 +02:00
Soby Chacko
183f21c880 Function detector condition Kafka Streams
Use BeanFactoryUtils.beanNamesForTypeIncludingAncestors instead of
getBean from BeanFactory which forces the bean creation inside the
function detector condition. There was a race condition in which
applications were unable to autowire beans and use them in functions
while the detector condition was creating the beans. This change will
delay the creation of the function bean until it is needed.
2019-08-16 11:54:42 -04:00
Soby Chacko
18737b8fea Unignoring tests in Kafka Streams binder
Polishing tests
2019-08-14 12:57:18 -04:00
buildmaster
840745e593 Going back to snapshots 2019-08-13 07:58:51 +00:00
buildmaster
2df0377acb Update SNAPSHOT to 3.0.0.M3 2019-08-13 07:57:57 +00:00
Oleg Zhurakousky
164948ad33 Bumped spring-kafka and si-kafka snapshots to milestones 2019-08-13 09:43:09 +02:00
Oleg Zhurakousky
a472845cb4 Adjusted docs with recent s-c-build updates 2019-08-13 09:29:03 +02:00
Oleg Zhurakousky
12db5fc20e Ignored failing tests 2019-08-13 09:24:30 +02:00
Soby Chacko
46f1b41832 Interop between bootstrap server configuration
When using the Kafka Streams binder, if the application chooses to
provide bootstrap server configuration through Kafka binder broker
property, then allow that. This way either type of broker config
works in Kafka Streams binder.

Resolves #401
Resolves #720
2019-08-13 08:45:05 +02:00
Soby Chacko
64ca773a4f Kafka Streams application id changes
Generate a random application id for the Kafka Streams binder if the user
doesn't set one for the application. This is useful for development
purposes, as it avoids the creation of an explicit application id.
For production workloads, it is highly recommended to explicitly provide
an application id.

The generated application id follows a pattern: the function bean name,
followed by a random UUID string, followed by the literal applicationId.
In the case of StreamListener, instead of the function bean name, it uses
the containing class + StreamListener method name.

If the binder generates the application id, that information will be logged on the console
at startup.

Resolves #718
Resolves #719
2019-08-13 08:43:07 +02:00
Soby Chacko
a92149f121 Adding BiConsumer support for Kafka Streams binder
Resolves #716
Resolves #717
2019-08-13 08:31:50 +02:00
Gary Russell
2721fe4d05 GH-713: Add test for e2e transactional binder
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/713

The problem was already fixed in SIK.

Resolves #714
2019-08-13 08:29:09 +02:00
Soby Chacko
8907693ffb Revamping Kafka Streams docs for functional style
Resolves #703
Resolves #721
Addressing PR review comments

Addressing PR review comments
2019-08-13 08:17:48 +02:00
Oleg Zhurakousky
ca0bfc028f minor cleanup 2019-08-08 20:59:05 +02:00
Soby Chacko
688f05cbc9 Joining two input KStreams
When joining two input KStreams, the binder throws an exception.
Fixing this issue by passing the wrapped target KStream from the
proxy object to the adapted StreamListener method.

Resolves #701
2019-08-08 14:19:21 -04:00
Oleg Zhurakousky
acb4180ea3 GH-1767-core changes related to the disconnection of @StreamMessageConverter 2019-08-06 19:14:49 +02:00
Soby Chacko
77e4087871 Handle Deserialization errors when there is a key (#711)
* Handle Deserialization errors when there is a key

Handle DLQ sending on deserialization errors when there is a key in the record.

Resolves #635

* Adding keySerde information in the functional bindings
2019-08-05 13:58:48 -04:00
Gary Russell
0d339e77b8 GH-683: Update docs 2019-08-02 09:30:36 -04:00
Gary Russell
799eb5be28 GH-683: MessageKey header from payload property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/683

If the `messageKeyExpression` references the payload or entire message, add an
interceptor to evaluate the expression before the payload is converted.

The interceptor is not needed when native encoding is in use because the payload
will be unchanged when it reaches the adapter.
2019-08-01 17:00:05 -04:00
Soby Chacko
2c7615db1f Temporarily ignore a test 2019-08-01 14:32:39 -04:00
Gary Russell
b2b1438ad7 Fix resetOffsets for manual partition assignment 2019-07-19 18:47:41 -04:00
Gary Russell
87399fe904 Updates for latest S-K snapshot
- change `TopicPartitionInitialOffset` to `TopicPartitionOffset`
- when resetting with manual assignment, set the SeekPosition earlier rather than
  relying on direct update of the `ContainerProperties` field
- set `missingTopicsFatal` to `false` in mock test to avoid log messages about missing bootstrap servers
2019-07-17 18:39:53 -04:00
Soby Chacko
f03e3e17c6 Provide a CollectionSerde implementation
Resolves #702
2019-07-11 17:08:25 -04:00
Gary Russell
48aae76ec9 GH-694: Add recordMetadataChannel (send confirm.)
Resolves #694

Also fix deprecation of `ChannelInterceptorAware`.
2019-07-10 10:48:08 -04:00
Soby Chacko
6976bcd3a8 Docs polishing 2019-07-09 15:59:23 -04:00
Gary Russell
912ab14309 GH-689: Support producer-only transactions
Resolves #689
2019-07-09 15:59:23 -04:00
Oleg Zhurakousky
b4fcb791c3 General polishing and clean up after review
Resolves #692
2019-07-09 20:46:01 +02:00
Soby Chacko
ca3f20ff26 Default multiple input/output binding names in the functional support are now separated using underscores (_).
For example: process_in, process_in_0, process_out_0, etc.

Adding cleanup config to ktable integration tests.
2019-07-08 19:15:02 -04:00
Soby Chacko
97ecf48867 BiFunction support in Kafka Streams binder
* Introduce the ability to provide BiFunction beans as Kafka Streams processors.
* Bypass Spring Cloud Function catalogue for bean lookup by directly querying the bean factory.
* Add test to verify BiFunction behavior.

Resolves #690
2019-07-08 15:01:07 -04:00
Soby Chacko
4d0b62f839 Functional enhancements in Kafka Streams binder
This PR introduces some fundamental changes in the way the functional model of Kafka Streams applications
is supported in the binder. For the most part, this PR comprises the following changes.

* Remove the usage of EnableBinding in Kafka Streams based Spring Cloud Stream applications where
  a bean of type java.util.function.Function or java.util.function.Consumer is provided (instead of
  a StreamListener). For StreamListener based Kafka Streams applications, EnableBinding is still
  necessary and required.

* Target types (KStream, KTable, GlobalKTable) will be inferred and bound by the binder through
  a new binder specific bindable proxy factory.

* By default, input bindings are named <functionName>-input for functions with a single input
  and <functionName>-input-0...<functionName>-input-(n-1) for functions with n inputs.

* By default, output bindings are named <functionName>-output for functions with a single output
  and <functionName>-output-0...<functionName>-output-(n-1) for functions with n outputs.

* If applications prefer custom input binding names, the defaults can be overridden through
  spring.cloud.stream.function.inputBindings.<functionName>. Similarly, if custom output binding names
  are needed, that can be done through spring.cloud.stream.function.outputBindings.<functionName>.

* Test changes

* Refactoring and polishing
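The default naming convention in the bullets above can be sketched as follows. This is illustrative only (class and method names are hypothetical), assuming single-input functions get <name>-input and n-input functions get indices 0 through n-1.

```java
import java.util.ArrayList;
import java.util.List;

public class DefaultBindingNames {
    // Sketch of the default input-binding names described above
    // (illustrative, not the binder's code): one input -> <name>-input,
    // n inputs -> <name>-input-0 .. <name>-input-(n-1).
    static List<String> inputBindings(String functionName, int inputCount) {
        List<String> names = new ArrayList<>();
        if (inputCount == 1) {
            names.add(functionName + "-input");
        } else {
            for (int i = 0; i < inputCount; i++) {
                names.add(functionName + "-input-" + i);
            }
        }
        return names;
    }
}
```

Output bindings would follow the same shape with the "-output" suffix.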

Resolves #688
2019-07-08 15:01:07 -04:00
Oleg Zhurakousky
592b9a02d8 Binder changes related to GH-1729
addressed PR comment

Resolves #670
2019-07-08 17:32:52 +02:00
Soby Chacko
4ca58bea06 Polishing the previous upgrade commit
* Upgrade Kafka versions used in tests.
 * EmbeddedKafkaRule changes in tests.
 * KafkaTransactionTests changes (createProducer expectations in Mockito).
 * The Kafka Streams bootstrap servers property now returns an ArrayList instead of a String.
   Making the necessary changes in the binder to accommodate this.
 * Kafka Streams state store tests - retention window changes.
2019-07-06 14:24:09 -04:00
Jeff Maxwell
108ccbc572 Update kafka.version, spring-kafka.version
Update kafka.version to 2.3.0 and spring-kafka.version to 2.3.0.BUILD-SNAPSHOT

Resolves #696
2019-07-06 14:24:09 -04:00
buildmaster
303679b9f9 Going back to snapshots 2019-07-03 14:56:50 +00:00
buildmaster
35434c680b Update SNAPSHOT to 3.0.0.M2 2019-07-03 14:56:00 +00:00
Oleg Zhurakousky
e382604b04 Change s-i-kafka to 3.2.0.M3 2019-07-03 16:38:05 +02:00
Soby Chacko
bfe9529a51 Topic provisioning - Kafka streams table types
* Topic provisioning for consumers ignores topic.properties settings if binding type is KTable or GlobalKTable
* Modify tests to verify the behavior during KTable/GlobalKTable bindings
* Polishing

Resolves #687
2019-07-02 13:17:25 -04:00
Gary Russell
d8df388d9f GH-423 Add option to use KafkaHeaders.TOPIC Header
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/423

Add an option to override the default binding destination with the value
of the header, if present.
2019-06-18 16:28:38 -04:00
Gary Russell
569344afa6 GH-677: Fix resetOffsets with concurrency
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/677

The logic for resetting offsets only on the initial assignment used a simple
boolean; this is insufficient when concurrency is > 1.

Use a concurrent set instead to determine whether or not a particular topic/partition
has been sought.

Also, change the `initial` argument on `KafkaBindingRebalanceListener.onPartitionsAssigned()`
to be derived from a `ThreadLocal` and add javadocs about retaining the state by partition.
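The concurrent-set approach described above can be sketched with a simplified stand-in for the binder's logic; the class and method names here are hypothetical.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SeekTracker {
    // A single boolean flag is shared across consumer threads, so with
    // concurrency > 1 the second container would skip the reset. Tracking
    // each topic/partition in a concurrent set makes the check
    // per-partition and thread-safe.
    private final Set<String> sought = ConcurrentHashMap.newKeySet();

    // Returns true only the first time this topic/partition is assigned,
    // i.e. only then should offsets be reset.
    boolean firstAssignment(String topic, int partition) {
        return sought.add(topic + "-" + partition);
    }
}
```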

**backport to all supported versions** (Except `KafkaBindingRebalanceListener` which did
not exist before 2.1.x)

polishing
2019-06-18 16:13:56 -04:00
Oleg Zhurakousky
c1e58d187a Polishing KafkaStreamsBinderUtils
Added registerBean(..) instead of registerSingleton

Resolves #675
2019-06-18 16:21:49 +02:00
Soby Chacko
12afaf1144 Update spring-cloud-build to 2.2.0 snapshot
Update Spring Integration Kafka to 3.2.0 snapshot
2019-06-17 18:43:49 -04:00
Soby Chacko
bbb455946e Refactor common code in Kafka Streams binder.
Both StreamListener and the functional model require many code paths
that are common. Refactoring them for better use and readability.

Resolves #674
2019-06-17 18:11:11 -04:00
Soby Chacko
5dac51df62 Infer SerDes used on input/output when possible
When using native de/serialization (which is now the default), the binder should infer the common Serdes
used on input and output. The Serdes inferred are: Integer, Long, Short, Double, Float, String, byte[], and
Spring Kafka provided JsonSerde.

Resolves #368

Address PR review comments

Addressing PR review comments

Resolves #672
2019-06-17 14:59:13 +02:00
Oleg Zhurakousky
c1db3d950c Added author tag 2019-06-13 20:19:37 +02:00
Vladislav Fefelov
4d59670096 Use single Executor Service in Kafka Binder health indicator
Resolves #665
2019-06-13 20:19:37 +02:00
Oleg Zhurakousky
d213b4ff2c Merge pull request #669 from sobychacko/gh-651
Use native Serde for Kafka Streams binder as the default
2019-06-12 18:35:03 +02:00
buildmaster
66198e345a Going back to snapshots 2019-06-11 12:07:50 +00:00
buildmaster
df8d151878 Update SNAPSHOT to 3.0.0.M1 2019-06-11 12:07:03 +00:00
Oleg Zhurakousky
d96dc8361b Bumped s-c-build to 2.2.0.M2 2019-06-11 13:47:29 +02:00
Soby Chacko
9c88b4d808 Use native Serde for Kafka Streams binder
In Kafka Streams binder, use the native Serde mechanism as the default instead of the framework
provided message conversion on the input and output bindings. This makes applications
written using the Kafka Streams binder more compatible with native concepts and mechanisms.
Users can still disable native encoding and decoding through configuration.

Amend tests to accommodate the flipped configuration.

Resolves #651
2019-06-10 21:17:38 -04:00
Oleg Zhurakousky
94be206651 Updated POMs for 3.0.0.M1 release 2019-06-10 14:49:49 +02:00
iguissouma
4ab6432f23 Use try with resources when creating AdminClient
Use try with resources when creating AdminClient to release all associated resources.
Fixes gh-660
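The pattern this commit applies can be sketched with a stand-in resource; AdminClient itself is a Kafka class, so `FakeAdminClient` below is a hypothetical stub used only to keep the sketch self-contained.

```java
public class TryWithResourcesDemo {
    // Stand-in for org.apache.kafka.clients.admin.AdminClient, which
    // implements AutoCloseable.
    static class FakeAdminClient implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            closed = true;
        }
    }

    // try-with-resources guarantees close() runs, releasing the client's
    // resources even if the broker interaction throws.
    static boolean demo() {
        FakeAdminClient client = new FakeAdminClient();
        try (FakeAdminClient c = client) {
            // ... describe or create topics here ...
        }
        return client.closed;
    }
}
```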
2019-06-04 09:37:35 -04:00
Walliee
24b52809ed Switch back to use DefaultKafkaHeaderMapper
* update spring-kafka to v2.2.5
* use DefaultKafkaHeaderMapper instead of BinderHeaderMapper
* delete BinderHeaderMapper

resolves #652
2019-05-31 22:15:12 -04:00
Soby Chacko
7450d0731d Fixing ignored tests from upgrade 2019-05-28 15:02:06 -04:00
Oleg Zhurakousky
08b41f7396 Upgraded master to 3.0.0 2019-05-28 16:42:05 +02:00
Oleg Zhurakousky
e725a172ba Bumping version to 2.3.0-B-S for master 2019-05-20 06:33:35 -05:00
buildmaster
b7a3511375 Bumping versions to 2.2.1.BUILD-SNAPSHOT after release 2019-05-07 12:52:13 +00:00
buildmaster
d6c06286cd Going back to snapshots 2019-05-07 12:52:13 +00:00
buildmaster
321331103b Update SNAPSHOT to 2.2.0.RELEASE 2019-05-07 12:50:54 +00:00
Oleg Zhurakousky
f549ae069c Prep doc POM instructions for release 2019-05-07 13:38:19 +02:00
Oleg Zhurakousky
2cc60a744d Merge pull request #649 from sobychacko/gh-633
Adding docs for Kafka Streams functional support
2019-05-01 21:42:36 +02:00
Oleg Zhurakousky
2ddc837c1a polish
Resolves #648
2019-05-01 21:38:59 +02:00
Soby Chacko
816a4ec232 Kafka stream outbound content type polishing
Using Jackson to write content type as bytes so that the outgoing
content type string is properly JSON-compatible.
2019-05-01 21:29:14 +02:00
Soby Chacko
1a19eeabec Content type not carried on the outbound (Kafka Streams)
Fixing the issue of content type not propagated as a Kafka header
on the outbound during message serialization by Kafka Streams binder.

For more info, see these links:

https://stackoverflow.com/questions/55796503/spring-cloud-stream-kafka-application-not-generating-messages-with-the-correct-a
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/456#issuecomment-439963419

Resolves #456
2019-05-01 21:29:14 +02:00
Soby Chacko
d386ff7923 Make KeyValueSerdeResolver public
Resolves #644
Resolves #645
2019-05-01 21:25:23 +02:00
Soby Chacko
9644f2dbbf Kafka Streams multiple function issues
Fixing a bug that occurs when multiple beans are defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636
2019-05-01 21:13:33 +02:00
Oleg Zhurakousky
498998f1a4 polishing
Resolves #643
2019-05-01 21:12:40 +02:00
Soby Chacko
5558f7f7fc Custom state stores with functional Kafka Streams
Introducing the ability to provide custom state stores as regular
Spring beans and use them in the functional model of Kafka Streams binding.

Resolves #642
2019-05-01 21:00:47 +02:00
Oleg Zhurakousky
dec9ff696f polishing
Resolves #637
2019-05-01 20:26:52 +02:00
Soby Chacko
5014ab8fe1 Kafka Streams multiple function issues
Fixing a bug that occurs when multiple beans are defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636

Filter in only Kafka streams function beans
2019-05-01 20:26:52 +02:00
Sabby Anandan
3d3f02b6cc Switch to KafkaStreamsProcessor
It appears we have refactored to rename the class from `KStreamProcessor` to `KafkaStreamsProcessor` [see a5344655cb (diff-4a8582ee2d07e268f77a89c0633e42f5)], but the docs weren't updated. This commit does exactly that.
2019-04-30 13:23:44 -07:00
Soby Chacko
7790d4b196 Addressing PR review comments 2019-04-29 19:29:46 -04:00
Soby Chacko
602aed8a4d Adding docs for Kafka Streams functional support
Resolves #633
2019-04-27 20:46:27 -04:00
Guillaume Lemont
c1f4cf9dc6 Fix message listener container bean naming
With multiplexed topics, the topics can be an array. Since the message listener container
bean name is created by calling toString() on the topics, when it is an array
this produces an unusual bean name. Fixing the issue by building the bean name from
the destination string directly.
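The unusual name comes from Java's default array toString(), which prints a type tag and identity hash rather than the topics. A minimal illustration (joining topics with a comma here is an assumption for the fixed form, not the binder's exact code):

```java
public class BeanNameDemo {
    // toString() on an array yields "[Ljava.lang.String;@<hash>",
    // which is why the container bean name looked unusual.
    static String arrayBasedName(String[] topics) {
        return topics.toString();
    }

    // The fix: build the name from the destination string itself
    // (comma-joining is an assumption, for illustration).
    static String destinationBasedName(String[] topics) {
        return String.join(",", topics);
    }
}
```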
2019-04-26 15:11:20 -04:00
buildmaster
c990140452 Going back to snapshots 2019-04-09 18:10:36 +00:00
buildmaster
383add504a Update SNAPSHOT to 2.2.0.RC1 2019-04-09 18:09:51 +00:00
Oleg Zhurakousky
5f9395a5ec Prepared docs for RC1 2019-04-09 19:23:29 +02:00
Soby Chacko
70eb25d413 Fix failing test 2019-04-09 13:02:57 -04:00
Oleg Zhurakousky
6bc74c6e5c Doc changes related to Rabbit's 'GH-193 Added note on default properties' 2019-04-09 18:56:15 +02:00
Soby Chacko
33603c62f0 Transactional binder producer factory
With a transactional binder, the producer factory should not be destroyed.

Resolves #626
2019-04-08 15:27:06 -04:00
Anshul Mehra
efd46835a1 GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration (#562)
* GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration

Resolves #525

* Update warning messages to be more explicit
2019-04-08 12:44:52 -04:00
Oleg Zhurakousky
3a267bc751 Merge pull request #620 from sobychacko/gh-589
Bean name conflicts (Kafka Streams binder)
2019-03-28 18:33:01 +01:00
Soby Chacko
62e98df0c7 Bean name conflicts (Kafka Streams binder)
When two processors with same name are present in the same application,
there is a bean creation conflict. Fixing that issue.

Add test to verify.
Modify existing tests.

Resolves #589
2019-03-27 16:30:22 -04:00
Matthieu Ghilain
9e156911b4 Fixing typo in documentation
Resolves #619
2019-03-26 18:49:49 +01:00
Oleg Zhurakousky
cd28454818 Updated docs pom back to snapshot 2019-03-25 18:46:05 +01:00
Oleg Zhurakousky
172d469faa Updated spring-doc-resources.version 2019-03-25 16:46:25 +01:00
Oleg Zhurakousky
74cb25a56a Prepared docs POM for M1 2019-03-25 15:39:26 +01:00
Oleg Zhurakousky
7dd4b66f58 Upgraded to s-c-build 2.1.4 2019-03-25 15:30:05 +01:00
Oleg Zhurakousky
908fb77a88 Merge pull request #586 from spring-operator/polish-urls-apache-license-master
URL Cleanup
2019-03-25 14:55:11 +01:00
Soby Chacko
5660c7cf76 Listened partitions assertions
Avoid unnecessary assertions on listened partitions when
auto-rebalancing is enabled but no listened partitions are found.

Polish concurrency assignment when listened partitions are empty

polishing the provisioner

Resolves #512
Resolves #587
2019-03-25 13:40:35 +01:00
Spring Operator
f8c1fb45a6 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://www.apache.org/licenses/ with 1 occurrences migrated to:
  https://www.apache.org/licenses/ ([https](https://www.apache.org/licenses/) result 200).
* [ ] http://www.apache.org/licenses/LICENSE-2.0 with 105 occurrences migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
2019-03-21 13:24:11 -05:00
Oleg Zhurakousky
73d3d79651 Minor cleanup and refactoring after the previous commit
Removed KafkaStreamsFunctionProperties in favor of StreamFunctionProperties provided by core module

Resolves #537
2019-03-21 17:06:27 +01:00
Soby Chacko
792705d304 Fixing merge conflicts
Addressing checkstyle issues
Test fixes
Workaround for Kafka Streams functions unnecessarily getting wrapped inside a FluxFunction.
2019-03-20 17:25:35 -04:00
Soby Chacko
0a48999e3a Initial implementation for writing Kafka Streams applications using
a functional programming model. Simple Kafka Streams processors can
be written using java.util.Function or java.util.Consumer using this
approach.

For example, beans can be defined like the following:

@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
...
}
@Bean
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process() {
...
}
@Bean
public Consumer<KStream<String, Long>> process() {
...
}
etc.

Adding tests to verify the new programming model.

Resolves #428
2019-03-20 17:25:35 -04:00
Spring Operator
b7b93c1352 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed But Review Recommended
These URLs were fixed, but the https status was not OK. However, the https status was the same as the http request or http redirected to an https URL, so they were migrated. Your review is recommended.

* [ ] http://compose.docker.io/ (UnknownHostException) with 2 occurrences migrated to:
  https://compose.docker.io/ ([https](https://compose.docker.io/) result UnknownHostException).

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* [ ] http://docs.confluent.io/2.0.0/kafka/security.html with 2 occurrences migrated to:
  https://docs.confluent.io/2.0.0/kafka/security.html ([https](https://docs.confluent.io/2.0.0/kafka/security.html) result 200).
* [ ] http://github.com/ with 3 occurrences migrated to:
  https://github.com/ ([https](https://github.com/) result 200).
* [ ] http://kafka.apache.org/090/documentation.html with 4 occurrences migrated to:
  https://kafka.apache.org/090/documentation.html ([https](https://kafka.apache.org/090/documentation.html) result 200).
* [ ] http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html with 2 occurrences migrated to:
  https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html ([https](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html) result 200).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ with 1 occurrences migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/) result 301).
* [ ] http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ with 1 occurrences migrated to:
  https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/ ([https](https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/) result 301).
* [ ] http://docs.spring.io/spring-kafka/reference/html/_reference.html with 1 occurrences migrated to:
  https://docs.spring.io/spring-kafka/reference/html/_reference.html ([https](https://docs.spring.io/spring-kafka/reference/html/_reference.html) result 301).
* [ ] http://plugins.jetbrains.com/plugin/6546 with 2 occurrences migrated to:
  https://plugins.jetbrains.com/plugin/6546 ([https](https://plugins.jetbrains.com/plugin/6546) result 301).
* [ ] http://raw.github.com/ with 1 occurrences migrated to:
  https://raw.github.com/ ([https](https://raw.github.com/) result 301).
* [ ] http://eclipse.org with 2 occurrences migrated to:
  https://eclipse.org ([https](https://eclipse.org) result 302).
* [ ] http://eclipse.org/m2e/ with 4 occurrences migrated to:
  https://eclipse.org/m2e/ ([https](https://eclipse.org/m2e/) result 302).
* [ ] http://www.springsource.com/developer/sts with 2 occurrences migrated to:
  https://www.springsource.com/developer/sts ([https](https://www.springsource.com/developer/sts) result 302).
2019-03-20 17:25:12 -04:00
Spring Operator
718cb44de7 URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* http://cloud.spring.io/ with 1 occurrences migrated to:
  https://cloud.spring.io/ ([https](https://cloud.spring.io/) result 200).
* http://cloud.spring.io/spring-cloud-static/ with 1 occurrences migrated to:
  https://cloud.spring.io/spring-cloud-static/ ([https](https://cloud.spring.io/spring-cloud-static/) result 200).
* http://maven.apache.org/xsd/maven-4.0.0.xsd with 6 occurrences migrated to:
  https://maven.apache.org/xsd/maven-4.0.0.xsd ([https](https://maven.apache.org/xsd/maven-4.0.0.xsd) result 200).
* http://repo.spring.io/libs-milestone-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* http://repo.spring.io/libs-snapshot-local with 2 occurrences migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* http://repo.spring.io/release with 1 occurrences migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0 with 12 occurrences
* http://www.w3.org/2001/XMLSchema-instance with 6 occurrences
2019-03-20 09:47:51 -04:00
Soby Chacko
de78bfa00b Remove unused setter
Removing an integer version of required acks from KafkaBinderConfigurationProperties

Resolves #558
2019-03-15 15:02:46 -04:00
Oleg Zhurakousky
27bb33f9e3 adjusting for home.html 2019-03-14 20:25:46 +01:00
Oleg Zhurakousky
ecd8cc587c polishing docs 2019-03-14 20:14:02 +01:00
Oleg Zhurakousky
f506da758f Upgraded doc resources version 2019-03-14 19:45:47 +01:00
Oleg Zhurakousky
12a528fd88 Polishing docs styles 2019-03-14 19:19:42 +01:00
Spring Operator
4b138c4a2f URL Cleanup
This commit updates URLs to prefer the https protocol. Redirects are not followed to avoid accidentally expanding intentionally shortened URLs (i.e. if using a URL shortener).

# Fixed URLs

## Fixed Success
These URLs were switched to an https URL with a 2xx status. While the status was successful, your review is still recommended.

* http://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch migrated to:
  https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch ([https](https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch) result 200).
* http://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin migrated to:
  https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin ([https](https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin) result 200).
* http://www.apache.org/licenses/LICENSE-2.0 migrated to:
  https://www.apache.org/licenses/LICENSE-2.0 ([https](https://www.apache.org/licenses/LICENSE-2.0) result 200).
* http://projects.spring.io/spring-cloud migrated to:
  https://projects.spring.io/spring-cloud ([https](https://projects.spring.io/spring-cloud) result 301).
* http://www.spring.io migrated to:
  https://www.spring.io ([https](https://www.spring.io) result 301).
* http://repo.spring.io/libs-milestone-local migrated to:
  https://repo.spring.io/libs-milestone-local ([https](https://repo.spring.io/libs-milestone-local) result 302).
* http://repo.spring.io/libs-release-local migrated to:
  https://repo.spring.io/libs-release-local ([https](https://repo.spring.io/libs-release-local) result 302).
* http://repo.spring.io/libs-snapshot-local migrated to:
  https://repo.spring.io/libs-snapshot-local ([https](https://repo.spring.io/libs-snapshot-local) result 302).
* http://repo.spring.io/release migrated to:
  https://repo.spring.io/release ([https](https://repo.spring.io/release) result 302).

# Ignored
These URLs were intentionally ignored.

* http://maven.apache.org/POM/4.0.0
* http://maven.apache.org/xsd/maven-4.0.0.xsd
* http://www.w3.org/2001/XMLSchema-instance
2019-03-13 18:38:53 -04:00
Oleg Zhurakousky
0af784aaa9 GH-559 Initial migration of docs to new style
Resolves #559
2019-03-13 16:40:45 +01:00
Soby Chacko
0deb8754bf Remove producer partitioned property
On the producer side, partitioned is a derived property and the applications do not
have to set this explicitly. Remove any explicit references to it from the docs.

Fixes #542
2019-03-05 13:49:23 -05:00
Soby Chacko
95a4681d27 KTable input validation
KTable as input binding doesn't work if @Input is not specified at the parameter level.
Restructuring the input validation in Kafka Streams binder where it checks for declarative
inputs.

Fixes #536
2019-03-04 16:20:18 -05:00
Soby Chacko
b4a2950acd KafkaStreamsStateStore with multiple input bindings
When KafkaStreamsStateStore annotation is used on a method with multiple input bindings,
it throws an exception. The reason is that each successive input binding after the first one
is trying to recreate the store that is already created. Fixing this issue.

Resolves #551
2019-02-28 16:31:38 -05:00
Gary Russell
c5c81f8148 SGH-1616: Add MessageSourceCustomizer
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1616
2019-02-28 13:44:14 -05:00
Arnaud Jardiné
d80e66d9b8 Actuator health for Kafka-streams binder
Add documentation about health indicator

Fix failing tests + add tests for multiple Kafka streams

Polishing

Resolves #544
2019-02-14 16:26:56 -05:00
Soby Chacko
86c9704ef4 Fix metrics with multi binders
Kafka binder metrics are broken with a multi-binder configuration.
Fixing the issue by propagating the MeterRegistry bean into the
binder context from the parent.

Adding test to verify.

Removing the formatter plugin from the parent pom.

Resolves #546
Resolves #549
2019-02-14 09:25:11 +01:00
Gary Russell
63a4acda1b Fix test for SK 2.2.4
Override deserializer getters in test consumer factory because the
defaults incorrectly throw an `UnsupportedOperationException`.

See https://github.com/spring-projects/spring-kafka/pull/963
2019-02-13 16:28:13 -05:00
Gary Russell
31a6f5d7e3 GH-531: Fix test
- resolve conflict with another test that used `EnableBinding(Sink.class)`.
2019-02-07 12:44:32 -05:00
Aldo Sinanaj
4d02c38f70 GH-531: Support tombstones on input/output
GH-531: Set test timeout to 10 seconds

GH-531: Polishing - Fix @StreamListener

Added test for `@StreamListener`.
2019-02-07 09:01:54 -05:00
Oleg Zhurakousky
c22c08e259 GH-1601 satellite changes to the core 2019-02-05 07:08:48 +01:00
Soby Chacko
02e1fec3b4 Checkstyle upgrade
Upgrade checkstyle plugin to use the rules from spring-cloud-build.
Fix all the new checkstyle errors from the new rules.

Resolves #540
2019-02-04 18:11:10 -05:00
Soby Chacko
fb91b2aff9 Temporarily disabling the checkstyle plugin 2019-02-04 12:27:29 -05:00
Oleg Zhurakousky
a881b9a54a Created 2.2.x branch 2019-02-04 10:46:24 +01:00
Aldo Sinanaj
fc92a6b0af GH-389: Topic props: Use .topic instead of .admin
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/389

GH-389: Deprecated [producer/consumer].admin in favor of topic property

GH-389: Fixed comments as suggest by Gary

Polishing - 2 more deprecation warning suppressions
2019-01-29 15:16:38 -05:00
Gary Russell
d65e8ff59d GH-529: Add lz4 and docs for zstd compression
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/529

- Also log exception when getting partition information
2019-01-16 12:49:12 -05:00
buildmaster
9948674f30 Bumping versions to 2.1.1.BUILD-SNAPSHOT after release 2019-01-08 11:53:04 +00:00
buildmaster
cfc1e0e212 Going back to snapshots 2019-01-08 11:53:04 +00:00
buildmaster
4bd206f37f Update SNAPSHOT to 2.1.0.RELEASE 2019-01-08 11:51:00 +00:00
Oleg Zhurakousky
9d1d90e608 Upgraded to spring-kafka 2.2.2 2019-01-07 16:25:23 +01:00
Soby Chacko
0b8a760b5a Fix NPE in Kafka Streams binder
Fixing an NPE when the type for the binder is not explicitly provided in the configuration.

Resolves #516
Resolves #524
Polishing
2019-01-03 20:26:10 +01:00
Gary Russell
d63f9e5fa6 GH-521: Fix pollable source client id
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/521

Previously, all pollable message sources got the client id `message.source`.
MBean registration failed with a warning when multiple pollable sources were present.

Use the binding name as the client id by default, overridable using the `client.id`
consumer property.

**cherry-pick to 2.0.x**

Resolves #523
2019-01-02 20:09:49 +01:00
buildmaster
c51d7e0613 Going back to snapshots 2018-12-20 19:44:59 +00:00
buildmaster
7093551a86 Update SNAPSHOT to 2.1.0.RC4 2018-12-20 19:44:23 +00:00
Oleg Zhurakousky
e9f24b00c8 Adjusting for the type conversion fixes in GH-1564 core 2018-12-18 09:36:28 +01:00
buildmaster
ed1589f957 Going back to snapshots 2018-12-13 20:26:31 +00:00
buildmaster
c5f72250b5 Update SNAPSHOT to 2.1.0.RC3 2018-12-13 20:25:50 +00:00
buildmaster
0dc871aff0 Bumping versions 2018-12-13 16:51:07 +00:00
Soby Chacko
67bcd0749a Polishing 2018-12-12 21:34:22 -05:00
Soby Chacko
6001136ec0 Handle errors for non-existing topics (#514)
* Handle errors for non-existing topics

When topic creation is disabled both on the binder and the broker,
the binder currently throws an NPE. Catching this situation and
throw a more graceful error to the user.

Adding tests to verify.

Resolves #513

* Addressing PR review comments
2018-12-12 12:44:42 -05:00
Arnaud
e93904c238 Add logging for deserialization failures. 2018-12-12 10:44:11 -05:00
Soby Chacko
43c8f02bff User header mapper with MimeTypeJsonDeserializer
Temporarily provide a custom HeaderMapper as part of the binder that is
copied from Spring Kafka so that we can preserve backward compatibility
with older producers. When older producers send non-String types,
the header mapper in Spring Kafka treats them as MimeType. This change will
use a HeaderMapper that reinstates the MimeTypeJsonDeserializer.
When we can consume the Spring Kafka version that provides the HeaderMapper
with this fix in it, we will remove this custom version.

Resolves #509
2018-11-30 10:57:41 -05:00
Soby Chacko
64ea989b08 Kafka Streams environment properties changes
When the Kafka Streams binder is used in multi-binder environments, the properties defined under environment
are not propagated to the auto-configuration class. The environment processing only takes place when the
actual binder configuration is instantiated (for example, KStreamConfiguration), and therefore the environment
properties are unavailable during the earlier auto-configuration. This change makes the environment properties
available during auto-configuration.

Resolves #504
2018-11-28 14:25:15 -05:00
Soby Chacko
525adef3a6 Fixing checkstyle warnings 2018-11-27 16:57:03 -05:00
Soby Chacko
1866feb75f Enable KafkaStreams bootstrap tests
Remove the usage of `ImportBeanDefinitionRegistrar` in Kafka Streams binder
components since the regular use of getBean from the outer context is safe.

Unignore tests

Resolves #501

* Addressing PR review comments
2018-11-27 16:54:31 -05:00
Gary Russell
bf1c366d9b GH-502: Add test for native partitioning
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/502

Before applying the fix for,

https://github.com/spring-cloud/spring-cloud-stream/issues/1531

failed with:

```
org.springframework.messaging.MessageDeliveryException: failed to send Message to channel 'test.output'; nested exception is java.lang.IllegalArgumentException: Partition key cannot be null, failedMessage=GenericMessage [payload=byte[3], headers={kafka_partitionId=5, id=3350a823-c876-f7a9-f98b-fdbd2aaa4c12, timestamp=1542823925354}]
...
Caused by: java.lang.IllegalArgumentException: Partition key cannot be null
	at org.springframework.util.Assert.notNull(Assert.java:198)
	at org.springframework.cloud.stream.binder.PartitionHandler.extractKey(PartitionHandler.java:112)
	at org.springframework.cloud.stream.binder.PartitionHandler.determinePartition(PartitionHandler.java:93)
	at org.springframework.cloud.stream.binding.MessageConverterConfigurer$PartitioningInterceptor.preSend(MessageConverterConfigurer.java:381)
	at org.springframework.integration.channel.AbstractMessageChannel$ChannelInterceptorList.preSend(AbstractMessageChannel.java:589)
	at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:435)
	... 31 more

```

Resolves #503
2018-11-23 11:44:19 +01:00
Oleg Zhurakousky
81f4a861c5 Fixed Kafka Stream binder docs 2018-11-19 16:30:35 +01:00
buildmaster
662aa71159 Going back to snapshots 2018-11-19 14:49:11 +00:00
buildmaster
8683604004 Update SNAPSHOT to 2.1.0.RC2 2018-11-19 14:48:35 +00:00
Soby Chacko
7232839cdf Ignore JAAS tests as they randomly fail on CI 2018-11-17 12:13:29 -05:00
Oleg Zhurakousky
47443f8b70 Fixed tests related to GH-1527 change in core 2018-11-15 19:33:20 +01:00
Soby Chacko
b589f32933 Checkstyle fixes
Reuse checkstyle components from core spring-cloud-stream-tools module.
Addressing checkstyle warnings in kafka streams binder module.

Resolves #489
2018-11-09 18:11:51 -05:00
Soby Chacko
20af89bc00 Fixing health indicator issues
During startup, if Kafka is down, the binder health indicator check
erroneously reports that Kafka is up. Fixing this issue.

Resolves #495
2018-11-09 09:22:26 -05:00
Soby Chacko
0b50a6ce2f Address checkstyle warnings
Addressing checkstyle warnings in spring-cloud-stream-binder-kafka.

Resolves #487
2018-11-08 10:42:26 -05:00
Gary Russell
3a396c6d37 GH-491: Fix TODO in binder for initial seeking
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/491

Use `seekToBeginning/End` instead of `0` and `Long.MAX_VALUE`.
2018-11-07 16:39:33 -05:00
rahulbats
3e39514003 Add log message for deserialization error handler
Add missing logging statement for the logAndContinue
deserialization error handler in Kafka Streams binder.

Polishing.

Resolves #494
2018-11-07 16:29:25 -05:00
Gary Russell
2144d3e26f GH-453: Add binding rebalance listener
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/453

Fix initial value of initial variable
2018-11-07 14:54:02 -05:00
Soby Chacko
2d4e9a113b Use TopicExistsException in provisioner
Instead of checking for the text TopicExistsException in the exception
message, use a strong type check for TopicExistsException via instanceof on
the cause of the exception.

Adding test to verify.

Resolves #209
2018-11-05 16:20:47 -05:00
Soby Chacko
84249727ae Fix boolean expression in resetOffsets
Resolves #481
Resolves #485
2018-11-05 17:43:44 +01:00
Soby Chacko
2fc158003f Address checkstyle issues in core module
Address checkstyle errors and warnings in spring-cloud-stream-binder-kafka-core.
Remove a duplicate dependency declaration from parent pom.

Resolves #483
Resolves #484
2018-11-05 17:41:39 +01:00
Soby Chacko
9c20bb4cdc Remove unnecessary auto config import.
Remove the superfluous import of PropertyPlaceholderAutoConfiguration
in KafkaBinderConfiguration.

Resolves #211
2018-11-02 15:02:32 -04:00
Oleg Zhurakousky
8dbed6b877 Going back to snapshots 2018-10-30 15:01:36 +01:00
Oleg Zhurakousky
46ee483023 Update SNAPSHOT to 2.1.0.RC1 2018-10-30 14:59:54 +01:00
Oleg Zhurakousky
3037b90fd2 Updated spring-kafka version 2018-10-30 14:51:55 +01:00
Alberto Manzaneque
cb2a074448 Fixing KafkaBinderMetrics so that it reports the correct lag for all topics, not just some
Resolves #478
2018-10-30 12:44:48 +01:00
Oleg Zhurakousky
d73ef9f243 Bumping versions 2018-10-30 09:28:07 +01:00
Oleg Zhurakousky
ef2b2db961 Merge pull request #480 from olegz/docs-update
Updated docs to be compliant with spring-cloud
2018-10-30 08:34:24 +01:00
Oleg Zhurakousky
f6c058f9b8 Updated docs to be compliant with spring-cloud
polishing
2018-10-29 13:18:09 +01:00
Soby Chacko
17c11fbb66 Docs update for generic binder/binding properties
Resolves #417
2018-10-22 17:23:01 -04:00
Oleg Zhurakousky
8ab289a715 polishing
Resolves #472

polishing
2018-10-22 11:40:15 +02:00
Soby Chacko
4c15cbc1d6 Support headers in Kafka Streams binder
* Use content type header from the record for binder provided inbound deserialization by
  making use of the header support added in Kafka Streams. If there is a content type set
  on the incoming record, that will get precedence.
* Introduce a new composite Serde class providing a non-native, Spring Cloud Stream specific collection
  of Serde implementations. This is needed so that things like Avro converters that interact with
  the Spring Cloud Stream schema registry server can work.
* Adding tests
* Polishing

Resolves #456, #469
2018-10-22 09:57:05 +02:00
Soby Chacko
1b80cfcf4c Avoid unnecessary re-partitioning due to map calls.
Fix streams being unnecessarily flagged for re-partitioning
by map calls on KStream.

Resolves #412
2018-10-20 18:23:32 -04:00
Soby Chacko
0d5d092f84 Fix merge issues in 40c1a8b8 2018-10-19 15:43:03 -04:00
Soby Chacko
99d739be21 Consumer concurrency settings in Kafka Streams binder (#475)
* Consumer concurrency settings in Kafka Streams binder

* If the consumer concurrency settings are provided at the binding level,
  honor them before falling back to the defaults.
* Allow consumer binding specific broker configurations to be set from the application.
* Test changes.

Resolves #474

* Addressing PR review comments
2018-10-19 15:16:10 -04:00
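As a rough sketch of the binding-level setting described in the commit above (the binding name `input` is hypothetical; check the binder docs for your version):

```properties
# Illustrative: per-binding consumer concurrency; the binder falls back
# to the defaults when this is not set
spring.cloud.stream.bindings.input.consumer.concurrency=3
```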
Alberto Manzaneque
40c1a8b8f1 Setting group in TopicInformation correctly for anonymous consumers.
This makes metrics available for anonymous consumers as well.

A couple of tests to verify that TopicInformation is built correctly

Removing unneeded import
2018-10-19 14:52:09 -04:00
Soby Chacko
c43e45c7ad Addressing startOffset in Kafka Streams binder (#473)
* Addressing startOffset in Kafka Streams binder

Fixing the issue where auto.offset.reset is not honored through startOffset
provided by the individual consumer binding.

Adding integration test to override auto.offset.reset globally at the binder
level to latest and then set startOffset on individual binding to earliest.

Resolves #467

* Addressing PR review comments

* Addressing PR review comments
2018-10-19 13:29:15 -04:00
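A sketch of the scenario the integration test above describes, assuming a binding named `input` (property names follow the Kafka Streams binder's conventions and should be verified against the documentation for your version):

```properties
# Binder-wide default passed through to Kafka: start from the latest offset
spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest
# Per-binding override: this binding starts from the earliest offset
spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest
```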
Oleg Zhurakousky
700c7a1af3 GH-470 Updated extended properties merge logic
Updated extended properties merge logic based on corresponding core updates (see GH-1503)
Resolves #470

polishing
2018-10-15 15:59:45 -04:00
Soby Chacko
9ec0cdd245 Clarify data conversion in Kafka Streams docs
Remove the usage of Messages in kafka streams binder docs.

Resolves #464
2018-10-11 15:03:33 -04:00
Soby Chacko
0227c9ecc9 Regex topics in health indicator
If the destination topic is pattern based, only do the basic up/down check
in the health indicator. In this case, the health indicator does not perform any
partition queries as it does for normal topics.

Resolves #430
2018-10-11 13:05:50 -04:00
Soby Chacko
a5cec9a25c Kafka Health Indicator changes
Disable the Kafka health indicator check if the management.health.binders.enabled
property is set to false. Currently, if this property is disabled, only the
core spring-cloud-stream mechanism of collecting the health checks is disabled.
The individual binders can still register the health check bean, and the Boot health
actuator will still pick it up. If the user turns off health checks by disabling
this property, then the stream app should completely disable any binder-specific
health check.

Resolves #454
2018-10-11 09:39:31 -04:00
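The property named in the commit message above can be set in an application like so:

```properties
# Turn off binder health checks entirely (Boot actuator property)
management.health.binders.enabled=false
```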
Soby Chacko
9443c621b0 Kafka binder JAAS config refactoring (#461)
* Kafka binder JAAS config refactoring

Refactor JaasLoginModuleConfiguration to leverage the ControlFlag enum from
spring-kafka, instead of directly using LoginModuleControlFlag from the JDK.

Re-instate JAAS configuration tests.

Resolves #459

* Fixing tests

* Addressing PR review comments
2018-10-10 14:53:13 -04:00
Soby Chacko
fe7bdd245d Default properties testing.
Remove camelCase in property names
2018-10-08 12:29:20 -04:00
Gary Russell
1d5fcca522 GH-455: Docs for tombstone records
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/455
2018-10-04 16:23:11 -04:00
Soby Chacko
e75967351f Address deprecations in Kafka Streams binder
* Remove the use of deprecated constructors in StreamsBuilderFactoryBean.
* Remove direct usage of StreamsConfig in favor of KafkaStreamsConfiguration.
* Change the way DLQ sending objects are provided to StreamsConfig since
  the binder does not directly deal with StreamsConfig any longer.
* Refactoring and polishing.

Resolves #442
Resolves #457
2018-10-02 19:44:14 -04:00
Oleg Zhurakousky
d44f2348e6 Polishing, renamed method
Resolves #452
2018-10-02 14:43:07 -04:00
Vladimir Porshkevich
3b4bff959f Change KafkaBinderMetrics to a Gauge instead of TimeGauge
The Kafka offset is just a numerical value, not a time.

Resolves #446
2018-10-02 14:22:51 -04:00
Soby Chacko
acb5b47a4f Update spring-kafka version to 2.2.0.RC1 2018-10-02 09:53:51 -04:00
Oleg Zhurakousky
ea810bb9e1 Removed check-style dependencies from main pom 2018-10-02 09:30:08 -04:00
Soby Chacko
e2fc45c4ce Next update version: 2.1.0.BUILD-SNAPSHOT 2018-09-21 11:37:51 -04:00
Soby Chacko
74a044b0c0 2.1.0.M3 Release 2018-09-21 11:00:17 -04:00
Soby Chacko
70f385553a Application ID resolution for Kafka streams binder (#450)
* Application ID resolution for kafka streams binder

Avoid the need to rely on the group property for the application id.
First, check the binding-specific application id; if not set, look at the defaults.
If neither works, fall back to the default application id set by the
Boot auto configuration.

Modify tests.
Update docs.

Resolves #448

* Addressing PR review comments

* Minor polishing

* Addressing PR review comments
2018-09-21 10:17:34 -04:00
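A hedged sketch of the resolution order described above, with hypothetical binding and id names (exact property paths should be confirmed against the Kafka Streams binder docs):

```properties
# Checked first: binding-specific application id
spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=orders-processor
# Checked next: binder-wide default application id
spring.cloud.stream.kafka.streams.binder.applicationId=default-app
# If neither is set, the id from Boot's Kafka Streams auto configuration is used
```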
Soby Chacko
f5739a9c7f Missing beans registration in Kafka Streams binder
There is duplicate code in various binder configurations where we register missing beans.
Consolidate it into a common class that implements ImportBeanDefinitionRegistrar and then
import this class in the binder configurations.

Resolves #445
2018-09-20 11:28:32 -04:00
Soby Chacko
39f5490488 Refactor bootstrap server configuration
There is a common piece of code repeated in both the producer and consumer
configuration where the bootstrap server configuration is populated.
Refactor it into a common method.

Resolves #208
2018-09-19 16:15:19 -04:00
Soby Chacko
d445a2bff9 Docs for GlobalKTable binding
Resolves #443
2018-09-19 15:05:48 -04:00
Oleg Zhurakousky
5446485cbf Updating with changes related to core GH-1484
Resolves #447
2018-09-19 10:45:04 +02:00
Soby Chacko
cc61d916b8 Extended default properties
Allow applications to configure default values for extended producer and
consumer properties across multiple bindings in order to avoid repetition.

Address the changes in both Kafka and Kafka Streams binders.

Add integration test to verify both binding specific and default extended properties.

Resolves #444
Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1477
2018-09-17 20:09:44 -04:00
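A minimal sketch of the default-versus-binding-specific precedence described above (the binding name `input` and the chosen property are illustrative):

```properties
# Default applied to every Kafka binding that does not override it
spring.cloud.stream.kafka.default.consumer.autoCommitOffset=false
# Binding-specific value; takes precedence over the default above
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=true
```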
Soby Chacko
41d9136812 GH-384: Binding support for GlobalKTable
Resolves spring-cloud/spring-cloud-stream-binder-kafka#384

* New binding targets and binder implementation for GlobalKTable within kafka-streams binder
* Refactoring existing structure to accommodate the new binder
* Adding integration test to verify the GlobalKTable behavior

Resolves #384

* Addressing PR review comments

* Update spring-kafka to 2.2.0.M3
Addressing PR review comments
Polishing

* Addressing PR review comments

* Addressing PR review comments
2018-09-14 10:51:15 -04:00
Soby Chacko
71a6a6cf28 Package structure changes for StreamBuilderFactoryBean 2018-09-12 12:05:52 -04:00
Gary Russell
36361d19bf GH-1459: Pollable Consumer and Requeue
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1459

Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1467
2018-09-10 10:22:04 -04:00
Soby Chacko
e869516e3d Handling tombstones in kafka streams binder
Handle tombstones gracefully in the kafka streams binder
Modify tests to verify

Resolves spring-cloud/spring-cloud-stream-binder-kafka#294

* Addressing PR review comments

* Addressing PR review comments
2018-09-05 12:17:54 -04:00
Gary Russell
ea288c62eb GH-435: useNativeEncoding and Transactions
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/435

All common properties should be settable on the transactional producer.
2018-09-04 15:56:26 -04:00
Soby Chacko
afa337f718 Remove deprecations in tests
Resolves #433
2018-09-04 15:28:36 -04:00
Soby Chacko
e4b08e888e Next update version: 2.1.0.BUILD-SNAPSHOT 2018-08-28 10:15:08 -04:00
Soby Chacko
de4b5a9443 2.1.0.M2 Release 2018-08-28 10:04:25 -04:00
Soby Chacko
a090614709 Kafka Streams binder configuration fix
(to address core changes made with regard to Boot 2.1)

Ensure that KafkaStreamsBinderSupportAutoConfiguration runs
after BindingServiceConfiguration from core.
2018-08-14 10:33:51 -04:00
Gary Russell
a3f7ca9756 Fix mock to use poll(Duration) 2018-08-07 14:45:33 -04:00
Soby Chacko
82b07a0120 Fixing checkstyle issues 2018-08-07 14:23:47 -04:00
Soby Chacko
0d0cf8dcb7 Update to spring-cloud-build 2.1.0 snapshot
Spring Boot 2.1.0 snapshot
Spring Kafka 2.2.0/3.1.0 snapshots
Apache Kafka client 2.0.0
Fixing tests
Removing deprecations and removals
Polishing

Resolves #424
2018-08-07 14:02:52 -04:00
Soby Chacko
cbfe03be2f Kafka Streams binder test disabling
Temporarily disabling KafkaBinderBootstrapTest in the Kafka Streams binder
because it makes builds take much longer.
2018-08-03 08:11:52 -04:00
Soby Chacko
1f0c6cabc6 Revert "Kafka Streams binder test disabling"
This reverts commit 0c61cedc85.
2018-08-03 08:10:59 -04:00
Soby Chacko
0c61cedc85 Kafka Streams binder test disabling
Temporarily disabling KafkaBinderBootstrapTest in the Kafka Streams binder
because it makes builds take much longer.
2018-08-02 12:59:02 -04:00
Soby Chacko
44210f1b72 Update Kafka binder metrics docs
Fix the wrong metric name used in the Kafka binder metrics for consumer offset lag.
Update the description.

Resolves #422
2018-08-02 09:57:27 -04:00
Soby Chacko
7007f9494a JAAS initializer regression
Fix the JAAS initializer by setting the missing properties.

Resolves #419

Polishing
2018-07-27 14:34:17 -04:00
Gary Russell
da268bb6dd GH-309: Use actual partition count
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/309

If more partitions exist than those configured, use the actual count.

Resolves #416
2018-07-25 20:31:27 +02:00
Soby Chacko
e03baefbaf Kafka streams binder issues in custom environment
When the Kafka Streams binder is used in the context of a custom environment, possibly
for a multi-binder use case, it is unable to query the outer context because the normal
parent context is absent. This change ensures that the binder context has access
to the outer context so that it can use any beans it needs from it in the KStream or KTable
binder configuration.

Resolves #411
2018-07-25 13:14:47 -04:00
Gary Russell
01396b6573 GH-413: Configure Kafka Streams Cleanup Behavior
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/413
2018-07-24 16:53:57 -04:00
Soby Chacko
3f009a8267 Autoconfigure optimization
Add spring-boot-autoconfigure-processor to the kafka-streams binder for
auto configuration optimization.

Resolves #406
2018-07-24 15:56:48 -04:00
Gary Russell
31bb86b002 GH-404: Synchronize shared consumer
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/404

The fix for issue https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/231
added a shared consumer but the consumer is not thread safe. Add synchronization.

Also, a timeout was added to the `KafkaBinderHealthIndicator` but not to the
`KafkaBinderMetrics` which has a similar shared consumer; add a timeout there.
2018-07-11 13:30:24 -04:00
Soby Chacko
cc2dfd1d08 Upgrade kafka client to 1.1.0 (#405)
* Upgrade kafka client to 1.1.0

Upgrade kafka client to 1.1.0 for both kafka and kafka-streams binders

Resolves #370

* Address review comments
2018-07-09 17:37:58 -04:00
jmaxwell
fc768ba695 GH-402 Add additional data to DLQ message headers 2018-07-02 11:34:46 -04:00
Soby Chacko
38d6deb4d5 Provide programmatic access to KafkaStreams object
Providing access to the underlying StreamsBuilderFactoryBean by making the bean name
deterministic. Earlier, the binder used a UUID to make the stream builder factory
bean names unique in the event of multiple StreamListeners. Switching to use the
method name instead keeps the StreamsBuilderFactoryBean names unique while providing
a deterministic way of gaining programmatic access.

Polishing docs

Fixes #396
2018-06-29 16:45:55 -04:00
Soby Chacko
2a0b9015de Next update version: 2.1.0.BUILD-SNAPSHOT 2018-06-27 14:11:40 -04:00
Soby Chacko
321919abc9 2.1.0.M1 2018-06-27 13:55:58 -04:00
Soby Chacko
b09def9ccc Polishing Kafka Streams binder docs
Resolves #390
2018-06-27 11:25:22 -04:00
Soby Chacko
54d7c333d3 Interactive query - polishing
Renaming InteractiveQueryServices to InteractiveQueryService
2018-06-27 11:03:46 -04:00
Soby Chacko
015b1a7fa1 Fix bad link in docs
Resolves #359
2018-06-27 10:14:17 -04:00
Soby Chacko
020821f471 Fix typo in kafka streams docs
Resolves #400
2018-06-27 09:08:21 -04:00
UltimaPhoenix
2e14ba99e3 Fix unit test
Remove unnecessary semicolon

Replace deprecated method with the new one

Test refactoring
2018-06-27 08:56:48 -04:00
Oleg Zhurakousky
bd5cc2f89d GH-398 added support for container customization
Please see https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/139 for more details

Resolves #398

Added test and polishing
2018-06-27 08:45:31 -04:00
Gary Russell
ca2983c881 GH-373: Support multiplexed consumers
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/373

When a consumer is multiplexed, configure the container to listen to multiple topics.
Also for the polled consumer.

When using a DLQ, determine the queue name from the topic in the failed record (unless
an explicit DLQ name has been provisioned, in which case the same DLQ will be used
for all topics).

Resolves #383
2018-06-12 14:52:51 -04:00
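The multiplexed-consumer arrangement described above might be configured roughly as follows (topic and binding names are hypothetical):

```properties
# Several topics consumed through one binding
spring.cloud.stream.bindings.input.destination=orders,refunds,returns
# Core consumer property: use a single container listening to all destinations
spring.cloud.stream.bindings.input.consumer.multiplex=true
```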
Soby Chacko
1b00179f25 Kafka Streams interactive query enhancements
* When running interactive queries against multiple instances under the same application id,
  ensure that the application can retrieve the proper instance that is hosting the queried state store
* Introduce a new API level service called InteractiveQueryServices
* Perform refactoring to support this enhancement
* Deprecate QueryableStoreRegistry in 2.1.0 in favor of InteractiveQueryServices

Resolves #369
2018-06-08 11:22:31 -04:00
Artem Bilan
f533177d21 GH-381: Remove duplicated SCSt-binder-test dep
Fixes spring-cloud/spring-cloud-stream-binder-kafka#381
2018-05-14 15:25:36 -04:00
Gary Russell
990a5142ce GH-339: Support topic patterns
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/339

- support regex patterns for consumer destinations to consume from multiple topics
- `enableDlq` is not available in this mode

As well as the test case, tested with a boot app...

```
spring.cloud.stream.bindings.input.destination=kbgh339.*
spring.cloud.stream.kafka.bindings.input.consumer.destination-is-pattern=true
```

and

```
2018-04-30 15:12:49.718  : partitions assigned: [kbgh339a-0]
2018-04-30 15:17:46.585  : partitions revoked: [kbgh339a-0]
2018-04-30 15:17:46.655  : partitions assigned: [kbgh339a-0, kbgh339b-0]
```

after adding a new topic matching the pattern.

Doc polishing.
2018-05-10 11:10:49 -04:00
Soby Chacko
13693e8e66 Kafka Streams DLQ related changes
DLQ handling needs to be adjusted in kafka streams binder due to the multiplexing of input topics.
This commit changes it accordingly in KStream and KTable binders.
Add tests to verify.
2018-05-01 12:02:53 -04:00
Soby Chacko
70cd7dc2f9 Upgrade spring cloud stream version to 2.1.0 snapshot
Always set multiplex to true in the Kafka Streams binder
2018-05-01 10:10:42 -04:00
Sarath Shyam
894597309b Fix kafka-streams binder to consume messages from multiple input topics #361
Modified KafkaStreamsStreamListenerSetupMethodOrchestrator#getkStream
so that the KStream object is built from list of topic names
2018-04-30 15:05:47 -04:00
Thomas Cheyney
74689bf007 Reuse Kafka consumer Metrics
Polishing
2018-04-30 14:19:53 -04:00
Gary Russell
d2012c287a GH-360: Improve Binder Producer/Consumer Config
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/360

`producer-configuration` and `consumer-configuration` improperly appear in content-assist.

These are methods used by the binders to get merged configuration data (boot and binder).

Rename the methods and add `producerProperties` and `consumerProperties` to allow
configuration.
2018-04-27 12:37:48 -04:00
slamhan
109295464f QueryableStore retrieval stops at InvalidStateStoreException
If there are multiple streams, there is a code path that throws
a premature InvalidStateStoreException. Fixing that issue.

Fixes #366

Polishing.
2018-04-24 11:13:25 -04:00
Lei Chen
cf41a8a4eb Allow Kafka Streams state store creation
* Allow Kafka Streams state store creation when using process/transform method in DSL
 * Add unit test for state store
 * Address code review comments
 * Add author and javadocs
 * Integration test fixing for state store
 * Polishing
2018-04-20 15:53:43 -04:00
Soby Chacko
77540a2027 Kafka Streams initializr image for docs 2018-04-12 17:51:40 -04:00
Danish Garg
a2592698c9 Changed occurrences of map calls on kafka streams to mapValues
Resolves #357
2018-04-11 15:19:21 -04:00
Oleg Zhurakousky
5543b4ed1e Post-release update to 2.1.0.BUILD-SNAPSHOT 2018-04-06 15:17:12 -04:00
Oleg Zhurakousky
acb8eef43b 2.0.0.RELEASE 2018-04-06 14:17:24 -04:00
Soby Chacko
b3f8cf41ef 2.0 release updates 2018-04-06 13:00:21 -04:00
Gary Russell
cbf693f14e Fix Stub in mock tests to return the group id
Needed for SK 2.1.5.
2018-04-06 12:44:24 -04:00
Gary Russell
39dd048ee5 Fix DLQ and raw/embedded headers
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/351

- DLQ should support embedded headers for backwards compatibility with 1.x apps
- DLQ should support `HeaderMode.none` for when using older brokers with raw data

Forward port of https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/350

Resolves #352
2018-04-06 10:56:15 -04:00
Soby Chacko
0689e87489 Polishing kafka streams docs 2018-04-04 11:25:18 -04:00
Soby Chacko
2c3787faa1 Kafka streams outbound converter changes
Use getMessageConverterForAllRegistered() from CompositeMessageConverterFactory

Resolves #353
2018-04-02 17:19:19 -04:00
Soby Chacko
84f0fb28ae Address IllegalAccessException in Kafka Streams binder
When StreamListener methods are contained in a top level non-public class, Kafka
Streams binder throws an IllegalAccessException. Fixing it by making it accessible.

Resolves #348
2018-04-02 15:32:08 -04:00
Soby Chacko
710ff2c292 Fix NPE in Kafka Streams binder
* Fix NPE in Kafka Streams binder

Fix NPE when user provided consumer properties are missing in kafka streams binder

Resolves #343

* Addressing PR review comments
2018-03-28 13:32:37 -04:00
Oleg Zhurakousky
11a275a299 Merge pull request #342 from bewithvk/patch-1
Fix the broken image link of kafka-binder.
2018-03-24 08:00:28 -04:00
bewithvk
3526a298c8 Fix the broken image link of kafka-binder. 2018-03-21 10:45:43 -05:00
Jay Bryant
e152d0c073 Correcting my own error
I inadvertently added a word (I had two sentences in mind and got parts of both of them).

Resolves #341
2018-03-21 11:01:36 -04:00
Jay Bryant
976b903352 Incorporated feedback from Gary Russell
Gary caught a grammatical goof and pointed out some content that can be removed, because Kafka now has a feature it didn't use to have. That prompted me to rewrite the leader paragraph above that content, too.

Thanks, Gary.
2018-03-20 12:20:15 -05:00
Jay Bryant
9861c80355 Full editing pass for Spring Cloud Stream Kafka Binder
I corrected grammar and spelling and edited for a corporate voice. I also added a link or two.
2018-03-20 09:19:29 -05:00
Soby Chacko
7057e225df Back to 2.0.0.BUILD-SNAPSHOT 2018-03-12 16:31:48 -04:00
Soby Chacko
b8267ea81e 2.0.0.RC3 Release 2018-03-12 15:23:09 -04:00
Gary Russell
2b595b004f GH-337: Add ackEachRecord property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/337
Resolves #338
2018-03-10 11:02:50 -05:00
Gary Russell
de45edc962 Add missing binder headerMapperBeanName to doc. 2018-03-08 11:46:03 -05:00
Gary Russell
2406fe5237 Event Publisher Polishing
Now that the abstract binder makes its event publisher available to subclasses,
use it, if present, instead of the application context.
In most cases, they will be the same object, but the user might override the
publisher.

Resolves #336
2018-03-08 09:23:14 -05:00
Oleg Zhurakousky
b5a0013e1e GH-330 Polishing
Resolves #330
Resolves #334
2018-03-08 09:11:11 -05:00
Gary Russell
c814ad5595 GH-330: Kafka Topic Provisioning Improvements
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/330

- allow override of binder-wide `replicationFactor` for each binding
- allow specific partition/replica configuration
- allow setting `NewTopic.configs()` properties, similar to the consumer and producer
- use a new `AdminClient` for provisioning (and `close()` it) instead of keeping a long-lived connection open.
2018-03-08 09:10:27 -05:00
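A sketch of the provisioning options listed above (the binding name `output` and the values are illustrative; relaxed property binding also accepts camelCase forms):

```properties
# Binder-wide replication factor
spring.cloud.stream.kafka.binder.replication-factor=2
# Per-binding override of the replication factor
spring.cloud.stream.kafka.bindings.output.producer.topic.replication-factor=3
# NewTopic.configs()-style topic properties
spring.cloud.stream.kafka.bindings.output.producer.topic.properties.retention.ms=86400000
```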
Oleg Zhurakousky
def2c3d0ed SCST-GH-1259 Polishing
- added '@DeprecatedConfigurationProperty'
- minor doc polishing

Resolves #335
2018-03-07 16:04:44 -05:00
Gary Russell
10a44d1e44 SCST-GH-1259: Kafka Binder Doc Polishing
Fixes https://github.com/spring-cloud/spring-cloud-stream/issues/1259

Also deprecate properties that are no longer used.

Missed a save.
2018-03-07 16:03:36 -05:00
Oleg Zhurakousky
0de078ca48 GH-326 added KafkaAutoConfiguration to KafkaBinderConfiguration
- added KafkaAutoConfiguration to the @Import of KafkaBinderConfiguration
- removed 'optional' flag for KafkaProperties from KafkaBinderConfigurationProperties
- fixed KafkaBinderAutoConfigurationPropertiesTest

Resolves #326
Resolves #333
2018-03-06 13:30:41 -05:00
Gary Russell
8035e25359 GH-67: Workaround for SK GH-599
Fixes #67

Spring Kafka currently doesn't support `TPIO.SeekPosition` for initial offsets.
Instead, use 0 and `Long.MAX_VALUE` for `BEGINNING` and `END` respectively.

Resolves #331
2018-03-06 13:16:53 -05:00
Artem Bilan
bcf15ed3be GH-328: Make KafkaBinderMetrics bean conditional
Fixes: spring-cloud/spring-cloud-stream-binder-kafka#328

Since we consider the Micrometer dependency optional, it is
better not to expose beans that depend on that library.

* Move `KafkaBinderMetrics` to its own `@Configuration` class with
appropriate conditions on the classpath and beans presence
* Add an `ApplicationContextRunner`-based test-case to achieve a
condition when Micrometer is not in classpath via `FilteredClassLoader`
hook

Resolves #328
Resolves #332
2018-03-06 12:40:16 -05:00
Gary Russell
d37ef750ad GH-67: Reinstate resetOffsets property
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/67

Currently only supported with group management (`autoRebalanceEnabled` - default true).

See https://github.com/spring-projects/spring-kafka/issues/599

Resolves #67
Resolves #329
2018-03-05 20:50:17 -05:00
Oleg Zhurakousky
d8baca5a66 Polishing
Resolves #325
2018-03-05 20:03:24 -05:00
Jon Schneider
b9d7f1f537 Change KafkaBinderMetrics to a tagged TimeGauge 2018-03-05 19:56:12 -05:00
Oleg Zhurakousky
ad819ece92 GH-322 added @Conditional for 'KafkaBinderHealthIndicator' bean 2018-03-02 09:08:39 -05:00
Soby Chacko
e254968eaf Next version: 2.0.0.BUILD-SNAPSHOT 2018-03-01 09:30:11 -05:00
Soby Chacko
25a64e1d85 2.0.0.RC2 Release 2018-03-01 09:15:56 -05:00
Oleg Zhurakousky
353c89ab63 GH-250 Reworked binding event propagation
- Hooked up to the new `BindingCreatedEvent`

Simple polishing

Fixes spring-cloud/spring-cloud-stream-binder-kafka#250
2018-02-28 15:47:47 -05:00
Artem Bilan
227d8f39f6 GH-318: Fix KafkaBinderMetrics for Micrometer
Fixes spring-cloud/spring-cloud-stream-binder-kafka#318

* Use the `ToDoubleFunction`-based `MeterRegistry.gauge()` variant to really
calculate the `lag` at runtime instead of a value statically defined up front
* Add `KafkaBinderActuatorTests` integration test to demonstrate how
`Binder` is declared in the separate child context and how
`KafkaBinderMetrics` is not visible from the parent context.
This test also verifies the real `gauge` value for the `consumer lag`
and should be used in the future to verify the `KafkaBinderMetrics`
exposure removing the code after TODO in the `KafkaMetricsTestConfig`
2018-02-27 18:51:24 -05:00
Oleg Zhurakousky
555d3ccbd8 GH-319 bumped up SIK and SK versions
Resolves #319
2018-02-27 11:30:36 -05:00
Oleg Zhurakousky
d7c5c9c13b GH-1211 made Actuator optional 2018-02-26 23:03:39 -05:00
Sabby Anandan
e23af0ccc1 Revise Kafka Streams docs (#317)
* Revise Kafka Streams docs

* Remove KStream starter-pom reference

* Remove KStreams from aggregate docs
2018-02-26 19:14:54 -05:00
Soby Chacko
3fd93e7de8 Kafka Streams binder autoconfiguration changes
Make KafkaStreamsBinderSupportAutoConfiguration conditional
on BindingService being present in the BeanFactory.
2018-02-23 16:09:58 -05:00
Soby Chacko
90a778da6a Next version: 2.0.0.BUILD-SNAPSHOT 2018-02-23 12:53:17 -05:00
Soby Chacko
99ff426b92 2.0.0.RC1 Release 2018-02-23 12:26:44 -05:00
Rafal Zukowski
0a59b4c628 requestedAcks property changed to a string to allow setting it to "all"
polishing
2018-02-22 17:26:42 -05:00
Artem Bilan
a654382e99 Upgrade to SK-2.1.3 and SIK-3.0.2 2018-02-22 16:40:56 -05:00
Soby Chacko
9a4e86a750 KafkaBinderConfigurationProperties duplicate bean
The ConfigurationProperties bean provided by the Kafka Streams binder
extends from `KafkaBinderConfigurationProperties` used by the Kafka binder.
It creates a conflict when autowiring this bean from the Kafka binder configuration.
This prevents an application from having both binders on the classpath.
Change the creation of this ConfigurationProperties bean so that it
avoids creating the bean using EnableConfigurationProperties and then autowiring,
and instead creates the bean directly using `@Bean`. This prevents the conflict.

Resolves #244
Resolves #315
2018-02-22 08:54:00 -05:00
Nikem
addb40bab5 Added spring-boot-configuration-processor to generate @ConfigurationProperties metadata
Resolves #310
2018-02-16 08:46:45 -05:00
Soby Chacko
cdffcd844c Kafka Streams binder docs updates
Refactoring kafka streams docs into a separate module

Resolves #293
2018-02-16 08:35:52 -05:00
Soby Chacko
a5344655cb Kafka Streams binder name changes
- Rename spring-cloud-stream-binder-kstream to spring-cloud-stream-binder-kafka-streams
 - Corresponding changes in maven pom.xml files
 - Rename relevant classes to prefix with KafkaStreams instead of KStream
 - Corresponding package changes from org.springframework.cloud.stream.kstream to
   org.springframework.cloud.stream.kafka.streams
 - Organize all the configuration property classes in a properties package
 - Remove kstream from all the properties exposed by Apache Kafka Streams binder
 - Classes that need not be public are now moved to package access level
 - Test changes
 - More javadocs to classes

Resolves #246
2018-02-14 14:47:34 -05:00
Gary Russell
72e2aeec2a GH-305: Fix producer initiated transactions
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/305
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/308

Requires https://github.com/spring-projects/spring-integration-kafka/pull/196

`ProducerConfigurationMessageHandler` overrode `handleMessageInternal` to support
producer-initiated transactions. Now that the superclass extends `ARPMH`, this
method is no longer overridable.

Spring Integration Kafka's `KafkaProducerMessageHandler` now detects a transactional
`KafkaTemplate` and will automatically start a transaction if needed, so there is no
longer any need to override that method.

No additional tests needed; see `KafkaTransactionTests`.
2018-02-14 10:17:35 -05:00
Soby Chacko
090125fa71 Multiple Input bindings on same StreamListener method
- In Kafka Streams applications, it is essential to have multiple bindings for
    various target types such as KStream, KTable etc. These changes allow
    more than one type of target binding on a single StreamListener method.
  - Currently supports KStream and KTable target types
  - Refactoring the KStream listener orchestrator strategy
  - Input bindings are initiated through a proxy and later on wrapped with the
    real target created from a common Kafka Streams StreamsBuilder
  - Adding tests to verify multiple input bindings for KStream and KTable

  Resolves #298
  Resolves #303
Polishing

Materializing KTables as state stores
Void return types on kafka streams StreamListener methods
Polishing
2018-02-10 12:16:23 -05:00
Oleg Zhurakousky
0fcb972cb6 GH-301 polished POM
Resolves #302
2018-02-10 12:05:28 -05:00
Gary Russell
dbe19776f5 GH-301: Fix Binder Kafka Properties Overrides
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/301

For the `AdminClient`, arbitrary Kafka properties (set via the binder `configuration` property) should
supersede any boot properties. There was already special handling for the bootstrap servers, but
other arbitrary properties were ignored.

Add tests, including a test to verify the proper override of boot's broker list if appropriate.
2018-02-08 16:54:32 -05:00
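To illustrate the override described in this fix, binder-level `configuration` entries might look like the following sketch (the nested keys are raw Kafka client properties; treat the specific values as hypothetical):

```properties
# Arbitrary Kafka client properties set at the binder level.
# Per this fix, these now supersede the equivalent Spring Boot
# properties for the AdminClient as well (not just bootstrap servers).
spring.cloud.stream.kafka.binder.brokers=broker1:9092,broker2:9092
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
spring.cloud.stream.kafka.binder.configuration.request.timeout.ms=30000
```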
Soby Chacko
f44923e480 Introducing error handling for Kafka Streams binder
- Add support for KIP-161: streams deserialization exception handlers
 - Provide out of the box LogAndContinue and LogAndFail exception handlers
 - Introduce a new exception handler that sends records in error on
   deserialization to a DLQ
 - Ensure that the exception handlers work in both when native decoding
   is enabled (Kafka is doing deserialization) and when native decoding
   is disabled (Binder is doing deserialization)
 - Enhancements to the native decoding logic in the binder.
 - General refactoring and cleanup
 - Adding more tests

 Resolves #275, #281

Introducing multiple StreamListener methods in Kafka Streams apps.

 - Multiple stream listener methods now have individual backing
   StreamBuilder and configuration.
 - Register StreamsBuilderFactoryBean programmatically

 Resolves #299, #300
2018-02-07 05:29:47 -06:00
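A minimal sketch of how these handlers were typically selected via configuration (the `serdeError` property name is taken from the 2.0-era binder docs and may differ in later versions):

```properties
# Choose the deserialization exception handler:
#   logAndFail (default) - log the record in error and stop processing
#   logAndContinue       - log the record in error and keep going
#   sendToDlq            - publish the failed record to a DLQ topic
spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq
```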
Soby Chacko
b31f902905 Back to 2.0.0.BUILD-SNAPSHOT 2018-02-05 13:56:31 -05:00
Soby Chacko
13cc4e0155 Update for 2.0.0.M4 Release 2018-02-05 13:41:45 -05:00
Soby Chacko
5e0688df72 Doc structure changes
Introduce a top level doc that aggregates various sub docs
2018-02-05 12:59:18 -05:00
Oleg Zhurakousky
abd43cfffa fixed KafkaBinderTest to accommodate DefaultPollableMessageSource's signature change 2018-02-04 13:30:41 -05:00
Oleg Zhurakousky
8057b1f764 GH-295 Fixed tests to comply with double-conversion related changes in core
See https://github.com/spring-cloud/spring-cloud-stream/issues/1130
See https://github.com/spring-cloud/spring-cloud-stream/issues/1071

Resolves #295
2018-02-03 08:06:37 -05:00
Soby Chacko
78ae3d1867 Update dependencies
Spring Kafka to 2.1.2.RELEASE
Spring Integration Kafka to 3.0.1.RELEASE
Test compile fixes from micrometer class changes in KafkaMetrics tests
2018-02-02 13:32:02 -05:00
Soby Chacko
a7378c9132 Addressing PR review comments
Resolves #274
2018-01-24 10:51:37 -05:00
Soby Chacko
9500ccfe46 KStream binder - nativeEncoding and Serde changes 2018-01-24 10:07:31 -05:00
Soby Chacko
3a5aed61c9 Redesigning the branching support in KStream binder
Instead of relying on a property based approach, use proper
multiple output bindings to support the branching feature.
This is accomplished through a combination of using `SendTo`
annotation with multiple outputs and overriding the default
StreamListener method setup orchestration in the binder.
2018-01-24 10:07:31 -05:00
Soby Chacko
86146bfe81 Support for KStream branch and split
Resolves #265

- Ensure that branching can be done and any message conversions
  done on all outbound data to multiple topics
- Refine MessageConversion logic in KStream binder
- Refactor message conversion code into a specific class
- Cleanup and refactoring
- Test to verify branching support
- More refinements in the way nativeEncoding/nativeDecoding logic works in KStream binder
- Doc changes for KStream binder
2018-01-24 10:07:31 -05:00
Gary Russell
f2abb59b19 Fix Polling Consumer Tests
- Add a sleep to the while loop
- Unbind at the end of the test

Resolves #289
2018-01-24 10:01:02 -05:00
Gary Russell
36c39974ad Fix default partition header expression 2018-01-22 13:35:24 -05:00
Gary Russell
16a195a887 GH-277: Add Polled Consumer
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/277

Resolves #282
2018-01-18 12:59:12 -05:00
Oleg Zhurakousky
bc562e3a77 Polished KafkaBinderTests
polished KafkaBinderTests to account for changes in core related to partitioning - 00748985d6

Resolves #272
2018-01-17 15:52:23 -05:00
Gary Russell
39dd68a8ad SCST-GH-1166: Support Producer-initiated tx
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1166

Previously, transactions were only supported if initiated by a consumer.

Also fix races in `KafkaBinderTests.testResume()` (messages sent before subscribing).
2018-01-17 14:27:32 -05:00
Oleg Zhurakousky
8daaed43a9 upgraded to spring-kafka 2.1.1.BUILD-SNAPSHOT 2018-01-12 09:17:29 -05:00
Oleg Zhurakousky
d13d92131c GH-279 added Log4J-1 support 2018-01-09 15:46:26 -05:00
Oleg Zhurakousky
0b4ccdefce Added test dependency on log4j:log4j since it is still required by Kafka
Resolves #262
2018-01-03 19:22:18 -05:00
Gary Russell
ae269e729b GH-261: Support idle container events
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/261

This facilitates pause/resume.
2018-01-03 19:22:18 -05:00
Soby Chacko
9182adcf56 KStream binder outbound keys changes
The serializer can fall back to the default common one if there is not
a more specific one provided.

Resolves #271
2018-01-03 14:42:18 -05:00
Soby Chacko
0b9e211e27 Kafka Streams binder cleanup
Aligning semantics of native encoding with core spring cloud stream.
With this change, if nativeEncoding is set, any message conversion is
skipped by the framework. It is up to the application to ensure that
the data sent on the outbound is of type byte[], otherwise, it fails.
If the nativeEncoding is false, which is the default, the binder does
the message conversion before sending the data on the outbound.

Corresponding changes in tests

Resolves #267
2017-12-29 16:16:47 -05:00
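The per-binding toggle described above can be sketched as follows (assuming the `useNativeEncoding` spelling from the core docs; the binding name `output` is hypothetical):

```properties
# true: the binder skips message conversion entirely; the application must
#       send byte[] (or rely on Kafka's own serializers), otherwise it fails.
# false (default): the binder converts the payload before sending outbound.
spring.cloud.stream.bindings.output.producer.useNativeEncoding=true
```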
Gary Russell
a7fe39fef5 Fix Mixed Header Mode Tests
Resolves spring-cloud/spring-cloud-stream#1159

- interceptor was missing

* Fix import order

* Add eclipse import order prefs
2017-12-22 09:48:11 -05:00
Soby Chacko
50b8955dfc Upgrade to Spring Kafka 2.1, Kafka 1.0.0
Resolves #259

- Remove the kafka server dependency from build/binder artifact
- Remove AdminUtilOperation and KafkaAdminUtilOperation
- Rely on AdminClient for provisioning operations
- Update KStream components with the new changes
- Test updates
- Polishing

Add timeout to all the blocking AdminClient calls

Addressing PR review comments

Addressing PR review comments

Addressing PR review

Update SK, SIK to 2.1.0.RELEASE and 3.0.0.RELEASE respectively
Update Kafka Streams class name changes
2017-12-01 11:24:09 -05:00
Oleg Zhurakousky
9a02fa69ac Added Log4J-1 dependency
Added Log4J-1.2.17 dependency since it has been removed by boot yet it is still required by Kafka
2017-11-28 15:47:36 -05:00
Gary Russell
77cbfe2858 GH-251: Configurable inbound adapter converter
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/251

When using native decoding, the message emitted by the binder has no `id` or `timestamp`
headers. This can cause problems if the first SI endpoint uses a JDBC message store,
for example.

Add the ability to configure whether these headers are populated as well as the ability
to inject a custom message converter.
2017-11-15 15:10:06 -05:00
Gary Russell
5db3ede9c4 SCST-GH-1009: Producer properties dynamic bindings
Resolves https://github.com/spring-cloud/spring-cloud-stream/issues/1009

Polishing
2017-11-15 15:04:27 -05:00
Soby Chacko
e59cb569a6 Fix failing integ test on CI 2017-11-15 13:50:40 -05:00
Soby Chacko
b0c4f0cfcd Addressing PR review comments
Resolves #258
2017-11-15 06:54:58 -05:00
Soby Chacko
2f6d5df7bc Addressing PR review comment for gh-58
- Add a check for ignoring dlq producer properties set by the user
  if transaction management is enabled
- Polishing
2017-11-14 14:59:28 -05:00
Soby Chacko
66f194dd93 Configure dlq producer properties
Fixes #58
Resolves #257

- add the ability to configure the dlq producer properties
- new property on KafkaConsumerProperties for dlqProducerProperties
- dlq sender refactoring in Kafka binder
- make dlq type raw so that non byte[] key/payloads can be sent
- add new tests for verifying dlq producer properties
2017-11-14 08:56:41 -05:00
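A hedged example of the new `dlqProducerProperties` property (the binding name `input` and the chosen producer settings are hypothetical):

```properties
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
# Dedicated producer settings for the DLQ publisher added by this change;
# the nested 'configuration' map takes raw Kafka producer properties.
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.retries=3
spring.cloud.stream.kafka.bindings.input.consumer.dlqProducerProperties.configuration.acks=all
```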
Soby Chacko
4cb49f9ee4 Fix failing integ test
Fix a test that was failing from an unrelated change made around
content type in the message header
2017-11-13 16:48:04 -05:00
Gary Russell
035cc1a005 GH-228: Enhance DLQ messages with failure info
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/228
2017-11-09 16:15:07 -05:00
Soby Chacko
b3f42fe67d Update to next version: 2.0.0.BUILD-SNAPSHOT
Update spring-cloud-build parent to 2.0.0.BUILD-SNAPSHOT
2017-11-09 09:41:56 -05:00
Soby Chacko
a9f40ac084 Update to release version: 2.0.0.M3
spring-cloud-build parent to 2.0.0.M4
2017-11-09 09:08:31 -05:00
Gary Russell
db02abe531 Kafka 0.11 Binder Properties
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/226

- headerPatterns
- transaction.*
2017-11-08 12:07:56 -05:00
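A sketch of the two property groups this commit introduces (the exact property paths are an assumption based on the binder reference for this version):

```properties
# Only map outbound headers matching these patterns ('!' negates a pattern).
spring.cloud.stream.kafka.bindings.output.producer.headerPatterns=myHeader*,!internalHeader
# Transactional binder: setting a transactionIdPrefix enables transactions.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
```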
Soby Chacko
f1dc14b5c3 Fixing tests from content type changes in core
Fixes #248

Polishing
2017-11-07 14:21:46 -05:00
Gary Russell
50ce8ca2ba Fix doc typo. 2017-10-31 11:49:37 -04:00
Oleg Zhurakousky
6a312592a4 polishing, fixed style errors and import order
Resolves #237
Resolves #236
2017-10-24 15:25:23 -04:00
Gary Russell
43d786f701 GH-236: Backwards Compatibility
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/236

- Restore the mapped headers configuration for embedding headers
- Configure the header mapper based on the header mode
- On the outbound side, set the mapper to `null` unless the header mode is not null and not `headers`
  this suppresses the channel adapter from setting up headers - for compatibility with < 0.11 brokers
- On the inbound side, set the `BinderHeaders.NATIVE_HEADERS_PRESENT` header if native headers found
- Add a configuration capability allowing the user to provide his/her own header mapper

- Add a test case with 3 producers (embedded, native, and no headers) and a consumer to verify it can
  consume all 3

Requires https://github.com/spring-cloud/spring-cloud-stream/pull/1107

Polishing - fix else clause when configuring the header mapper.
2017-10-24 15:24:17 -04:00
Laur Aliste
68811cad28 GH-231: reuse Kafka consumer HealthIndicator
- lazily create Consumer in KafkaBinderHealthIndicator.

- polishing
2017-10-19 16:05:28 -04:00
Soby Chacko
4382dab8f8 Resetting spring cloud stream version to 2.0.0.BUILD-SNAPSHOT 2017-10-19 12:43:01 -04:00
Soby Chacko
20a8158a56 Next update version: 2.0.0.BUILD-SNAPSHOT 2017-10-19 10:02:18 -04:00
Soby Chacko
3ad0d7c465 Update release versions
2.0.0.M2

Closes #235
2017-10-19 09:13:09 -04:00
Gary Russell
5b3974c932 GH-224: Documentation for Kafka partitioning
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/224
2017-10-18 18:39:44 -04:00
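The partitioning being documented is driven by the core producer properties; a minimal example (the binding name `output` and the header name are hypothetical):

```properties
# Derive the partition key from a header and spread records over 4 partitions.
spring.cloud.stream.bindings.output.producer.partitionKeyExpression=headers['partitionKey']
spring.cloud.stream.bindings.output.producer.partitionCount=4
```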
Soby Chacko
3c7615f7a3 Test structure improvements
Remove the separate test module introduced in 1.x to test different versions of Kafka.
In 2.0, there is a single Kafka version that needs to be tested.
Move all the tests from the test module to the main binder module.
Remove the confluent schema registry integration test from the binder tests as it
will be ported as a sample application. This test currently does the serialization/deserialization twice.

Fix #200
2017-10-18 18:24:03 -04:00
Soby Chacko
8ae0157135 Fix Kafka binder metrics docs
Fixing doc updates missed during rebase
2017-10-16 16:51:48 -04:00
Gary Russell
08658ffa6c GH-223: Deserialization with native encoding
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/223

Add test case; stream fix: https://github.com/spring-cloud/spring-cloud-stream/pull/1095

* Some minor test code polishing

# Conflicts:
#	pom.xml
#	spring-cloud-stream-binder-kafka/src/test/java/org/springframework/cloud/stream/binder/kafka/AbstractKafkaBinderTests.java

* `toString()` for `contentType` header since it is `MimeType` now in SCSt-2.0
2017-10-10 15:25:21 -04:00
Soby Chacko
8d797deaf9 Remove module: spring-cloud-stream-binder-kafka-0.10.2-test 2017-10-05 13:22:31 -04:00
Soby Chacko
561b4b7e73 Checkstyle fixes
Making proxy interface public in KStream binder
General cleanup
2017-10-05 13:16:01 -04:00
Gary Russell
93fdd2ef0f Update to SK 2.0.1.BUILD-SNAPSHOT 2017-10-05 11:54:23 -04:00
Artem Bilan
fd48a1d0eb Fix KafkaMessageChannelBinder errors
https://jenkins.spring.io/blue/organizations/jenkins/spring-cloud-stream-binder-kafka-2.0.x-ci/detail/spring-cloud-stream-binder-kafka-2.0.x-ci/15/pipeline
2017-10-05 11:54:23 -04:00
Soby Chacko
a07a0017bb GH-193: Make 2.0 branch up to date
fixes spring-cloud/spring-cloud-stream-binder-kafka#193

Integrate missed commits and provide some polishing, improvements and fixes

Remove `resetOffsets` option

Fix #170

Use parent version for spring-cloud-build-tools

Add update version script

Fixes for consumer and producer property propagation

Fix #142 #129 #156 #162

- Remove conditional configuration for Boot 1.4 support
- Filter properties before creating consumer and producer property sets
- Restore `configuration` as Map<String,String> for fixing Boot binding
- Remove 0.9 tests

SCSt-GH-913: Error Handling via ErrorChannel

Relates to spring-cloud/spring-cloud-stream#913

Fixes #162

- configure an ErrorMessageSendingRecoverer to send errors to an error channel, whether or not retry is enabled.

Change Test Binder to use a Fully Wired Integration Context

- logging handler subscribed to errorChannel

Rebase; revert s-k to 1.1.x, Kafka to 0.10.1.1

Remove dependency overrides.

POM structure corrections

- move all intra-project deps to dependency management
- remove redundant overrides of Spring Integration Kafka

Remove reference to deleted module

- `spring-cloud-stream-binder-kafka-test-support` was previously
   removed, but it was still added as an unused dependency to the
   project

Remove duplicate debug statement.

unless you really really want to make sure users see this :)

GH-144: Add Kafka Streams Binder

Fix spring-cloud/spring-cloud-stream-binder-kafka#144

Addressing some PR reviews

Remove Java 8 lambda expressions from KStreamBoundElementFactory

Initial - add support for serdes per binding

Fixing checkstyle issues
test source 1.8

Convert integration tests to use Java 7

Internal refactoring

Remove payload serde code in KStreamBoundElementFactory and reuse it from core

Addressing PR comments

cleanup around payload deserialization

Update to latest serialization logic

Extract common properties class for KStream producer/consumer

Addressing PR review comments

* Remove redundant dependencies for KStream Binder

Documentation for KStream binder

* Documentation for KStream binder

Fix #160

* Addressing PR review comments

* Addressing PR review comments

* Addressing PR review comments

Fixes #181

SCSt-GH-916: Configure Producer Error Channel

Requires: https://github.com/spring-cloud/spring-cloud-stream/pull/1039

Publish send failures to the error channel.

Add docs

Revert to Spring Kafka 1.1.6

GH-62: Remove Tuple Kryo Registrar Wrapper

Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/62

No longer needed.

GH-169: Use the Actual Partition Count (Producer)

Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/169

If the configured `partitionCount` is less than the physical partition count on an existing
topic, the binder emits this message:

    The `partitionCount` of the producer for topic partJ.0 is 3, smaller than the actual partition count of 8 of the topic.
    The larger number will be used instead.

However, that is not true; the configured partition count is used.

Override the configured partition count with the actual partition count.

0.11 Binder

Initial Commit

- Transactional Binder

Version Updates

- Headers support

KStreams and 0.11

GH-188: KStream Binder Properties

KStream binder: support class for application level properties

Provide commonly used KStream application properties for convenient access at runtime

Fix #188

Since windowing operations are common in KStream applications, the TimeWindows object is made
available as a first-class bean (using auto-configuration). This bean is only created if the
relevant properties are provided by the user.

Kstream binder: producer default Serde changes

Change the way the default Serde classes are selected for key and value
in producer when only one of those is provided by the user.

Fix #190

KStream binder cleanup,
merge cleanup

re-update kafka version

2.0 related changes

Fix tests
Upgrade Kstream tests

converting anonymous classes to lambda expressions

Renaming Kafka-11 qualifier from test module
Refactoring test class names

cleanup
adding .jdk8 files

Fix KafkaBinderMetrics in 2.0

Fix #199

Addressing PR review comments

Addressing PR review comments
2017-10-05 11:54:23 -04:00
Marius Bogoevici
62b40b852f Set version to 2.0.0.BUILD-SNAPSHOT 2017-10-05 11:19:26 -04:00
Marius Bogoevici
c396c5c756 Release 2.0.0.M1 2017-10-05 11:18:34 -04:00
Marius Bogoevici
b20f4a0e08 Re-add Spring Kafka version 2017-10-05 11:17:55 -04:00
Marius Bogoevici
77f4bc3fb8 Use Spring Boot and dependencies provided by Spring Cloud Build 2017-10-05 11:17:55 -04:00
Marius Bogoevici
2aa8e9eefa Update version to 2.0.0.BUILD-SNAPSHOT
- Update Spring Boot to version 2.0.0.BUILD-SNAPSHOT
- Set Kafka version to 0.10.2
- Remove tests for Kafka 0.9 and 0.10.0
2017-10-05 11:17:55 -04:00
Gary Russell
e3460d6fce Update POMs to 1.3.1.BUILD-SNAPSHOT 2017-09-29 16:25:08 -04:00
Gary Russell
29bb8513c0 Update POMs to 1.3.0.RELEASE 2017-09-29 15:57:53 -04:00
Gary Russell
69227166c7 GH-215: Add timeout to health indicator
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/215

* Shutdown the executor.

* Polishing - PR Comments

* Re-interrupt thread.

* More Polishing
2017-09-29 14:47:45 -04:00
Gary Russell
4ff4507741 GH-206: Close Consumer/Producer in provisioning
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/206

Close the consumer and producer after retrieving the current partition count.

**Cherry pick/back port to 0.11 and 1.2.x, 2.0.x**

* Destroy the Producer Factory
2017-09-28 10:51:15 -04:00
Soby Chacko
f2e1b63460 GH-201: Move metrics doc to the main overview doc
Fixes #201
2017-09-27 12:57:19 -04:00
Soby Chacko
73f1ed9523 Separate each sentence in metrics docs 2017-09-26 20:02:05 -04:00
Soby Chacko
dc7662e17d Add missing documentation for windowing properties
Cleanup test

Fix #196
2017-09-26 16:31:42 -04:00
Gary Russell
b76fff31b8 Back to 1.3.0.BUILD-SNAPSHOT 2017-09-13 11:10:07 -04:00
Gary Russell
1f4f0c3858 Update POMs to 1.3.0.RC1; s-c-build to 1.3.5
Also SIK 2.1.2.RELEASE
2017-09-13 09:39:36 -04:00
Soby Chacko
1aecd02404 Kstream binder: producer default Serde changes
Change the way the default Serde classes are selected for key and value
in producer when only one of those is provided by the user.

Fix #190
2017-09-12 12:40:42 -04:00
Soby Chacko
6485bd2abd GH-188: KStream Binder Properties
KStream binder: support class for application level properties

Provide commonly used KStream application properties for convenient access at runtime

Fix #188

Since windowing operations are common in KStream applications, the TimeWindows object is made
available as a first-class bean (using auto-configuration). This bean is only created if the
relevant properties are provided by the user.
2017-09-12 12:40:00 -04:00
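A sketch of the windowing properties that drive the auto-configured TimeWindows bean (property names per the era's binder docs; values are in milliseconds and are illustrative):

```properties
# The TimeWindows bean is only auto-configured when these are present.
spring.cloud.stream.kafka.streams.timeWindow.length=5000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=1000
```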
Gary Russell
02913cd177 GH-169: Use the Actual Partition Count (Producer)
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/169

If the configured `partitionCount` is less than the physical partition count on an existing
topic, the binder emits this message:

    The `partitionCount` of the producer for topic partJ.0 is 3, smaller than the actual partition count of 8 of the topic.
    The larger number will be used instead.

However, that is not true; the configured partition count is used.

Override the configured partition count with the actual partition count.
2017-08-22 13:34:21 -04:00
Gary Russell
0865602141 GH-62: Remove Tuple Kryo Registrar Wrapper
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/62

No longer needed.
2017-08-22 13:24:51 -04:00
Gary Russell
790b141799 Fixes #181
SCSt-GH-916: Configure Producer Error Channel

Requires: https://github.com/spring-cloud/spring-cloud-stream/pull/1039

Publish send failures to the error channel.

Add docs

Revert to Spring Kafka 1.1.6
2017-08-22 12:56:16 -04:00
Vinicius Carvalho
60e620e36e Set version to 1.3.0.BUILD-SNAPSHOT 2017-07-31 16:00:00 -04:00
Vinicius Carvalho
09d35cd742 Release 1.3.0.M2 2017-07-31 15:57:46 -04:00
Soby Chacko
91aec59342 Documentation for KStream binder
* Documentation for KStream binder

Fix #160

* Addressing PR review comments

* Addressing PR review comments

* Addressing PR review comments
2017-07-31 11:00:20 -04:00
Soby Chacko
7884d59bdc GH-144: Add Kafka Streams Binder
Fix spring-cloud/spring-cloud-stream-binder-kafka#144

Addressing some PR reviews

Remove Java 8 lambda expressions from KStreamBoundElementFactory

Initial - add support for serdes per binding

Fixing checkstyle issues
test source 1.8

Convert integration tests to use Java 7

Internal refactoring

Remove payload serde code in KStreamBoundElementFactory and reuse it from core

Addressing PR comments

cleanup around payload deserialization

Update to latest serialization logic

Extract common properties class for KStream producer/consumer

Addressing PR review comments

* Remove redundant dependencies for KStream Binder
2017-07-27 18:36:03 -04:00
Tom Ellis
9134e101df Remove duplicate debug statement.
unless you really really want to make sure users see this :)
2017-07-24 10:53:54 -04:00
Marius Bogoevici
512fd9830e Update Spring Cloud Stream version to 1.3.0.BUILD-SNAPSHOT 2017-07-19 11:08:54 -04:00
Marius Bogoevici
94fcead9d5 Set version to 1.3.0.BUILD-SNAPSHOT 2017-07-19 10:56:00 -04:00
Marius Bogoevici
756fde75ae Release 1.3.0.M1 2017-07-19 10:48:34 -04:00
Marius Bogoevici
6cc0b08dcd Update script - fix snapshot reporting 2017-07-19 10:47:30 -04:00
Marius Bogoevici
5dccbb9690 Remove reference to deleted module
- `spring-cloud-stream-binder-kafka-test-support` was previously
   removed, but it was still added as an unused dependency to the
   project
2017-07-19 10:39:14 -04:00
Marius Bogoevici
f473b3b42f POM structure corrections
- move all intra-project deps to dependency management
- remove redundant overrides of Spring Integration Kafka
2017-07-19 10:23:57 -04:00
Marius Bogoevici
a146255433 Update Spring Cloud Stream version to 1.3.0.M1 2017-07-19 10:10:06 -04:00
Gary Russell
07e15c141e SCSt-GH-913: Error Handling via ErrorChannel
Relates to spring-cloud/spring-cloud-stream#913

Fixes #162

- configure an ErrorMessageSendingRecoverer to send errors to an error channel, whether or not retry is enabled.

Change Test Binder to use a Fully Wired Integration Context

- logging handler subscribed to errorChannel

Rebase; revert s-k to 1.1.x, Kafka to 0.10.1.1

Remove dependency overrides.
2017-07-18 17:39:47 -04:00
Marius Bogoevici
001cc5901e Fixes for consumer and producer property propagation
Fix #142 #129 #156 #162

- Remove conditional configuration for Boot 1.4 support
- Filter properties before creating consumer and producer property sets
- Restore `configuration` as Map<String,String> for fixing Boot binding
- Remove 0.9 tests
2017-07-18 12:13:26 -04:00
Marius Bogoevici
b7b5961f7d Add update version script 2017-07-17 23:40:12 -04:00
Marius Bogoevici
d19c1b7611 Use parent version for spring-cloud-build-tools 2017-07-17 22:37:42 -04:00
Marius Bogoevici
c627cefee0 Remove resetOffsets option
Fix #170
2017-07-16 23:16:10 -04:00
Soby Chacko
3081a3aa44 Add test module for Kafka 0.10.2.1
Fix #161
2017-06-26 15:22:43 -04:00
Ilayaperumal Gopinathan
143b96f79d Allow consumer group.id to be overridden
- This change will allow the consumer's group.id to be overridden from possible options (Spring Boot Kafka properties, binder configuration properties, etc.)

Resolves #149
2017-06-21 17:27:16 -04:00
Marius Bogoevici
a0f386f06f Log faulty topic name during validation
Fix #157
2017-06-18 11:21:01 -04:00
Henryk Konsek
9ad04882c8 Added Kafka binder lag metrics.
Fix #152

Metrics should be divided by group ID of the binding.

TopicInformation should carry optional group of the consumer.

Improved tests coverage.

Added lag metric documentation.
2017-06-07 23:55:53 -04:00
Gary Russell
705213efe8 Add Tests for Property Override Hierarchy 2017-05-26 15:33:52 -04:00
Doug Saus
7355ada461 Fix ConsumerConfig.AUTO_OFFSET_RESET_CONFIG
Move the setting of ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to before the setting of custom Kafka properties. This allows users to override this behavior via spring.cloud.stream.kafka.binder.configuration

Updated fix to also allow the spring.cloud.stream.kafka.bindings.<channel>.consumer.startOffset value to override the anonymous-consumer-based value if set

Moved setting of auto.offset.reset based on binder configuration below setting of Kafka properties so that it has higher precedence.

Trailing Spaces
2017-05-26 15:15:47 -04:00
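The override order this fix establishes can be sketched as follows (both settings shown together; the per-binding `startOffset` wins for its binding, and the binding name `input` is hypothetical):

```properties
# Binder-wide default, applied via raw Kafka client properties...
spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest
# ...which a per-binding startOffset can still override.
spring.cloud.stream.kafka.bindings.input.consumer.startOffset=latest
```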
Soby Chacko
d4aaf78089 Kafka binder test modules refactoring
Confluent repository needed for tests is moved to spring-cloud-stream-binder-kafka-test-support

Following dependencies are moved to spring-cloud-stream-binder-kafka-test-support

 *spring-cloud-stream-schema
 *kafka-avro-serializer
 *kafka-schema-registry

spring-cloud-stream-binder-kafka-test-support is brought into spring-cloud-stream-binder-kafka only in test scope

Fixes #121

Kafka binder test modules restructuring

New module for kafka - 0.10.1.1 integration tests
0.10.0.1 tests extend from 0.10.1.1 base class for tests
Removing the empty spring-cloud-stream-binder-kafka-test-support module
2017-05-26 15:07:59 +05:30
Soby Chacko
53e38902c9 Update to next major release line: 1.3.0.BUILD-SNAPSHOT
Fix #139
2017-05-17 11:24:06 -04:00
Soby Chacko
a7a0a132ea Next update: 1.2.2.BUILD-SNAPSHOT 2017-05-16 16:09:33 -04:00
Soby Chacko
ee4f3935ec 1.2.1.RELEASE 2017-05-16 16:04:22 -04:00
Ilayaperumal Gopinathan
4e0107b4d2 Minor polishing 2017-05-16 22:28:43 +05:30
Soby Chacko
8f8a1e8709 Simplify the logic in UnexpectedPartitionCountHandling classes
Remove the inner interface and classes for UnexpectedPartitionCountHandling
Directly use auto rebalancing flag to control consumer idling

Resolves #135

Adding tests for auto-rebalancing enabled/disabled and under partitioned scenario

correct the logging message for idle consumers
2017-05-16 22:28:43 +05:30
Marius Bogoevici
c68ea8c570 Documentation polishing 2017-05-16 12:10:44 -04:00
Henryk Konsek
61e7936978 Add key expression support to Kafka producer
Fix #134

- Add `messageKeyExpression` producer property.

Added key expression unit test.

Added messageKeyExpression documentation.
2017-05-16 12:06:49 -04:00
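A minimal example of the new `messageKeyExpression` property (a SpEL expression evaluated against the outgoing message; the header name is hypothetical):

```properties
# Use a message header as the Kafka record key on the outbound binding.
spring.cloud.stream.kafka.bindings.output.producer.messageKeyExpression=headers['key']
```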
Ilayaperumal Gopinathan
5c70e2df43 Binding properties should override boot properties
- This is especially applicable to `group.id` and `auto.offset.reset` which get set when configuring the consumer properties
 - Spring Boot Kafka properties are updated as binder configuration properties which get the least precedence
 - Update test

Resolves #122
2017-05-16 18:01:59 +05:30
Simon Flandergan
f280edc9ce disable partition sanity check if auto rebalancing
unexpected partitions handling
variable naming
remove unnecessary partition count calculation
revert to fail fast for producer partition count
added missing final modifier
Polishing
fixing checkstyle issues
2017-05-12 15:53:39 -04:00
Ilayaperumal Gopinathan
2c9afde8c6 Remove log format setting in KafkaEnvironmentPostProcessor
Resolves #132
2017-05-11 23:37:38 +05:30
Ilayaperumal Gopinathan
6a8c0cd0c6 Fix KafkaHealthIndicator kafka properties
- Override KafkaHealthIndicator's `bootstrap.servers` property only when it is not set already
 - Add test

Resolves #123
2017-04-18 10:52:15 +05:30
Soby Chacko
ec73f2785d Next version: 1.2.1.BUILD-SNAPSHOT 2017-04-04 16:00:17 -04:00
Soby Chacko
8982f896fd Release 1.2.0.RELEASE 2017-04-04 15:28:56 -04:00
Marius Bogoevici
005ec51d8b Improve isolation of Kafka tests
Fix #120
2017-04-04 12:10:36 -04:00
Marius Bogoevici
47aaf29e3e Fix regression on TopicExistsException
Fix #116
2017-03-31 13:30:13 -04:00
Marius Bogoevici
88d4b8eef5 Replace core docs Git reference with relative link
Fix #114
2017-03-29 13:47:23 -04:00
Ilayaperumal Gopinathan
66eb15a8e2 Make Kafka DLQ topic name configurable
- Make it configurable as a Kafka consumer properties
 - Add test to verify the configuration
 - Update doc

Resolves #108

Test fix to use the same partition count for producer/dlq

Fix KafkaTopicProvisioner in case of configurable dlq topic name

 - Update test
2017-03-24 16:59:13 -04:00
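A sketch of the configurable DLQ topic name introduced here (the default name followed the `error.<destination>.<group>` convention; the binding name `input` and topic name are hypothetical):

```properties
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
# Override the default DLQ topic name (error.<destination>.<group>).
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=my-app-dlq
```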
Soby Chacko
466400cdb7 Adding test for raw mode with String payload 2017-03-14 18:37:55 -04:00
Marius Bogoevici
bff0a072dc Set version to 1.2.0.BUILD-SNAPSHOT 2017-03-13 19:52:12 -04:00
Marius Bogoevici
7fad2951f7 Release 1.2.0.RC1 2017-03-13 19:51:32 -04:00
Marius Bogoevici
e7c5f750da Use Spring Cloud Build 1.3.1.RELEASE 2017-03-13 19:43:28 -04:00
Gary Russell
1f28adaf4c GH-21: Docs For Replaying Dead-Lettered Messages
Resolves #21

Use BinderHeaders.PARTITION_OVERRIDE
2017-03-13 17:04:48 -04:00
Barry Commins
91bdee65ec Configure the health indicator ConsumerFactory to use binder properties
Fixes #79

Replaced try...finally in KafkaBinderHealthIndicator with try-with-resources

Changed KafkaBinderHealthIndicatorTest for consistency
2017-03-06 12:57:05 -05:00
Ilayaperumal Gopinathan
6a31e9c94f Support KafkaProperties from Spring Boot autoconfiguration
- If KafkaProperties is available from KafkaAutoConfiguration, retrieve the kafka properties and set as `KafkaBinderConfigurationProperties`' configuration property.
This way, these properties are available at the top level and per-binding properties could still override when setting producer/consumer properties during binding operation
 - Add tests to verify the scenarios

Resolves #73

Fix deprecation warning

Remove unused constant

Polishing
2017-03-06 11:32:10 -05:00
Marius Bogoevici
b541fca68f Fix 'spring-cloud-stream-binder-kafka-test-support'
Set scope compile on 'spring-kafka-test' so that the
dependency is transitive.
2017-02-21 19:11:31 -05:00
Marius Bogoevici
9d23e6a8fe Move JLine exclusion to the parent 2017-02-21 17:05:29 -05:00
Marius Bogoevici
89249b233f Set dependencies to snapshots 2017-02-21 16:15:40 -05:00
Marius Bogoevici
1998d5e7e8 Set version to 1.2.0.BUILD-SNAPSHOT 2017-02-21 16:14:42 -05:00
Marius Bogoevici
e1906711a8 Release 1.2.0.M2 2017-02-21 16:13:22 -05:00
Marius Bogoevici
78213c98e8 Update dependencies to milestone version 2017-02-21 16:12:17 -05:00
Marius Bogoevici
c10206d41b Remove JLine as dependency
Fixes #104
2017-02-21 16:01:14 -05:00
Ilayaperumal Gopinathan
73eda0ddd0 Add doc for startOffset usage
- Clarify based on the consumerGroup property

Resolves #48
2017-02-21 12:56:14 -05:00
Marius Bogoevici
5b51d7cce3 Reinstate spring-cloud-stream-binder-kafka-test-support
Fix #102

We've removed it in favour of using `spring-kafka-test`
directly but that is somewhat inconvenient to the end
user. Also, it's still part of the release train BOM
so it's an oversight on our end.
2017-02-20 23:18:40 +05:30
Soby Chacko
2530229fb5 Consolidate Rawmode tests as part of KafkaBinderTests
Enable all version specific Kafka tests to run raw mode tests as well
2017-02-18 11:03:25 -05:00
Gary Russell
70cba0ae03 Fix KafkaTopicProvisioner Inner Classes
- make destination inner classes `static` since they have no dependence on the outer class
- make ctors package-protected - private ctors for inner classes cause the compiler to
  generate a synthetic ctor with an additional synthetic parameter
2017-02-17 21:29:45 -05:00
Ilayaperumal Gopinathan
3eb9493cba Revert "Support Spring Boot KafkaProperties"
This reverts commit 89b75b4734.
2017-02-17 11:58:34 +05:30
Ilayaperumal Gopinathan
89b75b4734 Support Spring Boot KafkaProperties
- If KafkaProperties for the KafkaAutoConfiguration is set, then use those properties for the KafkaMessageChannelBinder
 - For the KafkaProperties that have explicit default, override with the KafkaMessageChannelBinder defaults when the properties are not set by any of the property sources
 - Support the existing Kafka Producer/Consumer properties if they are set
 - Add tests

Resolves #73

Address review comments

 - Add javadoc for deprecated fields
 - Add doc

polishing
2017-02-16 10:48:20 -05:00
Soby Chacko
dc0bc18f37 Provisioning SPI Related Changes
- Separate Kafka topic provisioning from the binder using the SPI provided in core
 - Refactor the common entities needed for Kafka into a new core module
 - Refactor binder code to reflect the SPI changes
 - Make DLQ provisioning match with partition properties of the topic

Fixes #86
Fixes #50

Addressing PR review comments

cleanup - addressing PR comments

Using ProvisioningException

Simpler toString() in provisioner

Fix accidental removal of set metadataOperations in binder/tests

cleanup

Moving metadata operations completely to provisioner
Related refactoring

Removing ProvisioningProvider cast

Fix Usage of BinderHeaders.PARTITION_HEADER
2017-02-14 13:51:54 -05:00
Marius Bogoevici
8fafc2605e Renamed Kafka test artifacts 2017-02-08 18:06:12 -05:00
Soby Chacko
83eca9b734 Setting default Kafka baseline to 0.10.1.1
- Starting 1.2, this is going to be the default Kafka version
 - Swap AdminUtilsOperation implementation for 0.9 and 0.10 (Former using reflection and straight API call for latter)
 - In addition to Kafka, Spring-Kafka and SI Kafka dependencies are updated as well.

Separate test artifact for 0.9.0.1.

 - Pull out 0.9 based tests from the default binder module to this new module.

Separate test artifact for 0.10.0.1.

Fixes #88
Fixes #81
2017-02-06 11:34:11 -05:00
Gary Russell
b2d579b5b6 GH-85: Remove Duplicate Dep. From Pom
Fixes #85
2017-02-03 10:13:22 -05:00
Soby Chacko
c389fa3fa4 Ignore 'TopicExistsException' on creation
Fixes a race condition when topics are created in a stream
and a TopicExistsException is thrown.

Fixes #83

Explicitly checking for TopicExistsException
Cleanup
2017-01-30 18:39:49 -05:00
Ilayaperumal Gopinathan
50e0a8b30e Set log config for Kafka Binder
- set ZKClient logging level to `ERROR`
  - set the config implementations of `AbstractConfig` to `ERROR` logging level

This resolves #56
This resolves #52

Remove producer/consumer logging config
- Update copyright year
2017-01-23 13:38:52 -05:00
Marius Bogoevici
7daa0f1b72 Remove package-info.java 2017-01-11 22:07:13 -05:00
Marius Bogoevici
f3e38961c5 Set version to 1.2.0.BUILD-SNAPSHOT 2017-01-11 13:47:07 -05:00
Marius Bogoevici
7b06ff9b17 Release 1.2.0.M1
Signed-off-by: Marius Bogoevici <mbogoevici@pivotal.io>
2017-01-11 13:45:40 -05:00
Marius Bogoevici
ddb520ac02 Fix parent for 0.10 tests 2017-01-11 13:45:01 -05:00
Marius Bogoevici
cb01dda042 Update mvnw 2017-01-11 12:02:03 -05:00
Marius Bogoevici
525aea8a98 Use Spring Cloud Build 1.3.1.BUILD-SNAPSHOT 2017-01-11 10:51:15 -05:00
Marius Bogoevici
01a715a9a9 Observe bufferSize setting
Fixes #77

Signed-off-by: Marius Bogoevici <mbogoevici@pivotal.io>
2017-01-04 22:37:47 -05:00
Marius Bogoevici
21ab8090b8 Remove explicit version references for Stream and Kafka binder artifacts
Signed-off-by: Marius Bogoevici <mbogoevici@pivotal.io>
2017-01-04 22:32:58 -05:00
Marius Bogoevici
e0bba57a51 Update master to Dalston/Stream 1.2 2016-12-18 11:55:30 -05:00
Marius Bogoevici
1668f0c694 Update to using 'lifecycle' instead of 'endpoint' for tests 2016-12-16 18:13:33 -05:00
Ilayaperumal Gopinathan
36c90aa8c0 Ability to override deserializer in Kafka consumer
- When binding the consumer, the kafka consumer should not be set to use `ByteArrayDeserializer` for both key/value deserializer. Instead, they need to be used as the default values. Any extended consumer properties for key/value deserializer should override this default deserializer.

 - Add test

This resolves #55

Add tests for custom/native serialization

 - Test using built-in serialization without using kafka native serialization (ie. both the serializer/de-serializer are set to use ByteArraySe/Deserializer)
 - Test using custom serializer by explicitly setting value.deserializer for both Kafka producer/consumer properties
 - Test avro message conversion and Kafka Avro Serializer using Confluent Schema Registry
    - Given pre-released registry versions have some bugs that block the testing and `3.0.1` requires Kafka 0.10, this test is only in the Kafka 0.10 binder

Update Schema based custom serializer/de-serializer tests

Fix import

Handle unbind during tests appropriately
2016-11-15 18:20:34 -05:00
Artem Bilan
943a7ae148 Upgrade to Spring Kafka 1.0.5
Also fix `mvnw.cmd` to be really for Windows
2016-11-08 11:34:34 -05:00
Soby Chacko
a1633ab241 Adding latency after unbind in testResume to ensure offsets are updated in broker 2016-10-24 21:34:56 -04:00
Soby Chacko
68fd3fb6d7 enable auto rebalance in testResume 2016-10-24 11:05:54 -04:00
Soby Chacko
df6d472e99 Enable auto rebalancing in testCompression
Reduce logging levels in tests
2016-10-21 15:21:09 -04:00
Soby Chacko
23ae2b3088 setting producer to publish synchronously in tests 2016-10-18 22:50:36 -04:00
Soby Chacko
2e595f34f2 Increase logging in tests 2016-10-18 22:48:55 -04:00
Soby Chacko
acb1c60216 test cleanup 2016-10-18 19:45:44 -04:00
Marius Bogoevici
275f5cbb8e Use Spring Cloud Build as parent
Fixes #60
2016-10-17 20:10:06 +05:30
Soby Chacko
79643586dd Ensure that topic is created where broker automatically creates it 2016-10-14 17:31:18 -04:00
Ilayaperumal Gopinathan
b292f81d46 Set producer listener obtained from the context
- When creating the KafkaMessageChannelBinder set the producer listener that is being autowired in the configuration
 - Add test

This resolves #49
2016-10-04 19:03:40 +05:30
Marius Bogoevici
8ad98c09b9 Revert to Spring Cloud Stream 1.1.1.BUILD-SNAPSHOT 2016-09-22 15:29:58 -04:00
bamboo
624b039da5 [artifactory-release] Next development version 2016-09-22 01:26:20 +00:00
bamboo
b08888862a [artifactory-release] Release version 1.1.0.RELEASE 2016-09-22 01:26:20 +00:00
Marius Bogoevici
e13e1b1b1b Fix Kafka tests
As ConsumerProperties are initialized with different defaults
tests must configure them with the same logic as ChannelBindingServiceProperties.
2016-09-21 20:32:36 -04:00
Marius Bogoevici
513c27beb8 Update to Spring Cloud Stream 1.1.0.RELEASE 2016-09-21 19:01:19 -04:00
Soby Chacko
5f1e6caeab Manual acking tests
Adding tests to verify manual acking when auto commit
offsets are disabled/enabled.
Docs changes.
2016-09-21 18:52:27 -04:00
Soby Chacko
ed4416e7c7 Upgrading spring-kafka and spring-integration-kafka dependencies, docs changes 2016-09-21 18:10:46 -04:00
Nadelson, Mark
546fe69346 set Kafka ackMode to MANUAL if autoCommitOffset = false
polishing
2016-09-21 18:04:02 -04:00
Marius Bogoevici
30c868e51b Configure JAAS settings for Kafka via Spring Boot
Fixes #38

Adds support for programmatic configuration of JAAS
properties as an alternative to using a JAAS file.

Add a placeholder JAAS configuration file which is ignored
by the configuration but lets Kafka 0.9 clients connect
2016-09-20 11:05:21 -04:00
Marius Bogoevici
6506018a2c Update README.adoc 2016-09-20 09:15:17 -04:00
Marius Bogoevici
09ec23b4bc Minor polishing and refactoring 2016-09-14 15:24:38 -04:00
Soby Chacko
8f4fbd0da6 Making AdminUtils optional
polishing, docs
Addressing PR review comments
Polishing: Refactoring topic creation methods
Addressing PR comments: Use ClassUtils, Kafka AppInfoParser and polishing
2016-09-13 17:34:57 -04:00
bamboo
323206130a [artifactory-release] Next development version 2016-09-07 23:38:15 +00:00
bamboo
b944b594b5 [artifactory-release] Release version 1.1.0.RC2 2016-09-07 23:38:15 +00:00
Marius Bogoevici
4fa567fec7 Fix documentation issue 2016-09-07 19:14:31 -04:00
bamboo
16d6584774 [artifactory-release] Next development version 2016-09-07 22:38:01 +00:00
bamboo
ea514bd72d [artifactory-release] Release version 1.1.0.RC1 2016-09-07 22:38:01 +00:00
Marius Bogoevici
79feac15d0 Upgrade to Spring Cloud Stream 1.1.0.RC1 2016-09-07 18:25:49 -04:00
Marius Bogoevici
3a27a5ec75 Remove redundant spring-kafka declaration 2016-09-07 18:25:22 -04:00
Marius Bogoevici
7308bd4991 Polishing
Removed optional flag
2016-09-07 17:53:49 -04:00
Soby Chacko
866eaf4a25 Add support for drop-in support for Kafka 0.10
Reflectively detect AdminUtils from Kafka 0.9 and 0.10
Introduce Kafka 10 conditionals
2016-09-07 17:52:53 -04:00
bamboo
b37e7b37d2 [artifactory-release] Next development version 2016-08-26 03:30:26 +00:00
223 changed files with 29257 additions and 7371 deletions

2
.gitignore vendored

@@ -18,7 +18,9 @@ _site/
*.ipr
*.iws
.idea/*
*/.idea
.factorypath
dump.rdb
.apt_generated
artifacts
.sts4-cache

0
.jdk8 Normal file

117
.mvn/wrapper/MavenWrapperDownloader.java vendored Normal file

@@ -0,0 +1,117 @@
/*
 * Copyright 2007-present the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import java.net.*;
import java.io.*;
import java.nio.channels.*;
import java.util.Properties;

public class MavenWrapperDownloader {

    private static final String WRAPPER_VERSION = "0.5.6";

    /**
     * Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided.
     */
    private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/"
            + WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar";

    /**
     * Path to the maven-wrapper.properties file, which might contain a downloadUrl property to
     * use instead of the default one.
     */
    private static final String MAVEN_WRAPPER_PROPERTIES_PATH = ".mvn/wrapper/maven-wrapper.properties";

    /**
     * Path where the maven-wrapper.jar will be saved to.
     */
    private static final String MAVEN_WRAPPER_JAR_PATH = ".mvn/wrapper/maven-wrapper.jar";

    /**
     * Name of the property which should be used to override the default download url for the wrapper.
     */
    private static final String PROPERTY_NAME_WRAPPER_URL = "wrapperUrl";

    public static void main(String[] args) {
        System.out.println("- Downloader started");
        File baseDirectory = new File(args[0]);
        System.out.println("- Using base directory: " + baseDirectory.getAbsolutePath());

        // If the maven-wrapper.properties exists, read it and check if it contains a custom
        // wrapperUrl parameter.
        File mavenWrapperPropertyFile = new File(baseDirectory, MAVEN_WRAPPER_PROPERTIES_PATH);
        String url = DEFAULT_DOWNLOAD_URL;
        if (mavenWrapperPropertyFile.exists()) {
            FileInputStream mavenWrapperPropertyFileInputStream = null;
            try {
                mavenWrapperPropertyFileInputStream = new FileInputStream(mavenWrapperPropertyFile);
                Properties mavenWrapperProperties = new Properties();
                mavenWrapperProperties.load(mavenWrapperPropertyFileInputStream);
                url = mavenWrapperProperties.getProperty(PROPERTY_NAME_WRAPPER_URL, url);
            } catch (IOException e) {
                System.out.println("- ERROR loading '" + MAVEN_WRAPPER_PROPERTIES_PATH + "'");
            } finally {
                try {
                    if (mavenWrapperPropertyFileInputStream != null) {
                        mavenWrapperPropertyFileInputStream.close();
                    }
                } catch (IOException e) {
                    // Ignore ...
                }
            }
        }
        System.out.println("- Downloading from: " + url);

        File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH);
        if (!outputFile.getParentFile().exists()) {
            if (!outputFile.getParentFile().mkdirs()) {
                System.out.println(
                        "- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'");
            }
        }
        System.out.println("- Downloading to: " + outputFile.getAbsolutePath());
        try {
            downloadFileFromURL(url, outputFile);
            System.out.println("Done");
            System.exit(0);
        } catch (Throwable e) {
            System.out.println("- Error downloading");
            e.printStackTrace();
            System.exit(1);
        }
    }

    private static void downloadFileFromURL(String urlString, File destination) throws Exception {
        if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) {
            String username = System.getenv("MVNW_USERNAME");
            char[] password = System.getenv("MVNW_PASSWORD").toCharArray();
            Authenticator.setDefault(new Authenticator() {
                @Override
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication(username, password);
                }
            });
        }
        URL website = new URL(urlString);
        ReadableByteChannel rbc = Channels.newChannel(website.openStream());
        FileOutputStream fos = new FileOutputStream(destination);
        fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
        fos.close();
        rbc.close();
    }
}



@@ -1 +1,2 @@
-distributionUrl=https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.3.3/apache-maven-3.3.3-bin.zip
+distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip
+wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar


@@ -21,7 +21,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
-<url>http://repo.spring.io/libs-snapshot-local</url>
+<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -29,7 +29,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
-<url>http://repo.spring.io/libs-milestone-local</url>
+<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -37,7 +37,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
-<url>http://repo.spring.io/release</url>
+<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -47,7 +47,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
-<url>http://repo.spring.io/libs-snapshot-local</url>
+<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -55,7 +55,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
-<url>http://repo.spring.io/libs-milestone-local</url>
+<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>


@@ -1,6 +1,6 @@
Apache License
Version 2.0, January 2004
-http://www.apache.org/licenses/
+https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
@@ -192,7 +192,7 @@
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
-http://www.apache.org/licenses/LICENSE-2.0
+https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,3 +1,857 @@
# spring-cloud-stream-binder-kafka
////
DO NOT EDIT THIS FILE. IT WAS GENERATED.
Manual changes to this file will be lost when it is generated again.
Edit the files in the src/main/asciidoc/ directory instead.
////
Spring Cloud Stream Binder implementation for Kafka
:jdkversion: 1.8
:github-tag: master
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: https://raw.githubusercontent.com/{github-repo}/{github-tag}
:github-code: https://github.com/{github-repo}/tree/{github-tag}
image::https://circleci.com/gh/spring-cloud/spring-cloud-stream-binder-kafka.svg?style=svg["CircleCI", link="https://circleci.com/gh/spring-cloud/spring-cloud-stream-binder-kafka"]
image::https://codecov.io/gh/spring-cloud/spring-cloud-stream-binder-kafka/branch/{github-tag}/graph/badge.svg["codecov", link="https://codecov.io/gh/spring-cloud/spring-cloud-stream-binder-kafka"]
image::https://badges.gitter.im/spring-cloud/spring-cloud-stream-binder-kafka.svg[Gitter, link="https://gitter.im/spring-cloud/spring-cloud-stream-binder-kafka?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge"]
// ======================================================================================
//= Overview
[partintro]
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Spring Cloud Stream concepts map onto Apache Kafka specific constructs.
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Apache Kafka Binder
=== Usage
To use the Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----
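With the dependency in place, the binder is configured through ordinary Spring Cloud Stream properties. The following is a minimal sketch; the binding name `input` and the destination, group, and broker values are assumptions chosen for illustration, not values prescribed by the binder:

```properties
# Bind the channel named "input" to a Kafka topic
spring.cloud.stream.bindings.input.destination=orders
# Consumer group for the binding (also affects the startOffset default)
spring.cloud.stream.bindings.input.group=order-service
# Broker list for the Kafka binder
spring.cloud.stream.kafka.binder.brokers=localhost:9092
```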
=== Overview
The following image shows a simplified diagram of how the Apache Kafka binder operates:
.Kafka Binder
image::{github-raw}/docs/src/main/asciidoc/images/kafka-binder.png[width=300,scaledwidth="50%"]
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
=== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
==== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (for example, `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.
spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (both producers and consumers) passed to all clients created by the binder.
Because these properties are used by both producers and consumers, usage should be restricted to common properties -- for example, security settings.
Unknown Kafka producer or consumer properties provided through this configuration are filtered out and not allowed to propagate.
Properties here supersede any properties set in boot.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.consumerProperties::
Key/Value map of arbitrary Kafka client consumer properties.
In addition to supporting known Kafka consumer properties, unknown consumer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.headers::
The list of custom headers that are transported by the binder.
Only required when communicating with older applications (<= 1.3.x) with a `kafka-clients` version < 0.11.0.0. Newer versions support headers natively.
+
Default: empty.
spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.
spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
See the Kafka documentation for the producer `acks` property.
+
Default: `1`.
spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder configures on topics on which it produces or consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount * concurrency` settings of the producer (if either is larger).
+
Default: `1`.
spring.cloud.stream.kafka.binder.producerProperties::
Key/Value map of arbitrary Kafka client producer properties.
In addition to supporting known Kafka producer properties, unknown producer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
Default: `1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder creates new partitions if required.
If set to `false`, the binder relies on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder fails to start.
+
Default: `false`.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix::
Enables transactions in the binder. See `transaction.id` in the Kafka documentation and https://docs.spring.io/spring-kafka/reference/html/_reference.html#transactions[Transactions] in the `spring-kafka` documentation.
When transactions are enabled, individual `producer` properties are ignored and all producers use the `spring.cloud.stream.kafka.binder.transaction.producer.*` properties.
+
Default `null` (no transactions)
spring.cloud.stream.kafka.binder.transaction.producer.*::
Global producer properties for producers in a transactional binder.
See `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` and <<kafka-producer-properties>> and the general producer properties supported by all binders.
+
Default: See individual producer properties.
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
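The interplay between `brokers` and `defaultBrokerPort` described above (entries may omit the port, in which case the default is appended) can be sketched in plain Java. This is a hypothetical helper for illustration only, not binder code, and the host names are made up:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class BrokerListSketch {

    // Appends the default port to any broker entry that does not specify one,
    // mirroring the documented behavior of defaultBrokerPort.
    static String resolveBrokers(String brokers, int defaultPort) {
        return Arrays.stream(brokers.split(","))
                .map(String::trim)
                .map(b -> b.contains(":") ? b : b + ":" + defaultPort)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // "host1" has no port, so the default 9092 is appended;
        // "host2:9093" already has one and is left untouched.
        System.out.println(resolveBrokers("host1,host2:9093", 9092));
    }
}
```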
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
When `false`, each consumer is assigned a fixed set of partitions based on `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex`.
This requires both the `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex` properties to be set appropriately on each launched instance.
The value of the `spring.cloud.stream.instanceCount` property must typically be greater than 1 in this case.
+
Default: `true`.
ackEachRecord::
When `autoCommitOffset` is `true`, this setting dictates whether to commit the offset after each record is processed.
By default, offsets are committed after all records in the batch of records returned by `consumer.poll()` have been processed.
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
+
Default: `false`.
autoCommitOffset::
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` of type `org.springframework.kafka.support.Acknowledgment` is present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
+
Default: `true`.
autoCommitOnError::
Effective only if `autoCommitOffset` is set to `true`.
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
If set to `true`, it always auto-commits (if auto-commit is enabled).
If not set (the default), it effectively has the same value as `enableDlq`, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
+
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
+
Default: `false`.
startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
Also see `resetOffsets` (earlier in this list).
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configured by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to Kafka consumer properties, other configuration properties needed by the application can be passed here as well -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available as Kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (that is, `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
This must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
Indicates which standard headers are populated by the inbound channel adapter.
Allowed values: `none`, `id`, `timestamp`, or `both`.
Useful if using native deserialization and the first component to receive a message needs an `id` (such as an aggregator that is configured to use a JDBC message store).
+
Default: `none`
converterBeanName::
The name of a bean that implements `RecordMessageConverter`. Used in the inbound channel adapter to replace the default `MessagingMessageConverter`.
+
Default: `null`
idleEventInterval::
The interval, in milliseconds, between events indicating that no messages have recently been received.
Use an `ApplicationListener<ListenerContainerIdleEvent>` to receive these events.
See <<pause-resume>> for a usage example.
+
Default: `30000`
destinationIsPattern::
When true, the destination is treated as a regular expression `Pattern` used to match topic names by the broker.
In that case, topics are not provisioned, and `enableDlq` is not allowed, because the binder does not know the topic names during the provisioning phase.
Note, the time taken to detect new topics that match the pattern is controlled by the consumer property `metadata.max.age.ms`, which (at the time of writing) defaults to 300,000ms (5 minutes).
This can be configured using the `configuration` property above.
+
Default: `false`
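+
For example, a sketch of a pattern-based subscription (the binding name `input` and the `order.*` pattern are illustrative only):
+
[source]
----
spring.cloud.stream.bindings.input.destination=order.*
spring.cloud.stream.kafka.bindings.input.consumer.destinationIsPattern=true
# detect newly created matching topics within one minute instead of the default five
spring.cloud.stream.kafka.bindings.input.consumer.configuration.metadata.max.age.ms=60000
----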
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, by using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.bindings.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` are presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by the Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, and `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
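For example, a sketch of the settings involved in enabling batch mode and tuning batch sizes (the binding name `input` and the values shown are illustrative only):

[source]
----
spring.cloud.stream.bindings.input.consumer.batch-mode=true
spring.cloud.stream.kafka.bindings.input.consumer.configuration.max.poll.records=500
spring.cloud.stream.kafka.bindings.input.consumer.configuration.fetch.min.bytes=1024
spring.cloud.stream.kafka.bindings.input.consumer.configuration.fetch.max.wait.ms=500
----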
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
+
Default: `16384`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message, used to determine the time to wait for an acknowledgment when synchronous publishing is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
headerPatterns::
A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka `Headers` in the `ProducerRecord`.
Patterns can begin or end with the wildcard character (asterisk).
Patterns can be negated by prefixing with `!`.
Matching stops after the first match (positive or negative).
For example, `!ask,as*` passes `ash` but not `ask`.
`id` and `timestamp` are never mapped.
+
Default: `*` (all headers - except the `id` and `timestamp`)
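+
The first-match-wins behavior can be sketched in plain Java (an illustrative model of the matching rules described above, not the binder's actual implementation):
+
[source, java]
----
import java.util.List;

public class HeaderPatternDemo {

    // Illustrative model: evaluate patterns in order; the first match (positive or negative) decides.
    static boolean matches(String header, List<String> patterns) {
        for (String p : patterns) {
            boolean negate = p.startsWith("!");
            String pattern = negate ? p.substring(1) : p;
            boolean hit;
            if (pattern.endsWith("*")) {
                hit = header.startsWith(pattern.substring(0, pattern.length() - 1));
            }
            else if (pattern.startsWith("*")) {
                hit = header.endsWith(pattern.substring(1));
            }
            else {
                hit = header.equals(pattern);
            }
            if (hit) {
                return !negate;
            }
        }
        return false; // no pattern matched: the header is not mapped
    }

    public static void main(String[] args) {
        System.out.println(matches("ash", List.of("!ask", "as*"))); // true
        System.out.println(matches("ask", List.of("!ask", "as*"))); // false
    }
}
----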
configuration::
Map with a key/value pair containing generic Kafka producer properties.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
+
Default: `false`.
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);`
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), the binder fails to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy` and `lz4`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, by using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
Use the corresponding input channel name for your example.
[source]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }

}
----
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
====== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
[source,bash]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
--spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
====== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
The following properties can be used to configure the login context of the Kafka client:
spring.cloud.stream.kafka.binder.jaas.loginModule::
The login module name. Not necessary to be set in normal cases.
+
Default: `com.sun.security.auth.module.Krb5LoginModule`.
spring.cloud.stream.kafka.binder.jaas.controlFlag::
The control flag of the login module.
+
Default: `required`.
spring.cloud.stream.kafka.binder.jaas.options::
Map with a key/value pair containing the login module options.
+
Default: Empty map.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:
[source,bash]
----
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.autoCreateTopics=false \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
--spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
--spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
--spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
--spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----
The preceding example represents the equivalent of the following JAAS file:
[source]
----
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="kafka-client-1@EXAMPLE.COM";
};
----
If the required topics already exist on the broker or will be created by an administrator, autocreation can be turned off and only the client JAAS properties need to be set.
NOTE: Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the `-Djava.security.auth.login.config` system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
NOTE: Be careful when using the `autoCreateTopics` and `autoAddPartitions` with Kerberos.
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.
The following simple application shows how to pause and resume:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }

}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions because the retries run in the original transaction, which may be rolled back; in that case, any published records are rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (for example, a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then you would use normal Spring transaction support (for example, `TransactionTemplate` or `@Transactional`):
====
[source, java]
----
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }

}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
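A minimal sketch of such a configuration, assuming the application also defines a JDBC `DataSourceTransactionManager` (the bean and parameter names here are illustrative):

====
[source, java]
----
@Bean
public ChainedTransactionManager chainedTransactionManager(
        KafkaTransactionManager<byte[], byte[]> kafkaTransactionManager,
        DataSourceTransactionManager dataSourceTransactionManager) {

    // transactions are started in the order given and committed in reverse order
    return new ChainedTransactionManager(dataSourceTransactionManager, kafkaTransactionManager);
}
----
====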
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
* `failedMessage`: The Spring Messaging `Message<?>` that failed to be sent.
* `record`: The raw `ProducerRecord` that was created from the `failedMessage`.
There is no automatic handling of producer exceptions (such as sending to a <<kafka-dlq-processing, Dead-Letter queue>>).
You can consume these exceptions with your own Spring Integration flow.
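For example, a sketch of such a flow using a service activator (the channel name is derived from the binding destination; `myDestination.errors` here is illustrative only):

====
[source, java]
----
@ServiceActivator(inputChannel = "myDestination.errors")
public void handleSendFailure(ErrorMessage message) {
    KafkaSendFailureException ex = (KafkaSendFailureException) message.getPayload();
    // the raw record that could not be sent
    System.out.println("Send failed for record: " + ex.getRecord());
}
----
====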
[[kafka-metrics]]
=== Kafka Metrics
The Kafka binder module exposes the following metric:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer metrics library.
The metric contains the consumer group information, the topic, and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
[[kafka-tombstones]]
=== Tombstone Records (null record values)
When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
====
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
        @Payload(required = false) Customer customer) {
    // customer is null if a tombstone record
    ...
}
----
====
[[rebalance-listener]]
=== Using a KafkaRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
----
public interface KafkaBindingRebalanceListener {

    /**
     * Invoked by the container before any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked by the container after any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked when partitions are initially assigned or after a rebalance.
     * Applications might only want to perform seek operations on an initial assignment.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     * @param initial true if this is the initial assignment.
     */
    default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions, boolean initial) {
    }

}
----
====
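For example, a sketch of a listener that replays each assigned partition from the beginning, but only on the initial assignment (subsequent rebalances resume from the committed offsets):

====
[source, java]
----
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            if (initial) {
                // seek only once; later rebalances pick up the committed offsets
                consumer.seekToBeginning(partitions);
            }
        }

    };
}
----
====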
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
= Appendices
[appendix]
[[building]]
== Building
:jdkversion: 1.7
=== Basic Compile and Test
To build the source you will need to install JDK {jdkversion}.
The build uses the Maven wrapper so you don't have to install a specific
version of Maven. To enable the tests, you should have Kafka server 0.9 or above running
before building. See below for more information on running the servers.
The main build command is
----
$ ./mvnw clean install
----
You can also add `-DskipTests` if you like, to avoid running the tests.
NOTE: You can also install Maven (>=3.3.3) yourself and run the `mvn` command
in place of `./mvnw` in the examples below. If you do that you also
might need to add `-P spring` if your local Maven settings do not
contain repository declarations for spring pre-release artifacts.
NOTE: Be aware that you might need to increase the amount of memory
available to Maven by setting a `MAVEN_OPTS` environment variable with
a value like `-Xmx512m -XX:MaxPermSize=128m`. We try to cover this in
the `.mvn` configuration, so if you find you have to do it to make a
build succeed, please raise a ticket to get the settings added to
source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
https://compose.docker.io/[Docker Compose] to run the middleware servers
in Docker containers.
=== Documentation
There is a "full" profile that will generate documentation.
=== Working with the code
If you don't have an IDE preference we would recommend that you use
https://www.springsource.com/developer/sts[Spring Tools Suite] or
https://eclipse.org[Eclipse] when working with the code. We use the
https://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
should also work without issue.
==== Importing into eclipse with m2eclipse
We recommend the https://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
marketplace".
Unfortunately m2e does not yet support Maven 3.3, so once the projects
are imported into Eclipse you will also need to tell m2eclipse to use
the `.settings.xml` file for the projects. If you do not do this you
may see many different errors related to the POMs in the
projects. Open your Eclipse preferences, expand the Maven
preferences, and select User Settings. In the User Settings field
click Browse and navigate to the Spring Cloud project you imported
selecting the `.settings.xml` file in that project. Click Apply and
then OK to save the preference changes.
NOTE: Alternatively you can copy the repository settings from https://github.com/spring-cloud/spring-cloud-build/blob/master/.settings.xml[`.settings.xml`] into your own `~/.m2/settings.xml`.
==== Importing into eclipse without m2eclipse
If you prefer not to use m2eclipse you can generate eclipse project metadata using the
following command:
[indent=0]
----
$ ./mvnw eclipse:eclipse
----
The generated eclipse projects can be imported by selecting `import existing projects`
from the `file` menu.
[[contributing]]
== Contributing
Spring Cloud is released under the non-restrictive Apache 2.0 license,
and follows a very standard Github development process, using Github
tracker for issues and merging pull requests into master. If you want
to contribute even something trivial please do not hesitate, but
follow the guidelines below.
=== Sign the Contributor License Agreement
Before we accept a non-trivial patch or pull request we will need you to sign the
https://support.springsource.com/spring_committer_signup[contributor's agreement].
Signing the contributor's agreement does not grant anyone commit rights to the main
repository, but it does mean that we can accept your contributions, and you will get an
author credit if we do. Active contributors might be asked to join the core team, and
given the ability to merge pull requests.
=== Code Conventions and Housekeeping
None of these is essential for a pull request, but they will all help. They can also be
added after the original pull request but before a merge.
* Use the Spring Framework code format conventions. If you use Eclipse
you can import formatter settings using the
`eclipse-code-formatter.xml` file from the
https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
Cloud Build] project. If using IntelliJ, you can use the
https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
Plugin] to import the same file.
* Make sure all new `.java` files have a simple Javadoc class comment with at least an
`@author` tag identifying you, and preferably at least a paragraph on what the class is
for.
* Add the ASF license header comment to all new `.java` files (copy from existing files
in the project).
* Add yourself as an `@author` to the .java files that you modify substantially (more
than cosmetic changes).
* Add some Javadocs and, if you change the namespace, some XSD doc elements.
* A few unit tests would help a lot as well -- someone has to do it.
* If no-one else is using your branch, please rebase it against the current master (or
other target branch in the main project).
* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
message (where XXXX is the issue number).
// ======================================================================================
View File

@@ -34,7 +34,7 @@ source control.
The projects that require middleware generally include a
`docker-compose.yml`, so consider using
http://compose.docker.io/[Docker Compose] to run the middeware servers
https://compose.docker.io/[Docker Compose] to run the middeware servers
in Docker containers.
=== Documentation
@@ -43,13 +43,13 @@ There is a "full" profile that will generate documentation.
 === Working with the code
 If you don't have an IDE preference we would recommend that you use
-http://www.springsource.com/developer/sts[Spring Tools Suite] or
-http://eclipse.org[Eclipse] when working with the code. We use the
-http://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
+https://www.springsource.com/developer/sts[Spring Tools Suite] or
+https://eclipse.org[Eclipse] when working with the code. We use the
+https://eclipse.org/m2e/[m2eclipse] eclipse plugin for maven support. Other IDEs and tools
 should also work without issue.
 ==== Importing into eclipse with m2eclipse
-We recommend the http://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
+We recommend the https://eclipse.org/m2e/[m2eclipse] eclipse plugin when working with
 eclipse. If you don't already have m2eclipse installed it is available from the "eclipse
 marketplace".


@@ -24,7 +24,7 @@ added after the original pull request but before a merge.
 `eclipse-code-formatter.xml` file from the
 https://github.com/spring-cloud/build/tree/master/eclipse-coding-conventions.xml[Spring
 Cloud Build] project. If using IntelliJ, you can use the
-http://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
+https://plugins.jetbrains.com/plugin/6546[Eclipse Code Formatter
 Plugin] to import the same file.
 * Make sure all new `.java` files have a simple Javadoc class comment with at least an
 `@author` tag identifying you, and preferably at least a paragraph on what the class is
@@ -37,6 +37,6 @@ added after the original pull request but before a merge.
 * A few unit tests would help a lot as well -- someone has to do it.
 * If no-one else is using your branch, please rebase it against the current master (or
 other target branch in the main project).
-* When writing a commit message please follow http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
+* When writing a commit message please follow https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html[these conventions],
 if you are fixing an existing issue please add `Fixes gh-XXXX` at the end of the commit
 message (where XXXX is the issue number).


@@ -0,0 +1,132 @@
[[kafka-dlq-processing]]
=== Dead-Letter Topic Processing
[[dlq-partition-selection]]
==== Dead-Letter Topic Partition Selection
By default, records are published to the Dead-Letter topic using the same partition as the original record.
This means the Dead-Letter topic must have at least as many partitions as the original topic.
To change this behavior, add a `DlqPartitionFunction` implementation as a `@Bean` to the application context.
Only one such bean can be present.
The function is provided with the consumer group, the failed `ConsumerRecord` and the exception.
For example, if you always want to route to partition 0, you might use:
====
[source, java]
----
@Bean
public DlqPartitionFunction partitionFunction() {
    return (group, record, ex) -> 0;
}
----
====
NOTE: If you set a consumer binding's `dlqPartitions` property to 1 (and the binder's `minPartitionCount` is equal to `1`), there is no need to supply a `DlqPartitionFunction`; the framework will always use partition 0.
If you set a consumer binding's `dlqPartitions` property to a value greater than `1` (or the binder's `minPartitionCount` is greater than `1`), you **must** provide a `DlqPartitionFunction` bean, even if the partition count is the same as the original topic's.
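The fallback rule described in the note above can be sketched in plain Java (illustrative only; `selectDlqPartition` is a hypothetical helper that mirrors the documented behavior, not binder API):

```java
public class DlqPartitionSketch {

    // Sketch of the selection rule described above: a single-partition DLQ
    // always receives partition 0; otherwise the original record's partition
    // is reused (unless a DlqPartitionFunction bean overrides it).
    static int selectDlqPartition(int originalPartition, int dlqPartitions) {
        return dlqPartitions == 1 ? 0 : originalPartition;
    }

    public static void main(String[] args) {
        System.out.println(selectDlqPartition(3, 1)); // 0
        System.out.println(selectDlqPartition(3, 4)); // 3
    }
}
```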
[[dlq-handling]]
==== Handling Records in a Dead-Letter Topic
Because the framework cannot anticipate how users would want to dispose of dead-lettered messages, it does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a "`parking lot`" topic after three attempts.
The application is another spring-cloud-stream application that reads from the dead-letter topic.
It terminates when no messages are received for 5 seconds.
The examples assume the original destination is `so8400out` and the consumer group is `so8400`.
There are a couple of strategies to consider:
* Consider running the rerouting only when the main application is not running.
Otherwise, the retries for transient errors are used up very quickly.
* Alternatively, use a two-stage approach: Use this application to route to a third topic and another to route from there back to the main topic.
The following code listings show the sample application:
.application.properties
[source]
----
spring.cloud.stream.bindings.input.group=so8400replay
spring.cloud.stream.bindings.input.destination=error.so8400out.so8400
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot
spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest
spring.cloud.stream.kafka.binder.headers=x-retries
----
.Application
[source, java]
----
@SpringBootApplication
@EnableBinding(TwoOutputProcessor.class)
public class ReRouteDlqKApplication implements CommandLineRunner {

    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) {
        SpringApplication.run(ReRouteDlqKApplication.class, args).close();
    }

    private final AtomicInteger processed = new AtomicInteger();

    @Autowired
    private MessageChannel parkingLot;

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public Message<?> reRoute(Message<?> failed) {
        processed.incrementAndGet();
        Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class);
        if (retries == null) {
            System.out.println("First retry for " + failed);
            return MessageBuilder.fromMessage(failed)
                    .setHeader(X_RETRIES_HEADER, Integer.valueOf(1))
                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,
                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
                    .build();
        }
        else if (retries.intValue() < 3) {
            System.out.println("Another retry for " + failed);
            return MessageBuilder.fromMessage(failed)
                    .setHeader(X_RETRIES_HEADER, Integer.valueOf(retries.intValue() + 1))
                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,
                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
                    .build();
        }
        else {
            System.out.println("Retries exhausted for " + failed);
            parkingLot.send(MessageBuilder.fromMessage(failed)
                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,
                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
                    .build());
        }
        return null;
    }

    @Override
    public void run(String... args) throws Exception {
        while (true) {
            int count = this.processed.get();
            Thread.sleep(5000);
            if (count == this.processed.get()) {
                System.out.println("Idle, terminating");
                return;
            }
        }
    }

    public interface TwoOutputProcessor extends Processor {

        @Output("parkingLot")
        MessageChannel parkingLot();

    }

}
----

docs/src/main/asciidoc/ghpages.sh (new executable file, 330 lines)

@@ -0,0 +1,330 @@
#!/bin/bash -x
set -e
# Set default props like MAVEN_PATH, ROOT_FOLDER etc.
function set_default_props() {
# The script should be executed from the root folder
ROOT_FOLDER=`pwd`
echo "Current folder is ${ROOT_FOLDER}"
if [[ ! -e "${ROOT_FOLDER}/.git" ]]; then
echo "You're not in the root folder of the project!"
exit 1
fi
# Prop that will let commit the changes
COMMIT_CHANGES="no"
MAVEN_PATH=${MAVEN_PATH:-}
echo "Path to Maven is [${MAVEN_PATH}]"
REPO_NAME=${PWD##*/}
echo "Repo name is [${REPO_NAME}]"
SPRING_CLOUD_STATIC_REPO=${SPRING_CLOUD_STATIC_REPO:-git@github.com:spring-cloud/spring-cloud-static.git}
echo "Spring Cloud Static repo is [${SPRING_CLOUD_STATIC_REPO}]"
}
# Check if gh-pages exists and docs have been built
function check_if_anything_to_sync() {
git remote set-url --push origin `git config remote.origin.url | sed -e 's/^git:/https:/'`
if ! (git remote set-branches --add origin gh-pages && git fetch -q); then
echo "No gh-pages, so not syncing"
exit 0
fi
if ! [ -d docs/target/generated-docs ] && ! [ "${BUILD}" == "yes" ]; then
echo "No gh-pages sources in docs/target/generated-docs, so not syncing"
exit 0
fi
}
function retrieve_current_branch() {
# Code getting the name of the current branch. For master we want to publish as we did until now
# https://stackoverflow.com/questions/1593051/how-to-programmatically-determine-the-current-checked-out-git-branch
# If there is a branch already passed will reuse it - otherwise will try to find it
CURRENT_BRANCH=${BRANCH}
if [[ -z "${CURRENT_BRANCH}" ]] ; then
CURRENT_BRANCH=$(git symbolic-ref -q HEAD)
CURRENT_BRANCH=${CURRENT_BRANCH##refs/heads/}
CURRENT_BRANCH=${CURRENT_BRANCH:-HEAD}
fi
echo "Current branch is [${CURRENT_BRANCH}]"
git checkout ${CURRENT_BRANCH} || echo "Failed to check the branch... continuing with the script"
}
# Switches to the provided value of the release version. We always prefix it with `v`
function switch_to_tag() {
git checkout v${VERSION}
}
# Build the docs if switch is on
function build_docs_if_applicable() {
if [[ "${BUILD}" == "yes" ]] ; then
./mvnw clean install -P docs -pl docs -DskipTests
fi
}
# Get the name of the `docs.main` property
# Get whitelisted branches - assumes that a `docs` module is available under `docs` profile
function retrieve_doc_properties() {
MAIN_ADOC_VALUE=$("${MAVEN_PATH}"mvn -q \
-Dexec.executable="echo" \
-Dexec.args='${docs.main}' \
--non-recursive \
org.codehaus.mojo:exec-maven-plugin:1.3.1:exec)
echo "Extracted 'main.adoc' from Maven build [${MAIN_ADOC_VALUE}]"
WHITELIST_PROPERTY=${WHITELIST_PROPERTY:-"docs.whitelisted.branches"}
WHITELISTED_BRANCHES_VALUE=$("${MAVEN_PATH}"mvn -q \
-Dexec.executable="echo" \
-Dexec.args="\${${WHITELIST_PROPERTY}}" \
org.codehaus.mojo:exec-maven-plugin:1.3.1:exec \
-P docs \
-pl docs)
echo "Extracted '${WHITELIST_PROPERTY}' from Maven build [${WHITELISTED_BRANCHES_VALUE}]"
}
# Stash any outstanding changes
function stash_changes() {
git diff-index --quiet HEAD && dirty=$? || (echo "Failed to check if the current repo is dirty. Assuming that it is." && dirty="1")
if [ "$dirty" != "0" ]; then git stash; fi
}
# Switch to gh-pages branch to sync it with current branch
function add_docs_from_target() {
local DESTINATION_REPO_FOLDER
if [[ -z "${DESTINATION}" && -z "${CLONE}" ]] ; then
DESTINATION_REPO_FOLDER=${ROOT_FOLDER}
elif [[ "${CLONE}" == "yes" ]]; then
mkdir -p ${ROOT_FOLDER}/target
local clonedStatic=${ROOT_FOLDER}/target/spring-cloud-static
if [[ ! -e "${clonedStatic}/.git" ]]; then
echo "Cloning Spring Cloud Static to target"
git clone ${SPRING_CLOUD_STATIC_REPO} ${clonedStatic} && (cd ${clonedStatic} && git checkout gh-pages)
else
echo "Spring Cloud Static already cloned - will pull changes"
cd ${clonedStatic} && git checkout gh-pages && git pull origin gh-pages
fi
DESTINATION_REPO_FOLDER=${clonedStatic}/${REPO_NAME}
mkdir -p ${DESTINATION_REPO_FOLDER}
else
if [[ ! -e "${DESTINATION}/.git" ]]; then
echo "[${DESTINATION}] is not a git repository"
exit 1
fi
DESTINATION_REPO_FOLDER=${DESTINATION}/${REPO_NAME}
mkdir -p ${DESTINATION_REPO_FOLDER}
echo "Destination was provided [${DESTINATION}]"
fi
cd ${DESTINATION_REPO_FOLDER}
git checkout gh-pages
git pull origin gh-pages
# Add git branches
###################################################################
if [[ -z "${VERSION}" ]] ; then
copy_docs_for_current_version
else
copy_docs_for_provided_version
fi
commit_changes_if_applicable
}
# Copies the docs by using the retrieved properties from Maven build
function copy_docs_for_current_version() {
if [[ "${CURRENT_BRANCH}" == "master" ]] ; then
echo -e "Current branch is master - will copy the current docs only to the root folder"
for f in docs/target/generated-docs/*; do
file=${f#docs/target/generated-docs/*}
if ! git ls-files -i -o --exclude-standard --directory | grep -q ^$file$; then
# Not ignored...
cp -rf $f ${ROOT_FOLDER}/
git add -A ${ROOT_FOLDER}/$file
fi
done
COMMIT_CHANGES="yes"
else
echo -e "Current branch is [${CURRENT_BRANCH}]"
# https://stackoverflow.com/questions/29300806/a-bash-script-to-check-if-a-string-is-present-in-a-comma-separated-list-of-strin
if [[ ",${WHITELISTED_BRANCHES_VALUE}," = *",${CURRENT_BRANCH},"* ]] ; then
mkdir -p ${ROOT_FOLDER}/${CURRENT_BRANCH}
echo -e "Branch [${CURRENT_BRANCH}] is whitelisted! Will copy the current docs to the [${CURRENT_BRANCH}] folder"
for f in docs/target/generated-docs/*; do
file=${f#docs/target/generated-docs/*}
if ! git ls-files -i -o --exclude-standard --directory | grep -q ^$file$; then
# Not ignored...
# We want users to access 2.0.0.BUILD-SNAPSHOT/ instead of 1.0.0.RELEASE/spring-cloud.sleuth.html
if [[ "${file}" == "${MAIN_ADOC_VALUE}.html" ]] ; then
# We don't want to copy the spring-cloud-sleuth.html
# we want it to be converted to index.html
cp -rf $f ${ROOT_FOLDER}/${CURRENT_BRANCH}/index.html
git add -A ${ROOT_FOLDER}/${CURRENT_BRANCH}/index.html
else
cp -rf $f ${ROOT_FOLDER}/${CURRENT_BRANCH}
git add -A ${ROOT_FOLDER}/${CURRENT_BRANCH}/$file
fi
fi
done
COMMIT_CHANGES="yes"
else
echo -e "Branch [${CURRENT_BRANCH}] is not on the white list! Check out the Maven [${WHITELIST_PROPERTY}] property in
[docs] module available under [docs] profile. Won't commit any changes to gh-pages for this branch."
fi
fi
}
# Copies the docs by using the explicitly provided version
function copy_docs_for_provided_version() {
local FOLDER=${DESTINATION_REPO_FOLDER}/${VERSION}
mkdir -p ${FOLDER}
echo -e "Current tag is [v${VERSION}] Will copy the current docs to the [${FOLDER}] folder"
for f in ${ROOT_FOLDER}/docs/target/generated-docs/*; do
file=${f#${ROOT_FOLDER}/docs/target/generated-docs/*}
copy_docs_for_branch ${file} ${FOLDER}
done
COMMIT_CHANGES="yes"
CURRENT_BRANCH="v${VERSION}"
}
# Copies the docs from target to the provided destination
# Params:
# $1 - file from target
# $2 - destination to which copy the files
function copy_docs_for_branch() {
local file=$1
local destination=$2
if ! git ls-files -i -o --exclude-standard --directory | grep -q ^${file}$; then
# Not ignored...
# We want users to access 2.0.0.BUILD-SNAPSHOT/ instead of 1.0.0.RELEASE/spring-cloud.sleuth.html
if [[ ("${file}" == "${MAIN_ADOC_VALUE}.html") || ("${file}" == "${REPO_NAME}.html") ]] ; then
# We don't want to copy the spring-cloud-sleuth.html
# we want it to be converted to index.html
cp -rf $f ${destination}/index.html
git add -A ${destination}/index.html
else
cp -rf $f ${destination}
git add -A ${destination}/$file
fi
fi
}
function commit_changes_if_applicable() {
if [[ "${COMMIT_CHANGES}" == "yes" ]] ; then
COMMIT_SUCCESSFUL="no"
git commit -a -m "Sync docs from ${CURRENT_BRANCH} to gh-pages" && COMMIT_SUCCESSFUL="yes" || echo "Failed to commit changes"
# Uncomment the following push if you want to auto push to
# the gh-pages branch whenever you commit to master locally.
# This is a little extreme. Use with care!
###################################################################
if [[ "${COMMIT_SUCCESSFUL}" == "yes" ]] ; then
git push origin gh-pages
fi
fi
}
# Switch back to the previous branch and exit block
function checkout_previous_branch() {
# If -version was provided we need to come back to root project
cd ${ROOT_FOLDER}
git checkout ${CURRENT_BRANCH} || echo "Failed to check the branch... continuing with the script"
if [ "$dirty" != "0" ]; then git stash pop; fi
exit 0
}
# Assert if properties have been properly passed
function assert_properties() {
echo "VERSION [${VERSION}], DESTINATION [${DESTINATION}], CLONE [${CLONE}]"
if [[ "${VERSION}" != "" && (-z "${DESTINATION}" && -z "${CLONE}") ]] ; then echo "Version was set but destination / clone was not!"; exit 1;fi
if [[ ("${DESTINATION}" != "" && "${CLONE}" != "") && -z "${VERSION}" ]] ; then echo "Destination / clone was set but version was not!"; exit 1;fi
if [[ "${DESTINATION}" != "" && "${CLONE}" == "yes" ]] ; then echo "Destination and clone was set. Pick one!"; exit 1;fi
}
# Prints the usage
function print_usage() {
cat <<EOF
The idea of this script is to update gh-pages branch with the generated docs. Without any options
the script will work in the following manner:
- if there's no gh-pages / target for docs module then the script ends
- for master branch the generated docs are copied to the root of gh-pages branch
- for any other branch (if that branch is whitelisted) a subfolder with branch name is created
and docs are copied there
- if the version switch is passed (-v) then a tag with (v) prefix will be retrieved and a folder
with that version number will be created in the gh-pages branch. WARNING! No whitelist verification will take place
- if the destination switch is passed (-d) then the script will check if the provided dir is a git repo and then will
switch to gh-pages of that repo and copy the generated docs to `docs/<project-name>/<version>`
USAGE:
You can use the following options:
-v|--version - the script will apply the whole procedure for a particular library version
-d|--destination - the root of destination folder where the docs should be copied. You have to use the full path.
E.g. point to spring-cloud-static folder. Can't be used with (-c)
-b|--build - will run the standard build process after checking out the branch
-c|--clone - will automatically clone the spring-cloud-static repo instead of providing the destination.
Obviously can't be used with (-d)
EOF
}
# ==========================================
# ____ ____ _____ _____ _____ _______
# / ____|/ ____| __ \|_ _| __ \__ __|
# | (___ | | | |__) | | | | |__) | | |
# \___ \| | | _ / | | | ___/ | |
# ____) | |____| | \ \ _| |_| | | |
# |_____/ \_____|_| \_\_____|_| |_|
#
# ==========================================
while [[ $# > 0 ]]
do
key="$1"
case ${key} in
-v|--version)
VERSION="$2"
shift # past argument
;;
-d|--destination)
DESTINATION="$2"
shift # past argument
;;
-b|--build)
BUILD="yes"
;;
-c|--clone)
CLONE="yes"
;;
-h|--help)
print_usage
exit 0
;;
*)
echo "Invalid option: [$1]"
print_usage
exit 1
;;
esac
shift # past argument or value
done
assert_properties
set_default_props
check_if_anything_to_sync
if [[ -z "${VERSION}" ]] ; then
retrieve_current_branch
else
switch_to_tag
fi
build_docs_if_applicable
retrieve_doc_properties
stash_changes
add_docs_from_target
checkout_previous_branch
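A detail worth noting in `copy_docs_for_current_version` is the comma-wrapped glob test used for whitelist membership: wrapping both the list and the candidate branch in commas ensures that only whole entries match, never substrings. A standalone sketch of that pattern (`is_whitelisted` is a hypothetical helper extracted for illustration, not part of the script):

```shell
#!/bin/bash
# Same pattern as the script's whitelist check: ",<list>," must contain ",<entry>,".
is_whitelisted() {
  local list="$1" branch="$2"
  [[ ",${list}," = *",${branch},"* ]]
}

is_whitelisted "1.3.x,2.0.x" "2.0.x" && echo "2.0.x is whitelisted"
is_whitelisted "1.3.x,2.0.x" "2.0" || echo "2.0 is not (no partial matches)"
```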

(binary image changes: one image modified at 9.2 KiB; two images added at 119 KiB and 233 KiB; one large file diff suppressed)

@@ -0,0 +1,712 @@
[partintro]
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Spring Cloud Stream concepts map onto Apache Kafka specific constructs.
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--
== Apache Kafka Binder
=== Usage
To use the Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----
=== Overview
The following image shows a simplified diagram of how the Apache Kafka binder operates:
.Kafka Binder
image::{github-raw}/docs/src/main/asciidoc/images/kafka-binder.png[width=300,scaledwidth="50%"]
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning maps directly to Apache Kafka partitions as well.
The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.
=== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
==== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (for example, `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.
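The interaction between `brokers` and `defaultBrokerPort` can be illustrated with a short sketch (`withDefaultPort` is a hypothetical helper mirroring the described behavior, not binder code):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class BrokerListSketch {

    // Appends the default port to any broker entry that lacks one,
    // as described for spring.cloud.stream.kafka.binder.defaultBrokerPort.
    static String withDefaultPort(String brokers, int defaultPort) {
        return Arrays.stream(brokers.split(","))
                .map(b -> b.contains(":") ? b : b + ":" + defaultPort)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(withDefaultPort("host1,host2:9093", 9092));
        // host1:9092,host2:9093
    }
}
```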
spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (for both producers and consumers) passed to all clients created by the binder.
Because these properties are used by both producers and consumers, usage should be restricted to common properties -- for example, security settings.
Unknown Kafka producer or consumer properties provided through this configuration are filtered out and not allowed to propagate.
Properties here supersede any properties set in boot.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.consumerProperties::
Key/Value map of arbitrary Kafka client consumer properties.
In addition to the known Kafka consumer properties, unknown consumer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.headers::
The list of custom headers that are transported by the binder.
Only required when communicating with older applications (<= 1.3.x) with a `kafka-clients` version < 0.11.0.0. Newer versions support headers natively.
+
Default: empty.
spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.
spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
See the Kafka documentation for the producer `acks` property.
+
Default: `1`.
spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder configures on topics on which it produces or consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount * concurrency` settings of the producer (if either is larger).
+
Default: `1`.
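The superseding rule described above amounts to taking a maximum of the three values; a sketch under that reading (`effectivePartitions` is a hypothetical helper, not binder source):

```java
public class PartitionCountSketch {

    // The effective partition count is the largest of the configured minimum,
    // the producer's partitionCount, and instanceCount * concurrency.
    static int effectivePartitions(int minPartitionCount, int partitionCount,
            int instanceCount, int concurrency) {
        return Math.max(minPartitionCount,
                Math.max(partitionCount, instanceCount * concurrency));
    }

    public static void main(String[] args) {
        System.out.println(effectivePartitions(1, 4, 3, 2)); // 6
    }
}
```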
spring.cloud.stream.kafka.binder.producerProperties::
Key/Value map of arbitrary Kafka client producer properties.
In addition to the known Kafka producer properties, unknown producer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
Default: `1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder creates new partitions if required.
If set to `false`, the binder relies on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder fails to start.
+
Default: `false`.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix::
Enables transactions in the binder. See `transaction.id` in the Kafka documentation and https://docs.spring.io/spring-kafka/reference/html/_reference.html#transactions[Transactions] in the `spring-kafka` documentation.
When transactions are enabled, individual `producer` properties are ignored and all producers use the `spring.cloud.stream.kafka.binder.transaction.producer.*` properties.
+
Default `null` (no transactions)
spring.cloud.stream.kafka.binder.transaction.producer.*::
Global producer properties for producers in a transactional binder.
See `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` and <<kafka-producer-properties>> and the general producer properties supported by all binders.
+
Default: See individual producer properties.
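As an example, a configuration along these lines enables transactions and applies settings to all transactional producers (the prefix value `tx-` and the specific producer settings are illustrative, not recommendations):

```properties
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all
spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=10
```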
spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
When `false`, each consumer is assigned a fixed set of partitions based on `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex`.
This requires both the `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex` properties to be set appropriately on each launched instance.
The value of the `spring.cloud.stream.instanceCount` property must typically be greater than 1 in this case.
+
Default: `true`.
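To make the fixed-assignment idea concrete, here is a sketch of one plausible round-robin distribution by `instanceIndex`; this illustrates the concept only, and the binder's actual assignment algorithm may differ:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StaticAssignmentSketch {

    // With autoRebalanceEnabled=false, each instance gets a fixed partition
    // subset derived from instanceIndex/instanceCount (illustrative rule:
    // partition p belongs to instance p % instanceCount).
    static List<Integer> partitionsFor(int instanceIndex, int instanceCount,
            int topicPartitions) {
        return IntStream.range(0, topicPartitions)
                .filter(p -> p % instanceCount == instanceIndex)
                .boxed()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(partitionsFor(1, 2, 5)); // [1, 3]
    }
}
```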
ackEachRecord::
When `autoCommitOffset` is `true`, this setting dictates whether to commit the offset after each record is processed.
By default, offsets are committed after all records in the batch of records returned by `consumer.poll()` have been processed.
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
+
Default: `false`.
autoCommitOffset::
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` of type `org.springframework.kafka.support.Acknowledgment` is present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
+
Default: `true`.
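A binding using manual acknowledgment might be configured as follows (the binding name `input` and the group name are assumptions for illustration):

```properties
spring.cloud.stream.bindings.input.group=mygroup
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
```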
autoCommitOnError::
Effective only if `autoCommitOffset` is set to `true`.
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
If set to `true`, it always auto-commits (if auto-commit is enabled).
If not set (the default), it effectively has the same value as `enableDlq`, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
+
Default: not set.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by startOffset.
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
+
Default: `false`.
startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
Also see `resetOffsets` (earlier in this list).
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to true, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configured by setting the `dlqName` property.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See <<kafka-dlq-processing>> for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See <<dlq-partition-selection>> for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.
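The default naming rule above can be expressed as a one-liner (`dlqTopic` is a hypothetical helper mirroring the description, not binder source):

```java
public class DlqNameSketch {

    // Default dead-letter topic name: error.<destination>.<group>,
    // unless an explicit dlqName property overrides it.
    static String dlqTopic(String destination, String group, String dlqName) {
        return dlqName != null ? dlqName : "error." + destination + "." + group;
    }

    public static void main(String[] args) {
        System.out.println(dlqTopic("so8400out", "so8400", null));
        // error.so8400out.so8400
    }
}
```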
dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see <<dlq-partition-selection>>.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records will be written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to Kafka consumer properties, other configuration properties can be passed here --
for example, properties needed by the application, such as `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
+
Default: Empty map.
dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).
dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through Kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (that is, `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
These must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.
standardHeaders::
Indicates which standard headers are populated by the inbound channel adapter.
Allowed values: `none`, `id`, `timestamp`, or `both`.
Useful if using native deserialization and the first component to receive a message needs an `id` (such as an aggregator that is configured to use a JDBC message store).
+
Default: `none`
converterBeanName::
The name of a bean that implements `RecordMessageConverter`. Used in the inbound channel adapter to replace the default `MessagingMessageConverter`.
+
Default: `null`
idleEventInterval::
The interval, in milliseconds, between events indicating that no messages have recently been received.
Use an `ApplicationListener<ListenerContainerIdleEvent>` to receive these events.
See <<pause-resume>> for a usage example.
+
Default: `30000`
destinationIsPattern::
When true, the destination is treated as a regular expression `Pattern` used to match topic names by the broker.
When true, topics are not provisioned, and `enableDlq` is not allowed, because the binder does not know the topic names during the provisioning phase.
Note, the time taken to detect new topics that match the pattern is controlled by the consumer property `metadata.max.age.ms`, which (at the time of writing) defaults to 300,000ms (5 minutes).
This can be configured using the `configuration` property above.
+
Default: `false`
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `topic.replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
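For instance, when `dlqPartitions` is greater than `1`, a `DlqPartitionFunction` bean must select the target partition for each dead-letter record. A minimal sketch follows; the routing rule (deserialization failures to partition `0`, everything else to partition `1`) is an arbitrary assumption for illustration:

```java
// A sketch of the DlqPartitionFunction bean required when dlqPartitions > 1.
// The routing rule below is illustrative only.
@Bean
public DlqPartitionFunction partitionFunction() {
    return (group, record, throwable) ->
            throwable.getCause() instanceof DeserializationException ? 0 : 1;
}
```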
==== Consuming Batches
Starting with version 3.0, when `spring.cloud.stream.bindings.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
Otherwise, the method will be called with one record at a time.
The size of the batch is controlled by the Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, and `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` will be overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.
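A sketch of both techniques in the functional style follows; the binding name `batchInput` and the back-off values are assumptions for this example:

```java
// Batch mode: the functional consumer receives the whole polled batch as a List.
@Bean
public Consumer<List<String>> batchInput() {
    return batch -> batch.forEach(System.out::println);
}

// Container-level error handling in place of binder retry: re-seek the failed
// batch, with a FixedBackOff delaying redeliveries (values are arbitrary).
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer() {
    return (container, destinationName, group) -> {
        SeekToCurrentBatchErrorHandler handler = new SeekToCurrentBatchErrorHandler();
        handler.setBackOff(new FixedBackOff(2000L, 5L));
        container.setBatchErrorHandler(handler);
    };
}
```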
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.
admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.
admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
+
Default: `16384`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
headerPatterns::
A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka `Headers` in the `ProducerRecord`.
Patterns can begin or end with the wildcard character (asterisk).
Patterns can be negated by prefixing with `!`.
Matching stops after the first match (positive or negative).
For example `!ask,as*` will pass `ash` but not `ask`.
`id` and `timestamp` are never mapped.
+
Default: `*` (all headers - except the `id` and `timestamp`)
configuration::
Map with a key/value pair containing generic Kafka producer properties.
+
Default: Empty map.
topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.
topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.
topic.replication-factor::
The replication factor to use when provisioning topics. Overrides the binder-wide setting.
Ignored if `topic.replicas-assignment` is present.
+
Default: none (the binder-wide default of 1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
+
Default: `false`.
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
+
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), the binder fails to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy` and `lz4`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<binding-name>.producer.configuration.compression.type=zstd`.
+
Default: `none`.
transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.
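The `headerPatterns` matching semantics described above (first match wins, `!` negates, a leading or trailing `*` is a wildcard) can be sketched in plain Java. This is an illustration of the documented behavior, not the binder's actual implementation:

```java
public class HeaderPatternMatcher {

    // Returns true if the header name passes the comma-delimited pattern list.
    // Patterns are checked in order; the first match (positive or negative) decides.
    public static boolean matches(String patterns, String header) {
        for (String pattern : patterns.split(",")) {
            boolean negate = pattern.startsWith("!");
            String p = negate ? pattern.substring(1) : pattern;
            boolean hit;
            if (p.startsWith("*")) {
                hit = header.endsWith(p.substring(1));
            }
            else if (p.endsWith("*")) {
                hit = header.startsWith(p.substring(0, p.length() - 1));
            }
            else {
                hit = header.equals(p);
            }
            if (hit) {
                return !negate;
            }
        }
        return false; // no pattern matched: header is not mapped
    }

    public static void main(String[] args) {
        // "!ask,as*" passes "ash" but not "ask", as in the example above
        System.out.println(matches("!ask,as*", "ash")); // true
        System.out.println(matches("!ask,as*", "ask")); // false
    }
}
```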
==== Usage examples
In this section, we show the use of the preceding properties for specific scenarios.
===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset` be set to `false`.
Use the corresponding input channel name for your example.
[source]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
----
===== Example: Security Configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, to set `security.protocol` to `SASL_SSL`, set the following property:
[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
====== Using JAAS Configuration Files
The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
[source,bash]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
--spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
====== Using Spring Boot Properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
The following properties can be used to configure the login context of the Kafka client:
spring.cloud.stream.kafka.binder.jaas.loginModule::
The login module name. It is not necessary to set this in normal cases.
+
Default: `com.sun.security.auth.module.Krb5LoginModule`.
spring.cloud.stream.kafka.binder.jaas.controlFlag::
The control flag of the login module.
+
Default: `required`.
spring.cloud.stream.kafka.binder.jaas.options::
Map with a key/value pair containing the login module options.
+
Default: Empty map.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:
[source,bash]
----
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.autoCreateTopics=false \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
--spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
--spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
--spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
--spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----
The preceding example represents the equivalent of the following JAAS file:
[source]
----
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client-1@EXAMPLE.COM";
};
----
If the required topics already exist on the broker or will be created by an administrator, autocreation can be turned off and only the client JAAS properties need to be set.
NOTE: Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the `-Djava.security.auth.login.config` system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
NOTE: Be careful when using the `autoCreateTopics` and `autoAddPartitions` with Kerberos.
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer
If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.
The following simple application shows how to pause and resume:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions, because the retries run in the original transaction, which may be rolled back, in which case any published records are rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero) the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor` which runs after the main transaction has rolled back.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
You can then use normal Spring transaction support (e.g. `TransactionTemplate` or `@Transactional`):
====
[source, java]
----
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }
}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
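A sketch of such a chained setup, assuming a JDBC `DataSourceTransactionManager` is also present in the context (the bean names and wiring are assumptions for this example):

```java
@Bean
public ChainedTransactionManager chainedTransactionManager(
        KafkaTransactionManager<byte[], byte[]> kafkaTm,
        DataSourceTransactionManager jdbcTm) {

    // Transactions start in the declared order and commit in reverse order,
    // so here the JDBC transaction commits before the Kafka transaction.
    return new ChainedTransactionManager(kafkaTm, jdbcTm);
}
```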
IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.
[[kafka-error-channels]]
=== Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See <<spring-cloud-stream-overview-error-handling>> for more information.
The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:
* `failedMessage`: The Spring Messaging `Message<?>` that failed to be sent.
* `record`: The raw `ProducerRecord` that was created from the `failedMessage`
There is no automatic handling of producer exceptions (such as sending to a <<kafka-dlq-processing, Dead-Letter queue>>).
You can consume these exceptions with your own Spring Integration flow.
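For example, a Spring Integration service activator can subscribe to a producer's error channel; the channel name `myTopic.errors` below follows the destination-based naming convention and is an assumption for this sketch:

```java
@ServiceActivator(inputChannel = "myTopic.errors")
public void handleSendFailure(ErrorMessage message) {
    KafkaSendFailureException failure = (KafkaSendFailureException) message.getPayload();
    // failedMessage is the Spring Messaging Message; record is the raw ProducerRecord
    System.out.println("Failed to send: " + failure.getRecord());
}
```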
[[kafka-metrics]]
=== Kafka Metrics
The Kafka binder module exposes the following metric:
`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer library. The metric contains the consumer group information, topic, and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
[[kafka-tombstones]]
=== Tombstone Records (null record values)
When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
====
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
               @Payload(required = false) Customer customer) {
    // customer is null if a tombstone record
    ...
}
----
====
[[rebalance-listener]]
=== Using a KafkaRebalanceListener
Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
====
[source, java]
----
public interface KafkaBindingRebalanceListener {

    /**
     * Invoked by the container before any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked by the container after any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
    }

    /**
     * Invoked when partitions are initially assigned or after a rebalance.
     * Applications might only want to perform seek operations on an initial assignment.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     * @param initial true if this is the initial assignment.
     */
    default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions, boolean initial) {
    }
}
----
====
You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.
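For example, a listener that rewinds to the earliest offset on the initial assignment only (a sketch; whether rewinding is appropriate depends on the application):

```java
@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            if (initial) {
                // Rewind on the first assignment only; subsequent rebalances
                // resume from the committed offsets.
                consumer.seekToBeginning(partitions);
            }
        }
    };
}
```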


@@ -0,0 +1,103 @@
=== Partitioning with the Kafka Binder
Apache Kafka supports topic partitioning natively.
Sometimes it is advantageous to send data to specific partitions -- for example, when you want to strictly order message processing (all messages for a particular customer should go to the same partition).
The following example shows how to configure the producer and consumer side:
[source, java]
----
@SpringBootApplication
@EnableBinding(Source.class)
public class KafkaPartitionProducerApplication {

    private static final Random RANDOM = new Random(System.currentTimeMillis());

    private static final String[] data = new String[] {
            "foo1", "bar1", "qux1",
            "foo2", "bar2", "qux2",
            "foo3", "bar3", "qux3",
            "foo4", "bar4", "qux4",
    };

    public static void main(String[] args) {
        new SpringApplicationBuilder(KafkaPartitionProducerApplication.class)
            .web(false)
            .run(args);
    }

    @InboundChannelAdapter(channel = Source.OUTPUT, poller = @Poller(fixedRate = "5000"))
    public Message<?> generate() {
        String value = data[RANDOM.nextInt(data.length)];
        System.out.println("Sending: " + value);
        return MessageBuilder.withPayload(value)
                .setHeader("partitionKey", value)
                .build();
    }
}
----
.application.yml
[source, yaml]
----
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: partitioned.topic
          producer:
            partition-key-expression: headers['partitionKey']
            partition-count: 12
----
IMPORTANT: The topic must be provisioned to have enough partitions to achieve the desired concurrency for all consumer groups.
The preceding configuration supports up to 12 consumer instances (6 if their `concurrency` is 2, 4 if their `concurrency` is 3, and so on).
It is generally best to "`over-provision`" the partitions to allow for future increases in consumers or concurrency.
NOTE: The preceding configuration uses the default partitioning (`key.hashCode() % partitionCount`).
This may or may not provide a suitably balanced algorithm, depending on the key values.
You can override this default by using the `partitionSelectorExpression` or `partitionSelectorClass` properties.
Since partitions are natively handled by Kafka, no special configuration is needed on the consumer side.
Kafka allocates partitions across the instances.
The following Spring Boot application listens to a Kafka stream and prints (to the console) the partition ID to which each message goes:
[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class KafkaPartitionConsumerApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(KafkaPartitionConsumerApplication.class)
            .web(false)
            .run(args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(@Payload String in, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
        System.out.println(in + " received from partition " + partition);
    }
}
----
.application.yml
[source, yaml]
----
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: partitioned.topic
          group: myGroup
----
You can add instances as needed.
Kafka rebalances the partition allocations.
If the instance count (or `instance count * concurrency`) exceeds the number of partitions, some consumers are idle.


@@ -0,0 +1,3 @@
include::overview.adoc[leveloffset=+1]
include::dlq.adoc[leveloffset=+1]
include::partitions.adoc[leveloffset=+1]


@@ -0,0 +1,53 @@
:github-tag: master
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: https://raw.githubusercontent.com/{github-repo}/{github-tag}
:github-code: https://github.com/{github-repo}/tree/{github-tag}
:toc: left
:toclevels: 8
:nofooter:
:sectlinks: true
[[spring-cloud-stream-binder-kafka-reference]]
= Spring Cloud Stream Kafka Binder Reference Guide
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein, Henryk Konsek, Gary Russell, Arnaud Jardiné, Soby Chacko
:doctype: book
:toc:
:toclevels: 4
:source-highlighter: prettify
:numbered:
:icons: font
:hide-uri-scheme:
:spring-cloud-stream-binder-kafka-repo: snapshot
:github-tag: master
:spring-cloud-stream-binder-kafka-docs-version: current
:spring-cloud-stream-binder-kafka-docs: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: https://raw.github.com/{github-repo}/{github-tag}
:github-code: https://github.com/{github-repo}/tree/{github-tag}
:github-wiki: https://github.com/{github-repo}/wiki
:github-master-code: https://github.com/{github-repo}/tree/master
:sc-ext: java
// ======================================================================================
*{spring-cloud-stream-version}*
= Reference Guide
include::overview.adoc[]
include::dlq.adoc[]
include::partitions.adoc[]
include::kafka-streams.adoc[]
= Appendices
[appendix]
include::building.adoc[]
include::contributing.adoc[]
// ======================================================================================


@@ -0,0 +1,37 @@
#!/usr/bin/env ruby
base_dir = File.join(File.dirname(__FILE__),'../../..')
src_dir = File.join(base_dir, "/src/main/asciidoc")
require 'asciidoctor'
require 'optparse'

options = {}
file = "#{src_dir}/README.adoc"

OptionParser.new do |o|
  o.on('-o OUTPUT_FILE', 'Output file (default is stdout)') { |file| options[:to_file] = file unless file=='-' }
  o.on('-h', '--help') { puts o; exit }
  o.parse!
end
file = ARGV[0] if ARGV.length>0

# Copied from https://github.com/asciidoctor/asciidoctor-extensions-lab/blob/master/scripts/asciidoc-coalescer.rb
doc = Asciidoctor.load_file file, safe: :unsafe, header_only: true, attributes: options[:attributes]
header_attr_names = (doc.instance_variable_get :@attributes_modified).to_a
header_attr_names.each {|k| doc.attributes[%(#{k}!)] = '' unless doc.attr? k }
attrs = doc.attributes
attrs['allow-uri-read'] = true
puts attrs

out = "// Do not edit this file (e.g. go instead to src/main/asciidoc)\n\n"
doc = Asciidoctor.load_file file, safe: :unsafe, parse: false, attributes: attrs
out << doc.reader.read

if options[:to_file]
  File.open(options[:to_file], 'w+') do |f|
    f.write(out)
  end
else
  puts out
end

mvnw vendored

@@ -19,7 +19,7 @@
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# Maven2 Start Up Batch script
# Maven Start Up Batch script
#
# Required ENV vars:
# ------------------
@@ -54,38 +54,16 @@ case "`uname`" in
CYGWIN*) cygwin=true ;;
MINGW*) mingw=true;;
Darwin*) darwin=true
#
# Look for the Apple JDKs first to preserve the existing behaviour, and then look
# for the new JDKs provided by Oracle.
#
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Java/JavaVirtualMachines/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L "/Library/Java/JavaVirtualMachines/CurrentJDK" ] ; then
#
# Oracle JDKs
#
export JAVA_HOME=/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
if [ -z "$JAVA_HOME" ] && [ -x "/usr/libexec/java_home" ]; then
#
# Apple JDKs
#
export JAVA_HOME=`/usr/libexec/java_home`
fi
;;
# Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home
# See https://developer.apple.com/library/mac/qa/qa1170/_index.html
if [ -z "$JAVA_HOME" ]; then
if [ -x "/usr/libexec/java_home" ]; then
export JAVA_HOME="`/usr/libexec/java_home`"
else
export JAVA_HOME="/Library/Java/Home"
fi
fi
;;
esac
if [ -z "$JAVA_HOME" ] ; then
@@ -130,13 +108,12 @@ if $cygwin ; then
CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi
# For Migwn, ensure paths are in UNIX format before anything is touched
# For Mingw, ensure paths are in UNIX format before anything is touched
if $mingw ; then
[ -n "$M2_HOME" ] &&
M2_HOME="`(cd "$M2_HOME"; pwd)`"
[ -n "$JAVA_HOME" ] &&
JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
# TODO classpath?
fi
if [ -z "$JAVA_HOME" ]; then
@@ -184,27 +161,28 @@ fi
CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --path --windows "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
fi
# traverses directory structure from process work directory to filesystem root
# first directory with .mvn subdirectory is considered project base directory
find_maven_basedir() {
local basedir=$(pwd)
local wdir=$(pwd)
if [ -z "$1" ]
then
echo "Path not specified to find_maven_basedir"
return 1
fi
basedir="$1"
wdir="$1"
while [ "$wdir" != '/' ] ; do
if [ -d "$wdir"/.mvn ] ; then
basedir=$wdir
break
fi
wdir=$(cd "$wdir/.."; pwd)
# workaround for JBEAP-8937 (on Solaris 10/Sparc)
if [ -d "${wdir}" ]; then
wdir=`cd "$wdir/.."; pwd`
fi
# end of workaround
done
echo "${basedir}"
}
@@ -216,10 +194,109 @@ concat_lines() {
fi
}
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-$(find_maven_basedir)}
BASE_DIR=`find_maven_basedir "$(pwd)"`
if [ -z "$BASE_DIR" ]; then
exit 1;
fi
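For reference, the base-directory lookup used above can be exercised in isolation. A minimal standalone sketch (POSIX shell; the throwaway directory tree is purely illustrative):

```shell
# Standalone sketch of the .mvn lookup above: walk upward from a starting
# directory until one containing a .mvn folder is found (POSIX shell).
find_maven_basedir() {
  basedir="$1"
  wdir="$1"
  while [ "$wdir" != '/' ] ; do
    if [ -d "$wdir"/.mvn ] ; then
      basedir=$wdir
      break
    fi
    wdir=$(cd "$wdir/.."; pwd)
  done
  echo "${basedir}"
}

# Demo against a throwaway tree (resolve the temp dir so pwd output matches):
tmp=$(cd "$(mktemp -d)" && pwd)
mkdir -p "$tmp/project/.mvn" "$tmp/project/module/src"
find_maven_basedir "$tmp/project/module/src"   # prints the dir containing .mvn
rm -rf "$tmp"
```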
##########################################################################################
# Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
# This allows using the maven wrapper in projects that prohibit checking in binary data.
##########################################################################################
if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found .mvn/wrapper/maven-wrapper.jar"
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
fi
if [ -n "$MVNW_REPOURL" ]; then
jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
else
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
fi
while IFS="=" read key value; do
case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
esac
done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties"
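The `wrapperUrl` override read above is a simple key=value scan over the wrapper properties file. A standalone sketch of that parsing (the property file contents here are made up for illustration):

```shell
# Sketch of the wrapperUrl override above: scan key=value lines and let a
# wrapperUrl entry replace the default Maven Central URL.
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
props=$(mktemp)
printf 'distributionUrl=ignored\nwrapperUrl=https://example.invalid/custom-wrapper.jar\n' > "$props"
while IFS="=" read key value; do
  case "$key" in
    wrapperUrl) jarUrl="$value"; break ;;
  esac
done < "$props"
rm -f "$props"
echo "$jarUrl"   # prints the overridden URL
```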
if [ "$MVNW_VERBOSE" = true ]; then
echo "Downloading from: $jarUrl"
fi
wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
if $cygwin; then
wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"`
fi
if command -v wget > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found wget ... using wget"
fi
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
wget "$jarUrl" -O "$wrapperJarPath"
else
wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath"
fi
elif command -v curl > /dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found curl ... using curl"
fi
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
curl -o "$wrapperJarPath" "$jarUrl" -f
else
curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Falling back to using Java to download"
fi
javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
# For Cygwin, switch paths to Windows format before running javac
if $cygwin; then
javaClass=`cygpath --path --windows "$javaClass"`
fi
if [ -e "$javaClass" ]; then
if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Compiling MavenWrapperDownloader.java ..."
fi
# Compiling the Java class
("$JAVA_HOME/bin/javac" "$javaClass")
fi
if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
# Running the downloader
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Running MavenWrapperDownloader.java ..."
fi
("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR")
fi
fi
fi
fi
##########################################################################################
# End of extension
##########################################################################################
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"}
if [ "$MVNW_VERBOSE" = true ]; then
echo $MAVEN_PROJECTBASEDIR
fi
MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
# Provide a "standardized" way to retrieve the CLI args that will
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --path --windows "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
[ -n "$MAVEN_PROJECTBASEDIR" ] &&
MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
fi
# Provide a "standardized" way to retrieve the CLI args that will
# work with both Windows and non-Windows executions.
MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
export MAVEN_CMD_LINE_ARGS
@@ -230,5 +307,4 @@ exec "$JAVACMD" \
$MAVEN_OPTS \
-classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
"-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
${WRAPPER_LAUNCHER} "$@"
${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@"

mvnw.cmd

@@ -1,234 +1,182 @@
#!/bin/sh
# ----------------------------------------------------------------------------
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM http://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# Maven2 Start Up Batch script
#
# Required ENV vars:
# ------------------
# JAVA_HOME - location of a JDK home dir
#
# Optional ENV vars
# -----------------
# M2_HOME - location of maven2's installed home dir
# MAVEN_OPTS - parameters passed to the Java VM when running Maven
# e.g. to debug Maven itself, use
# set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
# MAVEN_SKIP_RC - flag to disable loading of mavenrc files
# ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
if [ -z "$MAVEN_SKIP_RC" ] ; then
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM set title of command window
title %0
@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
if [ -f /etc/mavenrc ] ; then
. /etc/mavenrc
fi
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
if [ -f "$HOME/.mavenrc" ] ; then
. "$HOME/.mavenrc"
fi
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
fi
@setlocal
# OS specific support. $var _must_ be set to either true or false.
cygwin=false;
darwin=false;
mingw=false
case "`uname`" in
CYGWIN*) cygwin=true ;;
MINGW*) mingw=true;;
Darwin*) darwin=true
#
# Look for the Apple JDKs first to preserve the existing behaviour, and then look
# for the new JDKs provided by Oracle.
#
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L /System/Library/Java/JavaVirtualMachines/CurrentJDK ] ; then
#
# Apple JDKs
#
export JAVA_HOME=/System/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
if [ -z "$JAVA_HOME" ] && [ -L "/Library/Java/JavaVirtualMachines/CurrentJDK" ] ; then
#
# Oracle JDKs
#
export JAVA_HOME=/Library/Java/JavaVirtualMachines/CurrentJDK/Contents/Home
fi
set ERROR_CODE=0
if [ -z "$JAVA_HOME" ] && [ -x "/usr/libexec/java_home" ]; then
#
# Apple JDKs
#
export JAVA_HOME=`/usr/libexec/java_home`
fi
;;
esac
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
if [ -z "$JAVA_HOME" ] ; then
if [ -r /etc/gentoo-release ] ; then
JAVA_HOME=`java-config --jre-home`
fi
fi
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
if [ -z "$M2_HOME" ] ; then
## resolve links - $0 may be a link to maven's home
PRG="$0"
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
# need this for relative symlinks
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG="`dirname "$PRG"`/$link"
fi
done
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
saveddir=`pwd`
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
M2_HOME=`dirname "$PRG"`/..
@REM ==== END VALIDATION ====
# make it fully qualified
M2_HOME=`cd "$M2_HOME" && pwd`
:init
cd "$saveddir"
# echo Using m2 at $M2_HOME
fi
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
# For Cygwin, ensure paths are in UNIX format before anything is touched
if $cygwin ; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --unix "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
# For Migwn, ensure paths are in UNIX format before anything is touched
if $mingw ; then
[ -n "$M2_HOME" ] &&
M2_HOME="`(cd "$M2_HOME"; pwd)`"
[ -n "$JAVA_HOME" ] &&
JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
# TODO classpath?
fi
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
if [ -z "$JAVA_HOME" ]; then
javaExecutable="`which javac`"
if [ -n "$javaExecutable" ] && ! [ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then
# readlink(1) is not available as standard on Solaris 10.
readLink=`which readlink`
if [ ! `expr "$readLink" : '\([^ ]*\)'` = "no" ]; then
if $darwin ; then
javaHome="`dirname \"$javaExecutable\"`"
javaExecutable="`cd \"$javaHome\" && pwd -P`/javac"
else
javaExecutable="`readlink -f \"$javaExecutable\"`"
fi
javaHome="`dirname \"$javaExecutable\"`"
javaHome=`expr "$javaHome" : '\(.*\)/bin'`
JAVA_HOME="$javaHome"
export JAVA_HOME
fi
fi
fi
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
if [ -z "$JAVACMD" ] ; then
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
else
JAVACMD="`which java`"
fi
fi
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
if [ ! -x "$JAVACMD" ] ; then
echo "Error: JAVA_HOME is not defined correctly." >&2
echo " We cannot execute $JAVACMD" >&2
exit 1
fi
:endDetectBaseDir
if [ -z "$JAVA_HOME" ] ; then
echo "Warning: JAVA_HOME environment variable is not set."
fi
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=`cygpath --path --windows "$M2_HOME"`
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
[ -n "$CLASSPATH" ] &&
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
fi
:endReadAdditionalConfig
# traverses directory structure from process work directory to filesystem root
# first directory with .mvn subdirectory is considered project base directory
find_maven_basedir() {
local basedir=$(pwd)
local wdir=$(pwd)
while [ "$wdir" != '/' ] ; do
if [ -d "$wdir"/.mvn ] ; then
basedir=$wdir
break
fi
wdir=$(cd "$wdir/.."; pwd)
done
echo "${basedir}"
}
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
# concatenates all lines of a file
concat_lines() {
if [ -f "$1" ]; then
echo "$(tr -s '\n' ' ' < "$1")"
fi
}
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-$(find_maven_basedir)}
MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
FOR /F "tokens=1,2 delims==" %%A IN ("%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties") DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
# Provide a "standardized" way to retrieve the CLI args that will
# work with both Windows and non-Windows executions.
MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
export MAVEN_CMD_LINE_ARGS
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
if "%MVNW_VERBOSE%" == "true" (
echo Found %WRAPPER_JAR%
)
) else (
if not "%MVNW_REPOURL%" == "" (
SET DOWNLOAD_URL="%MVNW_REPOURL%/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
)
if "%MVNW_VERBOSE%" == "true" (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
)
WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
powershell -Command "&{"^
"$webclient = new-object System.Net.WebClient;"^
"if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {"^
"$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');"^
"}"^
"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"^
"}"
if "%MVNW_VERBOSE%" == "true" (
echo Finished downloading %WRAPPER_JAR%
)
)
@REM End of extension
exec "$JAVACMD" \
$MAVEN_OPTS \
-classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
"-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
${WRAPPER_LAUNCHER} "$@"
@REM Provide a "standardized" way to retrieve the CLI args that will
@REM work with both Windows and non-Windows executions.
set MAVEN_CMD_LINE_ARGS=%*
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%

pom.xml

@@ -1,54 +1,138 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.1.0.M1</version>
<version>3.1.0.M1</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-parent</artifactId>
<version>1.1.0.M1</version>
<artifactId>spring-cloud-build</artifactId>
<version>3.0.0.M1</version>
<relativePath />
</parent>
<properties>
<java.version>1.7</java.version>
<kafka.version>0.9.0.1</kafka.version>
<spring-kafka.version>1.0.3.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>2.0.1.RELEASE</spring-integration-kafka.version>
<java.version>1.8</java.version>
<spring-kafka.version>2.4.4.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.2.1.RELEASE</spring-integration-kafka.version>
<kafka.version>2.4.0</kafka.version>
<spring-cloud-schema-registry.version>1.1.0.M1</spring-cloud-schema-registry.version>
<spring-cloud-stream.version>3.1.0.M1</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
</properties>
<modules>
<module>spring-cloud-stream-binder-kafka</module>
<module>spring-cloud-starter-stream-kafka</module>
<module>spring-cloud-stream-binder-kafka-test-support</module>
<module>spring-cloud-stream-binder-kafka-docs</module>
</modules>
<module>spring-cloud-stream-binder-kafka-core</module>
<module>spring-cloud-stream-binder-kafka-streams</module>
<module>docs</module>
</modules>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
<version>${spring-cloud-stream.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>${spring-integration-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<version>${spring-cloud-stream.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
<version>${spring-kafka.version}</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<classifier>test</classifier>
<scope>test</scope>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>jline</groupId>
<artifactId>jline</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<version>${spring-cloud-schema-registry.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<redirectTestOutputToFile>true</redirectTestOutputToFile>
</configuration>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.7</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<version>2.17</version>
<dependencies>
<dependency>
<groupId>com.puppycrawl.tools</groupId>
<artifactId>checkstyle</artifactId>
<version>6.17</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
@@ -56,9 +140,23 @@
<quiet>true</quiet>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<redirectTestOutputToFile>true</redirectTestOutputToFile>
</configuration>
</plugin>
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>spring</id>
@@ -66,7 +164,7 @@
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -77,7 +175,7 @@
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -85,7 +183,7 @@
<repository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/release</url>
<url>https://repo.spring.io/release</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -95,7 +193,7 @@
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>http://repo.spring.io/libs-snapshot-local</url>
<url>https://repo.spring.io/libs-snapshot-local</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
@@ -106,7 +204,7 @@
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>http://repo.spring.io/libs-milestone-local</url>
<url>https://repo.spring.io/libs-milestone-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -114,7 +212,7 @@
<pluginRepository>
<id>spring-releases</id>
<name>Spring Releases</name>
<url>http://repo.spring.io/libs-release-local</url>
<url>https://repo.spring.io/libs-release-local</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
@@ -122,4 +220,12 @@
</pluginRepositories>
</profile>
</profiles>
<reporting>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
</plugin>
</plugins>
</reporting>
</project>


@@ -1,17 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.1.0.M1</version>
<version>3.1.0.M1</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>
<url>http://projects.spring.io/spring-cloud</url>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>http://www.spring.io</url>
<url>https://www.spring.io</url>
</organization>
<properties>
<main.basedir>${basedir}/../..</main.basedir>
@@ -20,7 +20,7 @@
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>1.1.0.M1</version>
<version>${project.version}</version>
</dependency>
</dependencies>
</project>


@@ -1 +0,0 @@
provides: spring-cloud-starter-stream-kafka


@@ -0,0 +1,5 @@
eclipse.preferences.version=1
org.eclipse.jdt.ui.ignorelowercasenames=true
org.eclipse.jdt.ui.importorder=java;javax;com;org;org.springframework;ch.qos;\#;
org.eclipse.jdt.ui.ondemandthreshold=99
org.eclipse.jdt.ui.staticondemandthreshold=99


@@ -0,0 +1,49 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.0.M1</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>
<url>https://projects.spring.io/spring-cloud</url>
<organization>
<name>Pivotal Software, Inc.</name>
<url>https://www.spring.io</url>
</organization>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>


@@ -0,0 +1,69 @@
/*
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import org.springframework.kafka.security.jaas.KafkaJaasLoginModuleInitializer;
import org.springframework.util.Assert;
/**
* Contains properties for setting up an {@link AppConfigurationEntry} that can be used
* for the Kafka or Zookeeper client.
*
* @author Marius Bogoevici
* @author Soby Chacko
*/
public class JaasLoginModuleConfiguration {
private String loginModule = "com.sun.security.auth.module.Krb5LoginModule";
private KafkaJaasLoginModuleInitializer.ControlFlag controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag.REQUIRED;
private Map<String, String> options = new HashMap<>();
public String getLoginModule() {
return this.loginModule;
}
public void setLoginModule(String loginModule) {
Assert.notNull(loginModule, "cannot be null");
this.loginModule = loginModule;
}
public KafkaJaasLoginModuleInitializer.ControlFlag getControlFlag() {
return this.controlFlag;
}
public void setControlFlag(String controlFlag) {
Assert.notNull(controlFlag, "cannot be null");
this.controlFlag = KafkaJaasLoginModuleInitializer.ControlFlag
.valueOf(controlFlag.toUpperCase());
}
public Map<String, String> getOptions() {
return this.options;
}
public void setOptions(Map<String, String> options) {
this.options = options;
}
}
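These JAAS settings are bound from binder configuration. A hypothetical `application.yml` fragment (the `spring.cloud.stream.kafka.binder.jaas` prefix and the Kerberos option values are assumptions for illustration, not taken from this diff):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          jaas:
            loginModule: com.sun.security.auth.module.Krb5LoginModule
            controlFlag: required
            options:
              useKeyTab: true
              keyTab: /etc/security/keytabs/kafka_client.keytab
              principal: kafka-client@EXAMPLE.COM
```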


@@ -0,0 +1,561 @@
/*
* Copyright 2015-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.HeaderMode;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties.CompressionType;
import org.springframework.expression.Expression;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Configuration properties for the Kafka binder. The properties in this class are
* prefixed with <b>spring.cloud.stream.kafka.binder</b>.
*
* @author David Turanski
* @author Ilayaperumal Gopinathan
* @author Marius Bogoevici
* @author Soby Chacko
* @author Gary Russell
* @author Rafal Zukowski
* @author Aldo Sinanaj
* @author Lukasz Kaminski
*/
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.binder")
public class KafkaBinderConfigurationProperties {
private static final String DEFAULT_KAFKA_CONNECTION_STRING = "localhost:9092";
private final Log logger = LogFactory.getLog(getClass());
private final Transaction transaction = new Transaction();
private final KafkaProperties kafkaProperties;
/**
* Arbitrary Kafka properties that apply to both producers and consumers.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Arbitrary Kafka consumer properties.
*/
private Map<String, String> consumerProperties = new HashMap<>();
/**
* Arbitrary Kafka producer properties.
*/
private Map<String, String> producerProperties = new HashMap<>();
private String[] brokers = new String[] { "localhost" };
private String defaultBrokerPort = "9092";
private String[] headers = new String[] {};
private boolean autoCreateTopics = true;
private boolean autoAddPartitions;
private String requiredAcks = "1";
private short replicationFactor = 1;
private int minPartitionCount = 1;
/**
* Time to wait to get partition information in seconds; default 60.
*/
private int healthTimeout = 60;
private JaasLoginModuleConfiguration jaas;
/**
* The bean name of a custom header mapper to use instead of a
* {@link org.springframework.kafka.support.DefaultKafkaHeaderMapper}.
*/
private String headerMapperBeanName;
/**
* Time between retries after AuthorizationException is caught in
* the ListenerContainer; default is null, which disables retries.
* For more info see: {@link org.springframework.kafka.listener.ConsumerProperties#setAuthorizationExceptionRetryInterval(java.time.Duration)}
*/
private Duration authorizationExceptionRetryInterval;
public KafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
Assert.notNull(kafkaProperties, "'kafkaProperties' cannot be null");
this.kafkaProperties = kafkaProperties;
}
public KafkaProperties getKafkaProperties() {
return this.kafkaProperties;
}
public Transaction getTransaction() {
return this.transaction;
}
public String getKafkaConnectionString() {
return toConnectionString(this.brokers, this.defaultBrokerPort);
}
public String getDefaultKafkaConnectionString() {
return DEFAULT_KAFKA_CONNECTION_STRING;
}
public String[] getHeaders() {
return this.headers;
}
public String[] getBrokers() {
return this.brokers;
}
public void setBrokers(String... brokers) {
this.brokers = brokers;
}
public void setDefaultBrokerPort(String defaultBrokerPort) {
this.defaultBrokerPort = defaultBrokerPort;
}
public void setHeaders(String... headers) {
this.headers = headers;
}
/**
* Converts an array of host values to a comma-separated String. It will append the
* default port value, if not already specified.
* @param hosts array of host names, optionally including ports
* @param defaultPort default port to append when a host has none
* @return formatted connection string
*/
private String toConnectionString(String[] hosts, String defaultPort) {
String[] fullyFormattedHosts = new String[hosts.length];
for (int i = 0; i < hosts.length; i++) {
if (hosts[i].contains(":") || StringUtils.isEmpty(defaultPort)) {
fullyFormattedHosts[i] = hosts[i];
}
else {
fullyFormattedHosts[i] = hosts[i] + ":" + defaultPort;
}
}
return StringUtils.arrayToCommaDelimitedString(fullyFormattedHosts);
}
public String getRequiredAcks() {
return this.requiredAcks;
}
public void setRequiredAcks(String requiredAcks) {
this.requiredAcks = requiredAcks;
}
public short getReplicationFactor() {
return this.replicationFactor;
}
public void setReplicationFactor(short replicationFactor) {
this.replicationFactor = replicationFactor;
}
public int getMinPartitionCount() {
return this.minPartitionCount;
}
public void setMinPartitionCount(int minPartitionCount) {
this.minPartitionCount = minPartitionCount;
}
public int getHealthTimeout() {
return this.healthTimeout;
}
public void setHealthTimeout(int healthTimeout) {
this.healthTimeout = healthTimeout;
}
public boolean isAutoCreateTopics() {
return this.autoCreateTopics;
}
public void setAutoCreateTopics(boolean autoCreateTopics) {
this.autoCreateTopics = autoCreateTopics;
}
public boolean isAutoAddPartitions() {
return this.autoAddPartitions;
}
public void setAutoAddPartitions(boolean autoAddPartitions) {
this.autoAddPartitions = autoAddPartitions;
}
public Map<String, String> getConfiguration() {
return this.configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
public Map<String, String> getConsumerProperties() {
return this.consumerProperties;
}
public void setConsumerProperties(Map<String, String> consumerProperties) {
Assert.notNull(consumerProperties, "'consumerProperties' cannot be null");
this.consumerProperties = consumerProperties;
}
public Map<String, String> getProducerProperties() {
return this.producerProperties;
}
public void setProducerProperties(Map<String, String> producerProperties) {
Assert.notNull(producerProperties, "'producerProperties' cannot be null");
this.producerProperties = producerProperties;
}
/**
* Merge boot consumer properties, general properties from
* {@link #setConfiguration(Map)} that apply to consumers, properties from
* {@link #setConsumerProperties(Map)}, in that order.
* @return the merged properties.
*/
public Map<String, Object> mergedConsumerConfiguration() {
Map<String, Object> consumerConfiguration = new HashMap<>();
consumerConfiguration.putAll(this.kafkaProperties.buildConsumerProperties());
// Copy configured binder properties that apply to consumers
for (Map.Entry<String, String> configurationEntry : this.configuration
.entrySet()) {
if (ConsumerConfig.configNames().contains(configurationEntry.getKey())) {
consumerConfiguration.put(configurationEntry.getKey(),
configurationEntry.getValue());
}
}
consumerConfiguration.putAll(this.consumerProperties);
filterStreamManagedConfiguration(consumerConfiguration);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(consumerConfiguration,
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
/**
* Merge boot producer properties, general properties from
* {@link #setConfiguration(Map)} that apply to producers, properties from
* {@link #setProducerProperties(Map)}, in that order.
* @return the merged properties.
*/
public Map<String, Object> mergedProducerConfiguration() {
Map<String, Object> producerConfiguration = new HashMap<>();
producerConfiguration.putAll(this.kafkaProperties.buildProducerProperties());
// Copy configured binder properties that apply to producers
for (Map.Entry<String, String> configurationEntry : this.configuration
.entrySet()) {
if (ProducerConfig.configNames().contains(configurationEntry.getKey())) {
producerConfiguration.put(configurationEntry.getKey(),
configurationEntry.getValue());
}
}
producerConfiguration.putAll(this.producerProperties);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(producerConfiguration,
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
private void filterStreamManagedConfiguration(Map<String, Object> configuration) {
if (configuration.containsKey(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG)
&& configuration.get(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG).equals(true)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG) +
ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG + "=true is not supported by the Kafka binder");
configuration.remove(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
if (configuration.containsKey(ConsumerConfig.GROUP_ID_CONFIG)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.GROUP_ID_CONFIG) +
"Use spring.cloud.stream.default.group or spring.cloud.stream.binding.<name>.group to specify " +
"the group instead of " + ConsumerConfig.GROUP_ID_CONFIG);
configuration.remove(ConsumerConfig.GROUP_ID_CONFIG);
}
}
private String constructIgnoredConfigMessage(String config) {
return String.format("Ignoring provided value(s) for '%s'. ", config);
}
private Map<String, Object> getConfigurationWithBootstrapServer(
Map<String, Object> configuration, String bootstrapServersConfig) {
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig))) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
else {
Object bootstrapServers = configuration.get(bootstrapServersConfig);
if (bootstrapServers instanceof List) {
@SuppressWarnings("unchecked")
List<String> bootStrapServers = (List<String>) configuration
.get(bootstrapServersConfig);
if (bootStrapServers.size() == 1
&& bootStrapServers.get(0).equals("localhost:9092")) {
configuration.put(bootstrapServersConfig, getKafkaConnectionString());
}
}
}
return Collections.unmodifiableMap(configuration);
}
public JaasLoginModuleConfiguration getJaas() {
return this.jaas;
}
public void setJaas(JaasLoginModuleConfiguration jaas) {
this.jaas = jaas;
}
public String getHeaderMapperBeanName() {
return this.headerMapperBeanName;
}
public void setHeaderMapperBeanName(String headerMapperBeanName) {
this.headerMapperBeanName = headerMapperBeanName;
}
public Duration getAuthorizationExceptionRetryInterval() {
return authorizationExceptionRetryInterval;
}
public void setAuthorizationExceptionRetryInterval(Duration authorizationExceptionRetryInterval) {
this.authorizationExceptionRetryInterval = authorizationExceptionRetryInterval;
}
/**
* Domain class that models transaction capabilities in Kafka.
*/
public static class Transaction {
private final CombinedProducerProperties producer = new CombinedProducerProperties();
private String transactionIdPrefix;
public String getTransactionIdPrefix() {
return this.transactionIdPrefix;
}
public void setTransactionIdPrefix(String transactionIdPrefix) {
this.transactionIdPrefix = transactionIdPrefix;
}
public CombinedProducerProperties getProducer() {
return this.producer;
}
}
/**
* A combination of {@link ProducerProperties} and {@link KafkaProducerProperties} so
* that common and Kafka-specific properties can be set for the transactional
* producer.
*
* @since 2.1
*/
public static class CombinedProducerProperties {
private final ProducerProperties producerProperties = new ProducerProperties();
private final KafkaProducerProperties kafkaProducerProperties = new KafkaProducerProperties();
public Expression getPartitionKeyExpression() {
return this.producerProperties.getPartitionKeyExpression();
}
public void setPartitionKeyExpression(Expression partitionKeyExpression) {
this.producerProperties.setPartitionKeyExpression(partitionKeyExpression);
}
public boolean isPartitioned() {
return this.producerProperties.isPartitioned();
}
public Expression getPartitionSelectorExpression() {
return this.producerProperties.getPartitionSelectorExpression();
}
public void setPartitionSelectorExpression(
Expression partitionSelectorExpression) {
this.producerProperties
.setPartitionSelectorExpression(partitionSelectorExpression);
}
public @Min(value = 1, message = "Partition count should be greater than zero.") int getPartitionCount() {
return this.producerProperties.getPartitionCount();
}
public void setPartitionCount(int partitionCount) {
this.producerProperties.setPartitionCount(partitionCount);
}
public String[] getRequiredGroups() {
return this.producerProperties.getRequiredGroups();
}
public void setRequiredGroups(String... requiredGroups) {
this.producerProperties.setRequiredGroups(requiredGroups);
}
public @AssertTrue(message = "Partition key expression and partition key extractor class properties "
+ "are mutually exclusive.") boolean isValidPartitionKeyProperty() {
return this.producerProperties.isValidPartitionKeyProperty();
}
public @AssertTrue(message = "Partition selector class and partition selector expression "
+ "properties are mutually exclusive.") boolean isValidPartitionSelectorProperty() {
return this.producerProperties.isValidPartitionSelectorProperty();
}
public HeaderMode getHeaderMode() {
return this.producerProperties.getHeaderMode();
}
public void setHeaderMode(HeaderMode headerMode) {
this.producerProperties.setHeaderMode(headerMode);
}
public boolean isUseNativeEncoding() {
return this.producerProperties.isUseNativeEncoding();
}
public void setUseNativeEncoding(boolean useNativeEncoding) {
this.producerProperties.setUseNativeEncoding(useNativeEncoding);
}
public boolean isErrorChannelEnabled() {
return this.producerProperties.isErrorChannelEnabled();
}
public void setErrorChannelEnabled(boolean errorChannelEnabled) {
this.producerProperties.setErrorChannelEnabled(errorChannelEnabled);
}
public String getPartitionKeyExtractorName() {
return this.producerProperties.getPartitionKeyExtractorName();
}
public void setPartitionKeyExtractorName(String partitionKeyExtractorName) {
this.producerProperties
.setPartitionKeyExtractorName(partitionKeyExtractorName);
}
public String getPartitionSelectorName() {
return this.producerProperties.getPartitionSelectorName();
}
public void setPartitionSelectorName(String partitionSelectorName) {
this.producerProperties.setPartitionSelectorName(partitionSelectorName);
}
public int getBufferSize() {
return this.kafkaProducerProperties.getBufferSize();
}
public void setBufferSize(int bufferSize) {
this.kafkaProducerProperties.setBufferSize(bufferSize);
}
public @NotNull CompressionType getCompressionType() {
return this.kafkaProducerProperties.getCompressionType();
}
public void setCompressionType(CompressionType compressionType) {
this.kafkaProducerProperties.setCompressionType(compressionType);
}
public boolean isSync() {
return this.kafkaProducerProperties.isSync();
}
public void setSync(boolean sync) {
this.kafkaProducerProperties.setSync(sync);
}
public int getBatchTimeout() {
return this.kafkaProducerProperties.getBatchTimeout();
}
public void setBatchTimeout(int batchTimeout) {
this.kafkaProducerProperties.setBatchTimeout(batchTimeout);
}
public Expression getMessageKeyExpression() {
return this.kafkaProducerProperties.getMessageKeyExpression();
}
public void setMessageKeyExpression(Expression messageKeyExpression) {
this.kafkaProducerProperties.setMessageKeyExpression(messageKeyExpression);
}
public String[] getHeaderPatterns() {
return this.kafkaProducerProperties.getHeaderPatterns();
}
public void setHeaderPatterns(String[] headerPatterns) {
this.kafkaProducerProperties.setHeaderPatterns(headerPatterns);
}
public Map<String, String> getConfiguration() {
return this.kafkaProducerProperties.getConfiguration();
}
public void setConfiguration(Map<String, String> configuration) {
this.kafkaProducerProperties.setConfiguration(configuration);
}
public KafkaTopicProperties getTopic() {
return this.kafkaProducerProperties.getTopic();
}
public void setTopic(KafkaTopicProperties topic) {
this.kafkaProducerProperties.setTopic(topic);
}
public KafkaProducerProperties getExtension() {
return this.kafkaProducerProperties;
}
}
}
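The broker-list handling in toConnectionString above can be sketched as a small standalone snippet. The class and method names here (ConnectionStringDemo) are illustrative only, not part of the binder:

```java
// Hypothetical sketch of the toConnectionString logic above: hosts that
// already carry a port (or an empty default port) pass through unchanged;
// otherwise the default port is appended before joining with commas.
import java.util.StringJoiner;

public class ConnectionStringDemo {

    static String toConnectionString(String[] hosts, String defaultPort) {
        StringJoiner joiner = new StringJoiner(",");
        for (String host : hosts) {
            joiner.add(host.contains(":") || defaultPort.isEmpty()
                    ? host
                    : host + ":" + defaultPort);
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        // e.g. brokers=kafka1,kafka2:9093 with defaultBrokerPort=9092
        System.out.println(toConnectionString(
                new String[] { "kafka1", "kafka2:9093" }, "9092"));
        // prints: kafka1:9092,kafka2:9093
    }
}
```

This mirrors why the binder's default connection string is localhost:9092: the default broker array is { "localhost" } and the default port is "9092".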


@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,30 +14,50 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
package org.springframework.cloud.stream.binder.kafka.properties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Container object for Kafka specific extended producer and consumer binding properties.
*
* @author Marius Bogoevici
* @author Oleg Zhurakousky
*/
public class KafkaBindingProperties {
public class KafkaBindingProperties implements BinderSpecificPropertiesProvider {
/**
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
private KafkaConsumerProperties consumer = new KafkaConsumerProperties();
/**
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
private KafkaProducerProperties producer = new KafkaProducerProperties();
/**
* @return {@link KafkaConsumerProperties}
* Consumer specific binding properties. @see {@link KafkaConsumerProperties}.
*/
public KafkaConsumerProperties getConsumer() {
return consumer;
return this.consumer;
}
public void setConsumer(KafkaConsumerProperties consumer) {
this.consumer = consumer;
}
/**
* @return {@link KafkaProducerProperties}
* Producer specific binding properties. @see {@link KafkaProducerProperties}.
*/
public KafkaProducerProperties getProducer() {
return producer;
return this.producer;
}
public void setProducer(KafkaProducerProperties producer) {
this.producer = producer;
}
}


@@ -0,0 +1,485 @@
/*
* Copyright 2016-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
/**
* Extended consumer properties for Kafka binder.
*
* @author Marius Bogoevici
* @author Ilayaperumal Gopinathan
* @author Soby Chacko
* @author Gary Russell
* @author Aldo Sinanaj
*
* <p>
* Thanks to Laszlo Szabo for providing the initial patch for generic property support.
* </p>
*/
public class KafkaConsumerProperties {
/**
* Enumeration for starting consumer offset.
*/
public enum StartOffset {
/**
* Starting from earliest offset.
*/
earliest(-2L),
/**
* Starting from latest offset.
*/
latest(-1L);
private final long referencePoint;
StartOffset(long referencePoint) {
this.referencePoint = referencePoint;
}
public long getReferencePoint() {
return this.referencePoint;
}
}
/**
* Standard headers for the message.
*/
public enum StandardHeaders {
/**
* No headers.
*/
none,
/**
* Message header representing ID.
*/
id,
/**
* Message header representing timestamp.
*/
timestamp,
/**
* Indicating both ID and timestamp headers.
*/
both
}
/**
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*/
private boolean ackEachRecord;
/**
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
private boolean autoRebalanceEnabled = true;
/**
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment of the type org.springframework.kafka.support.Acknowledgment header
* is present in the inbound message. Applications may use this header for acknowledging messages.
*/
private boolean autoCommitOffset = true;
/**
* Effective only if autoCommitOffset is set to true.
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
* If set to true, it always auto-commits (if auto-commit is enabled).
* If not set (the default), it effectively has the same value as enableDlq,
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
*/
private Boolean autoCommitOnError;
/**
* The starting offset for new groups. Allowed values: earliest and latest.
*/
private StartOffset startOffset;
/**
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
private boolean resetOffsets;
/**
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configured by setting the dlqName property.
*/
private boolean enableDlq;
/**
* The name of the DLQ topic to receive the error messages.
*/
private String dlqName;
/**
* Number of partitions to use on the DLQ.
*/
private Integer dlqPartitions;
/**
* Using this, DLQ-specific producer properties can be set.
* All the properties available through kafka producer properties can be set through this property.
*/
private KafkaProducerProperties dlqProducerProperties = new KafkaProducerProperties();
/**
* @deprecated No longer used by the binder.
*/
@Deprecated
private int recoveryInterval = 5000;
/**
* List of trusted packages to provide to the header mapper.
*/
private String[] trustedPackages;
/**
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
private StandardHeaders standardHeaders = StandardHeaders.none;
/**
* The name of a bean that implements RecordMessageConverter.
*/
private String converterBeanName;
/**
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
private long idleEventInterval = 30_000;
/**
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
private boolean destinationIsPattern;
/**
* Map with a key/value pair containing generic Kafka consumer properties.
* In addition to having Kafka consumer properties, other configuration properties can be passed here.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Various topic level properties. @see {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Timeout used for polling in pollable consumers.
*/
private long pollTimeout = org.springframework.kafka.listener.ConsumerProperties.DEFAULT_POLL_TIMEOUT;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/**
* @return if each record needs to be acknowledged.
*
* When true the offset is committed after each record, otherwise the offsets for the complete set of records
* received from the poll() are committed after all records have been processed.
*/
public boolean isAckEachRecord() {
return this.ackEachRecord;
}
public void setAckEachRecord(boolean ackEachRecord) {
this.ackEachRecord = ackEachRecord;
}
/**
* @return is autocommit offset enabled
*
* Whether to autocommit offsets when a message has been processed.
* If set to false, a header with the key kafka_acknowledgment of the type org.springframework.kafka.support.Acknowledgment header
* is present in the inbound message. Applications may use this header for acknowledging messages.
*/
public boolean isAutoCommitOffset() {
return this.autoCommitOffset;
}
public void setAutoCommitOffset(boolean autoCommitOffset) {
this.autoCommitOffset = autoCommitOffset;
}
/**
* @return start offset
*
* The starting offset for new groups. Allowed values: earliest and latest.
*/
public StartOffset getStartOffset() {
return this.startOffset;
}
public void setStartOffset(StartOffset startOffset) {
this.startOffset = startOffset;
}
/**
* @return if resetting offset is enabled
*
* Whether to reset offsets on the consumer to the value provided by startOffset.
* Must be false if a KafkaRebalanceListener is provided.
*/
public boolean isResetOffsets() {
return this.resetOffsets;
}
public void setResetOffsets(boolean resetOffsets) {
this.resetOffsets = resetOffsets;
}
/**
* @return is DLQ enabled.
*
* When set to true, it enables DLQ behavior for the consumer.
* By default, messages that result in errors are forwarded to a topic named error.name-of-destination.name-of-group.
* The DLQ topic name can be configured by setting the dlqName property.
*/
public boolean isEnableDlq() {
return this.enableDlq;
}
public void setEnableDlq(boolean enableDlq) {
this.enableDlq = enableDlq;
}
/**
* @return is autocommit on error
*
* Effective only if autoCommitOffset is set to true.
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
* If set to true, it always auto-commits (if auto-commit is enabled).
* If not set (the default), it effectively has the same value as enableDlq,
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
*/
public Boolean getAutoCommitOnError() {
return this.autoCommitOnError;
}
public void setAutoCommitOnError(Boolean autoCommitOnError) {
this.autoCommitOnError = autoCommitOnError;
}
/**
* No longer used.
* @return the interval.
* @deprecated No longer used by the binder
*/
@Deprecated
public int getRecoveryInterval() {
return this.recoveryInterval;
}
/**
* No longer used.
* @param recoveryInterval the interval.
* @deprecated No longer needed by the binder
*/
@Deprecated
public void setRecoveryInterval(int recoveryInterval) {
this.recoveryInterval = recoveryInterval;
}
/**
* @return is auto rebalance enabled
*
* When true, topic partitions are automatically rebalanced between the members of a consumer group.
* When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex.
*/
public boolean isAutoRebalanceEnabled() {
return this.autoRebalanceEnabled;
}
public void setAutoRebalanceEnabled(boolean autoRebalanceEnabled) {
this.autoRebalanceEnabled = autoRebalanceEnabled;
}
/**
* @return a map of configuration
*
* Map with a key/value pair containing generic Kafka consumer properties.
* In addition to having Kafka consumer properties, other configuration properties can be passed here.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
/**
* @return dlq name
*
* The name of the DLQ topic to receive the error messages.
*/
public String getDlqName() {
return this.dlqName;
}
public void setDlqName(String dlqName) {
this.dlqName = dlqName;
}
/**
* @return number of partitions on the DLQ topic
*
* Number of partitions to use on the DLQ.
*/
public Integer getDlqPartitions() {
return this.dlqPartitions;
}
public void setDlqPartitions(Integer dlqPartitions) {
this.dlqPartitions = dlqPartitions;
}
/**
* @return trusted packages
*
* List of trusted packages to provide to the header mapper.
*/
public String[] getTrustedPackages() {
return this.trustedPackages;
}
public void setTrustedPackages(String[] trustedPackages) {
this.trustedPackages = trustedPackages;
}
/**
* @return dlq producer properties
*
* Using this, DLQ-specific producer properties can be set.
* All the properties available through kafka producer properties can be set through this property.
*/
public KafkaProducerProperties getDlqProducerProperties() {
return this.dlqProducerProperties;
}
public void setDlqProducerProperties(KafkaProducerProperties dlqProducerProperties) {
this.dlqProducerProperties = dlqProducerProperties;
}
/**
* @return standard headers
*
* Indicates which standard headers are populated by the inbound channel adapter.
* Allowed values: none, id, timestamp, or both.
*/
public StandardHeaders getStandardHeaders() {
return this.standardHeaders;
}
public void setStandardHeaders(StandardHeaders standardHeaders) {
this.standardHeaders = standardHeaders;
}
/**
* @return converter bean name
*
* The name of a bean that implements RecordMessageConverter.
*/
public String getConverterBeanName() {
return this.converterBeanName;
}
public void setConverterBeanName(String converterBeanName) {
this.converterBeanName = converterBeanName;
}
/**
* @return idle event interval
*
* The interval, in milliseconds, between events indicating that no messages have recently been received.
*/
public long getIdleEventInterval() {
return this.idleEventInterval;
}
public void setIdleEventInterval(long idleEventInterval) {
this.idleEventInterval = idleEventInterval;
}
/**
* @return is destination given through a pattern
*
* When true, the destination is treated as a regular expression Pattern used to match topic names by the broker.
*/
public boolean isDestinationIsPattern() {
return this.destinationIsPattern;
}
public void setDestinationIsPattern(boolean destinationIsPattern) {
this.destinationIsPattern = destinationIsPattern;
}
/**
* @return topic properties
*
* Various topic level properties. @see {@link KafkaTopicProperties} for more details.
*/
public KafkaTopicProperties getTopic() {
return this.topic;
}
public void setTopic(KafkaTopicProperties topic) {
this.topic = topic;
}
/**
* @return timeout in pollable consumers
*
* Timeout used for polling in pollable consumers.
*/
public long getPollTimeout() {
return this.pollTimeout;
}
public void setPollTimeout(long pollTimeout) {
this.pollTimeout = pollTimeout;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
}
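The enableDlq/dlqName documentation above describes the default dead-letter topic naming convention: error.name-of-destination.name-of-group, unless dlqName overrides it. That rule can be sketched as a tiny helper; the class and method names (DlqNamingDemo, dlqTopic) are hypothetical and not binder API:

```java
// Hypothetical sketch of the default DLQ topic naming described above:
// an explicit dlqName wins, otherwise error.<destination>.<group> is used.
public class DlqNamingDemo {

    static String dlqTopic(String dlqName, String destination, String group) {
        return (dlqName != null && !dlqName.isEmpty())
                ? dlqName
                : "error." + destination + "." + group;
    }

    public static void main(String[] args) {
        System.out.println(dlqTopic(null, "orders", "audit"));     // error.orders.audit
        System.out.println(dlqTopic("my-dlq", "orders", "audit")); // my-dlq
    }
}
```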


@@ -0,0 +1,55 @@
/*
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.stream.binder.AbstractExtendedBindingProperties;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
/**
* Kafka specific extended binding properties class that extends from
* {@link AbstractExtendedBindingProperties}.
*
* @author Marius Bogoevici
* @author Gary Russell
* @author Soby Chacko
* @author Oleg Zhurakousky
*/
@ConfigurationProperties("spring.cloud.stream.kafka")
public class KafkaExtendedBindingProperties extends
AbstractExtendedBindingProperties<KafkaConsumerProperties, KafkaProducerProperties, KafkaBindingProperties> {
private static final String DEFAULTS_PREFIX = "spring.cloud.stream.kafka.default";
@Override
public String getDefaultsPrefix() {
return DEFAULTS_PREFIX;
}
@Override
public Map<String, KafkaBindingProperties> getBindings() {
return this.doGetBindings();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return KafkaBindingProperties.class;
}
}


@@ -0,0 +1,298 @@
/*
* Copyright 2016-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.Map;
import javax.validation.constraints.NotNull;
import org.springframework.expression.Expression;
/**
* Extended producer properties for Kafka binder.
*
* @author Marius Bogoevici
* @author Henryk Konsek
* @author Gary Russell
* @author Aldo Sinanaj
*/
public class KafkaProducerProperties {
/**
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
private int bufferSize = 16384;
/**
* Set the compression.type producer property. Supported values are none, gzip, snappy and lz4.
* See {@link CompressionType} for more details.
*/
private CompressionType compressionType = CompressionType.none;
/**
* Whether the producer is synchronous.
*/
private boolean sync;
/**
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
private Expression sendTimeoutExpression;
/**
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
private int batchTimeout;
/**
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
private Expression messageKeyExpression;
/**
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
private String[] headerPatterns;
/**
* Map of key/value pairs containing generic Kafka producer properties.
*/
private Map<String, String> configuration = new HashMap<>();
/**
* Various topic-level properties. See {@link KafkaTopicProperties} for more details.
*/
private KafkaTopicProperties topic = new KafkaTopicProperties();
/**
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
private boolean useTopicHeader;
/**
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
private String recordMetadataChannel;
/**
* Transaction manager bean name - overrides the binder's transaction configuration.
*/
private String transactionManager;
/**
* @return buffer size
*
* Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
*/
public int getBufferSize() {
return this.bufferSize;
}
public void setBufferSize(int bufferSize) {
this.bufferSize = bufferSize;
}
/**
* @return compression type {@link CompressionType}
*
* Set the compression.type producer property. Supported values are none, gzip, snappy and lz4.
* See {@link CompressionType} for more details.
*/
@NotNull
public CompressionType getCompressionType() {
return this.compressionType;
}
public void setCompressionType(CompressionType compressionType) {
this.compressionType = compressionType;
}
/**
* @return if synchronous sending is enabled
*
* Whether the producer is synchronous.
*/
public boolean isSync() {
return this.sync;
}
public void setSync(boolean sync) {
this.sync = sync;
}
/**
* @return timeout expression for send
*
* A SpEL expression evaluated against the outgoing message used to evaluate the time to wait
* for ack when synchronous publish is enabled.
*/
public Expression getSendTimeoutExpression() {
return this.sendTimeoutExpression;
}
public void setSendTimeoutExpression(Expression sendTimeoutExpression) {
this.sendTimeoutExpression = sendTimeoutExpression;
}
/**
* @return batch timeout
*
* How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
*/
public int getBatchTimeout() {
return this.batchTimeout;
}
public void setBatchTimeout(int batchTimeout) {
this.batchTimeout = batchTimeout;
}
/**
* @return message key expression
*
* A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message.
*/
public Expression getMessageKeyExpression() {
return this.messageKeyExpression;
}
public void setMessageKeyExpression(Expression messageKeyExpression) {
this.messageKeyExpression = messageKeyExpression;
}
/**
* @return header patterns
*
* A comma-delimited list of simple patterns to match Spring messaging headers
* to be mapped to the Kafka Headers in the ProducerRecord.
*/
public String[] getHeaderPatterns() {
return this.headerPatterns;
}
public void setHeaderPatterns(String[] headerPatterns) {
this.headerPatterns = headerPatterns;
}
/**
* @return map of configuration
*
* Map of key/value pairs containing generic Kafka producer properties.
*/
public Map<String, String> getConfiguration() {
return this.configuration;
}
public void setConfiguration(Map<String, String> configuration) {
this.configuration = configuration;
}
/**
* @return topic properties
*
* Various topic-level properties. See {@link KafkaTopicProperties} for more details.
*/
public KafkaTopicProperties getTopic() {
return this.topic;
}
public void setTopic(KafkaTopicProperties topic) {
this.topic = topic;
}
/**
* @return if using topic header
*
* Set to true to override the default binding destination (topic name) with the value of the
* KafkaHeaders.TOPIC message header in the outbound message. If the header is not present,
* the default binding destination is used.
*/
public boolean isUseTopicHeader() {
return this.useTopicHeader;
}
public void setUseTopicHeader(boolean useTopicHeader) {
this.useTopicHeader = useTopicHeader;
}
/**
* @return record metadata channel
*
* The bean name of a MessageChannel to which successful send results should be sent;
* the bean must exist in the application context.
*/
public String getRecordMetadataChannel() {
return this.recordMetadataChannel;
}
public void setRecordMetadataChannel(String recordMetadataChannel) {
this.recordMetadataChannel = recordMetadataChannel;
}
/**
* @return the transaction manager bean name.
*
* Transaction manager bean name (must be a {@code KafkaAwareTransactionManager}).
*/
public String getTransactionManager() {
return this.transactionManager;
}
public void setTransactionManager(String transactionManager) {
this.transactionManager = transactionManager;
}
/**
* Enumeration for compression types.
*/
public enum CompressionType {
/**
* No compression.
*/
none,
/**
* gzip based compression.
*/
gzip,
/**
* snappy based compression.
*/
snappy,
/**
* lz4 compression.
*/
lz4,
// /** // TODO: uncomment and fix docs when kafka-clients 2.1.0 or newer is the
// default
// * zstd compression
// */
// zstd
}
}


@@ -0,0 +1,62 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Properties for configuring topics.
*
* @author Aldo Sinanaj
* @since 2.2
*
*/
public class KafkaTopicProperties {
private Short replicationFactor;
private Map<Integer, List<Integer>> replicasAssignments = new HashMap<>();
private Map<String, String> properties = new HashMap<>();
public Short getReplicationFactor() {
return replicationFactor;
}
public void setReplicationFactor(Short replicationFactor) {
this.replicationFactor = replicationFactor;
}
public Map<Integer, List<Integer>> getReplicasAssignments() {
return replicasAssignments;
}
public void setReplicasAssignments(Map<Integer, List<Integer>> replicasAssignments) {
this.replicasAssignments = replicasAssignments;
}
public Map<String, String> getProperties() {
return properties;
}
public void setProperties(Map<String, String> properties) {
this.properties = properties;
}
}
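The `replicasAssignments` map above keys each partition number to the list of broker ids that should host that partition's replicas. A minimal, dependency-free sketch of building such a map (the class name and broker ids 0..2 are invented for illustration, not part of the binder):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a replicas-assignments map in the shape consumed by
// KafkaTopicProperties: partition number -> broker ids hosting that
// partition's replicas. Broker ids are illustrative only.
public class ReplicasAssignmentsSketch {

    public static Map<Integer, List<Integer>> threePartitionsTwoReplicas() {
        Map<Integer, List<Integer>> assignments = new HashMap<>();
        assignments.put(0, List.of(0, 1)); // partition 0: replicas on brokers 0 and 1
        assignments.put(1, List.of(1, 2)); // partition 1: replicas on brokers 1 and 2
        assignments.put(2, List.of(2, 0)); // partition 2: replicas on brokers 2 and 0
        return assignments;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> assignments = threePartitionsTwoReplicas();
        System.out.println(assignments.size());        // number of partitions
        System.out.println(assignments.get(0).size()); // effective replication factor
    }
}
```

When this map is non-empty, the provisioner uses it instead of `replicationFactor`, so the two settings are alternatives rather than complements.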


@@ -0,0 +1,598 @@
/*
* Copyright 2014-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreatePartitionsResult;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.BinderException;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaProducerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaTopicProperties;
import org.springframework.cloud.stream.binder.kafka.utils.KafkaTopicUtils;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
import org.springframework.cloud.stream.provisioning.ProvisioningException;
import org.springframework.cloud.stream.provisioning.ProvisioningProvider;
import org.springframework.retry.RetryOperations;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
/**
* Kafka implementation for {@link ProvisioningProvider}.
*
* @author Soby Chacko
* @author Gary Russell
* @author Ilayaperumal Gopinathan
* @author Simon Flandergan
* @author Oleg Zhurakousky
* @author Aldo Sinanaj
*/
public class KafkaTopicProvisioner implements
// @checkstyle:off
ProvisioningProvider<ExtendedConsumerProperties<KafkaConsumerProperties>, ExtendedProducerProperties<KafkaProducerProperties>>,
// @checkstyle:on
InitializingBean {
private static final Log logger = LogFactory.getLog(KafkaTopicProvisioner.class);
private static final int DEFAULT_OPERATION_TIMEOUT = 30;
private final KafkaBinderConfigurationProperties configurationProperties;
private final int operationTimeout = DEFAULT_OPERATION_TIMEOUT;
private final Map<String, Object> adminClientProperties;
private RetryOperations metadataRetryOperations;
/**
* Create an instance.
* @param kafkaBinderConfigurationProperties the binder configuration properties.
* @param kafkaProperties the boot Kafka properties used to build the
* {@link AdminClient}.
*/
public KafkaTopicProvisioner(
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
KafkaProperties kafkaProperties) {
Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
this.adminClientProperties = kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaBinderConfigurationProperties;
normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
kafkaBinderConfigurationProperties);
}
/**
* Mutator for metadata retry operations.
* @param metadataRetryOperations the retry configuration
*/
public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
this.metadataRetryOperations = metadataRetryOperations;
}
@Override
public void afterPropertiesSet() {
if (this.metadataRetryOperations == null) {
RetryTemplate retryTemplate = new RetryTemplate();
SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
simpleRetryPolicy.setMaxAttempts(10);
retryTemplate.setRetryPolicy(simpleRetryPolicy);
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(100);
backOffPolicy.setMultiplier(2);
backOffPolicy.setMaxInterval(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
this.metadataRetryOperations = retryTemplate;
}
}
@Override
public ProducerDestination provisionProducerDestination(final String name,
ExtendedProducerProperties<KafkaProducerProperties> properties) {
if (logger.isInfoEnabled()) {
logger.info("Using kafka topic for outbound: " + name);
}
KafkaTopicUtils.validateTopicName(name);
try (AdminClient adminClient = AdminClient.create(this.adminClientProperties)) {
createTopic(adminClient, name, properties.getPartitionCount(), false,
properties.getExtension().getTopic());
int partitions = 0;
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
Map<String, TopicDescription> topicDescriptions = null;
try {
topicDescriptions = all.get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception ex) {
throw new ProvisioningException(
"Problems encountered while finding partitions", ex);
}
TopicDescription topicDescription = topicDescriptions.get(name);
partitions = topicDescription.partitions().size();
}
return new KafkaProducerDestination(name, partitions);
}
}
@Override
public ConsumerDestination provisionConsumerDestination(final String name,
final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (!properties.isMultiplex()) {
return doProvisionConsumerDestination(name, group, properties);
}
else {
String[] destinations = StringUtils.commaDelimitedListToStringArray(name);
for (String destination : destinations) {
doProvisionConsumerDestination(destination.trim(), group, properties);
}
return new KafkaConsumerDestination(name);
}
}
private ConsumerDestination doProvisionConsumerDestination(final String name,
final String group,
ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
if (properties.getExtension().isDestinationIsPattern()) {
Assert.isTrue(!properties.getExtension().isEnableDlq(),
"enableDLQ is not allowed when listening to topic patterns");
if (logger.isDebugEnabled()) {
logger.debug("Listening to a topic pattern - " + name
+ " - no provisioning performed");
}
return new KafkaConsumerDestination(name);
}
KafkaTopicUtils.validateTopicName(name);
boolean anonymous = !StringUtils.hasText(group);
Assert.isTrue(!anonymous || !properties.getExtension().isEnableDlq(),
"DLQ support is not available for anonymous subscriptions");
if (properties.getInstanceCount() == 0) {
throw new IllegalArgumentException("Instance count cannot be zero");
}
int partitionCount = properties.getInstanceCount() * properties.getConcurrency();
ConsumerDestination consumerDestination = new KafkaConsumerDestination(name);
try (AdminClient adminClient = createAdminClient()) {
createTopic(adminClient, name, partitionCount,
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
if (this.configurationProperties.isAutoCreateTopics()) {
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(name));
KafkaFuture<Map<String, TopicDescription>> all = describeTopicsResult
.all();
try {
Map<String, TopicDescription> topicDescriptions = all
.get(this.operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(name);
int partitions = topicDescription.partitions().size();
consumerDestination = createDlqIfNeedBe(adminClient, name, group,
properties, anonymous, partitions);
if (consumerDestination == null) {
consumerDestination = new KafkaConsumerDestination(name,
partitions);
}
}
catch (Exception ex) {
throw new ProvisioningException("provisioning exception", ex);
}
}
}
return consumerDestination;
}
AdminClient createAdminClient() {
return AdminClient.create(this.adminClientProperties);
}
/**
* In general, binder properties supersede boot kafka properties. The one exception is
* the bootstrap servers. In that case, we should only override the boot properties if
* (there is a binder property AND it is a non-default value) OR (if there is no boot
* property); this is needed because the binder property never returns a null value.
* @param adminProps the admin properties to normalize.
* @param bootProps the boot kafka properties.
* @param binderProps the binder kafka properties.
*/
public static void normalalizeBootPropsWithBinder(Map<String, Object> adminProps,
KafkaProperties bootProps, KafkaBinderConfigurationProperties binderProps) {
// First deal with the outlier
String kafkaConnectionString = binderProps.getKafkaConnectionString();
if (ObjectUtils
.isEmpty(adminProps.get(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG))
|| !kafkaConnectionString
.equals(binderProps.getDefaultKafkaConnectionString())) {
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
// Now override any boot values with binder values
Map<String, String> binderProperties = binderProps.getConfiguration();
Set<String> adminConfigNames = AdminClientConfig.configNames();
binderProperties.forEach((key, value) -> {
if (key.equals(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)) {
throw new IllegalStateException(
"Set binder bootstrap servers via the 'brokers' property, not 'configuration'");
}
if (adminConfigNames.contains(key)) {
Object replaced = adminProps.put(key, value);
if (replaced != null && KafkaTopicProvisioner.logger.isDebugEnabled()) {
KafkaTopicProvisioner.logger.debug("Overrode boot property: [" + key + "], from: ["
+ replaced + "] to: [" + value + "]");
}
}
});
}
private ConsumerDestination createDlqIfNeedBe(AdminClient adminClient, String name,
String group, ExtendedConsumerProperties<KafkaConsumerProperties> properties,
boolean anonymous, int partitions) {
if (properties.getExtension().isEnableDlq() && !anonymous) {
String dlqTopic = StringUtils.hasText(properties.getExtension().getDlqName())
? properties.getExtension().getDlqName()
: "error." + name + "." + group;
int dlqPartitions = properties.getExtension().getDlqPartitions() == null
? partitions
: properties.getExtension().getDlqPartitions();
try {
createTopicAndPartitions(adminClient, dlqTopic, dlqPartitions,
properties.getExtension().isAutoRebalanceEnabled(),
properties.getExtension().getTopic());
}
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
throw new ProvisioningException("provisioning exception", throwable);
}
}
return new KafkaConsumerDestination(name, partitions, dlqTopic);
}
return null;
}
private void createTopic(AdminClient adminClient, String name, int partitionCount,
boolean tolerateLowerPartitionsOnBroker, KafkaTopicProperties properties) {
try {
createTopicIfNecessary(adminClient, name, partitionCount,
tolerateLowerPartitionsOnBroker, properties);
}
// TODO: Remove catching Throwable. See this thread:
// TODO:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
catch (Throwable throwable) {
if (throwable instanceof Error) {
throw (Error) throwable;
}
else {
// TODO:
// https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/514#discussion_r241075940
throw new ProvisioningException("Provisioning exception", throwable);
}
}
}
private void createTopicIfNecessary(AdminClient adminClient, final String topicName,
final int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaTopicProperties properties) throws Throwable {
if (this.configurationProperties.isAutoCreateTopics()) {
createTopicAndPartitions(adminClient, topicName, partitionCount,
tolerateLowerPartitionsOnBroker, properties);
}
else {
logger.info("Auto creation of topics is disabled.");
}
}
/**
* Creates a Kafka topic if needed, or tries to increase its partition count to
* the desired number.
* @param adminClient kafka admin client
* @param topicName topic name
* @param partitionCount partition count
* @param tolerateLowerPartitionsOnBroker whether a lower partition count on the
* broker is tolerated or not
* @param topicProperties kafka topic properties
* @throws Throwable from topic creation
*/
private void createTopicAndPartitions(AdminClient adminClient, final String topicName,
final int partitionCount, boolean tolerateLowerPartitionsOnBroker,
KafkaTopicProperties topicProperties) throws Throwable {
ListTopicsResult listTopicsResult = adminClient.listTopics();
KafkaFuture<Set<String>> namesFutures = listTopicsResult.names();
Set<String> names = namesFutures.get(this.operationTimeout, TimeUnit.SECONDS);
if (names.contains(topicName)) {
// only consider minPartitionCount for resizing if autoAddPartitions is true
int effectivePartitionCount = this.configurationProperties
.isAutoAddPartitions()
? Math.max(
this.configurationProperties.getMinPartitionCount(),
partitionCount)
: partitionCount;
DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
KafkaFuture<Map<String, TopicDescription>> topicDescriptionsFuture = describeTopicsResult
.all();
Map<String, TopicDescription> topicDescriptions = topicDescriptionsFuture
.get(this.operationTimeout, TimeUnit.SECONDS);
TopicDescription topicDescription = topicDescriptions.get(topicName);
int partitionSize = topicDescription.partitions().size();
if (partitionSize < effectivePartitionCount) {
if (this.configurationProperties.isAutoAddPartitions()) {
CreatePartitionsResult partitions = adminClient
.createPartitions(Collections.singletonMap(topicName,
NewPartitions.increaseTo(effectivePartitionCount)));
partitions.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
else if (tolerateLowerPartitionsOnBroker) {
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead. There will be "
+ (effectivePartitionCount - partitionSize)
+ " idle consumers");
}
else {
throw new ProvisioningException(
"The number of expected partitions was: " + partitionCount
+ ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead. "
+ "Consider either increasing the partition count of the topic or enabling "
+ "`autoAddPartitions`");
}
}
}
else {
// always consider minPartitionCount for topic creation
final int effectivePartitionCount = Math.max(
this.configurationProperties.getMinPartitionCount(), partitionCount);
this.metadataRetryOperations.execute((context) -> {
NewTopic newTopic;
Map<Integer, List<Integer>> replicasAssignments = topicProperties
.getReplicasAssignments();
if (replicasAssignments != null && replicasAssignments.size() > 0) {
newTopic = new NewTopic(topicName,
topicProperties.getReplicasAssignments());
}
else {
newTopic = new NewTopic(topicName, effectivePartitionCount,
topicProperties.getReplicationFactor() != null
? topicProperties.getReplicationFactor()
: this.configurationProperties
.getReplicationFactor());
}
if (topicProperties.getProperties().size() > 0) {
newTopic.configs(topicProperties.getProperties());
}
CreateTopicsResult createTopicsResult = adminClient
.createTopics(Collections.singletonList(newTopic));
try {
createTopicsResult.all().get(this.operationTimeout, TimeUnit.SECONDS);
}
catch (Exception ex) {
if (ex instanceof ExecutionException) {
if (ex.getCause() instanceof TopicExistsException) {
if (logger.isWarnEnabled()) {
logger.warn("Attempt to create topic: " + topicName
+ ". Topic already exists.");
}
}
else {
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
else {
logger.error("Failed to create topics", ex.getCause());
throw ex.getCause();
}
}
return null;
});
}
}
/**
* Check that the topic has the expected number of partitions and return the partition information.
* @param partitionCount the expected count.
* @param tolerateLowerPartitionsOnBroker if false, throw an exception if there are not enough partitions.
* @param callable a Callable that will provide the partition information.
* @param topicName the topic.
* @return the partition information.
*/
public Collection<PartitionInfo> getPartitionsForTopic(final int partitionCount,
final boolean tolerateLowerPartitionsOnBroker,
final Callable<Collection<PartitionInfo>> callable, final String topicName) {
try {
return this.metadataRetryOperations.execute((context) -> {
Collection<PartitionInfo> partitions = Collections.emptyList();
try {
// This call may return null or throw an exception.
partitions = callable.call();
}
catch (Exception ex) {
// The above call can potentially throw exceptions such as a timeout.
// If we can determine that the exception was due to an unknown topic
// on the broker, simply rethrow it.
if (ex instanceof UnknownTopicOrPartitionException) {
throw ex;
}
logger.error("Failed to obtain partition information", ex);
}
// In some cases, the above partition query may not throw an UnknownTopicOrPartitionException.
// To handle that, force another query to ensure that the topic is present on the server.
if (CollectionUtils.isEmpty(partitions)) {
try (AdminClient adminClient = AdminClient
.create(this.adminClientProperties)) {
final DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
describeTopicsResult.all().get();
}
catch (ExecutionException ex) {
if (ex.getCause() instanceof UnknownTopicOrPartitionException) {
throw (UnknownTopicOrPartitionException) ex.getCause();
}
else {
logger.warn("No partitions have been retrieved for the topic "
+ "(" + topicName
+ "). This will affect the health check.");
}
}
}
// do a sanity check on the partition set
int partitionSize = CollectionUtils.isEmpty(partitions) ? 0 : partitions.size();
if (partitionSize < partitionCount) {
if (tolerateLowerPartitionsOnBroker) {
logger.warn("The number of expected partitions was: "
+ partitionCount + ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead. There will be "
+ (partitionCount - partitionSize) + " idle consumers");
}
else {
throw new IllegalStateException(
"The number of expected partitions was: " + partitionCount
+ ", but " + partitionSize
+ (partitionSize > 1 ? " have " : " has ")
+ "been found instead");
}
}
return partitions;
});
}
catch (Exception ex) {
logger.error("Cannot initialize Binder", ex);
throw new BinderException("Cannot initialize binder:", ex);
}
}
private static final class KafkaProducerDestination implements ProducerDestination {
private final String producerDestinationName;
private final int partitions;
KafkaProducerDestination(String destinationName, Integer partitions) {
this.producerDestinationName = destinationName;
this.partitions = partitions;
}
@Override
public String getName() {
return this.producerDestinationName;
}
@Override
public String getNameForPartition(int partition) {
return this.producerDestinationName;
}
@Override
public String toString() {
return "KafkaProducerDestination{" + "producerDestinationName='"
+ producerDestinationName + '\'' + ", partitions=" + partitions + '}';
}
}
private static final class KafkaConsumerDestination implements ConsumerDestination {
private final String consumerDestinationName;
private final int partitions;
private final String dlqName;
KafkaConsumerDestination(String consumerDestinationName) {
this(consumerDestinationName, 0, null);
}
KafkaConsumerDestination(String consumerDestinationName, int partitions) {
this(consumerDestinationName, partitions, null);
}
KafkaConsumerDestination(String consumerDestinationName, Integer partitions,
String dlqName) {
this.consumerDestinationName = consumerDestinationName;
this.partitions = partitions;
this.dlqName = dlqName;
}
@Override
public String getName() {
return this.consumerDestinationName;
}
@Override
public String toString() {
return "KafkaConsumerDestination{" + "consumerDestinationName='"
+ consumerDestinationName + '\'' + ", partitions=" + partitions
+ ", dlqName='" + dlqName + '\'' + '}';
}
}
}
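The partition-count handling in `createTopicAndPartitions` above follows a simple rule: for an existing topic, `minPartitionCount` is only factored into resizing when `autoAddPartitions` is enabled, while for a new topic it always applies. A minimal, dependency-free sketch of that decision (the class and method names are illustrative, not part of the binder API):

```java
// Illustrative sketch of the effective-partition-count rules used by
// KafkaTopicProvisioner.createTopicAndPartitions; the names here are
// invented for this example and are not binder API.
public class EffectivePartitions {

    // Existing topic: minPartitionCount is only considered for resizing
    // when autoAddPartitions is enabled.
    public static int forExistingTopic(boolean autoAddPartitions,
            int minPartitionCount, int requestedPartitions) {
        return autoAddPartitions
                ? Math.max(minPartitionCount, requestedPartitions)
                : requestedPartitions;
    }

    // New topic: minPartitionCount is always considered.
    public static int forNewTopic(int minPartitionCount, int requestedPartitions) {
        return Math.max(minPartitionCount, requestedPartitions);
    }

    public static void main(String[] args) {
        // Existing topic, autoAddPartitions disabled: the requested count wins.
        System.out.println(forExistingTopic(false, 10, 3)); // 3
        // Existing topic, autoAddPartitions enabled: minPartitionCount can raise it.
        System.out.println(forExistingTopic(true, 10, 3)); // 10
        // New topic: minPartitionCount always applies.
        System.out.println(forNewTopic(10, 3)); // 10
    }
}
```

If the resulting count exceeds the broker's current partition count and `autoAddPartitions` is off, the provisioner either warns (when lower counts are tolerated) or fails with a `ProvisioningException`, as shown in the code above.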


@@ -0,0 +1,76 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.utils;
import org.apache.commons.logging.Log;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.lang.Nullable;
/**
* A TriFunction that takes a consumer group, consumer record, and throwable, and
* returns the dead letter topic partition to publish to. Returning {@code null} means Kafka
* will choose the partition.
*
* @author Gary Russell
* @since 3.0
*
*/
@FunctionalInterface
public interface DlqPartitionFunction {
/**
* Returns the same partition as the original record.
*/
DlqPartitionFunction ORIGINAL_PARTITION = (group, rec, ex) -> rec.partition();
/**
* Returns 0.
*/
DlqPartitionFunction PARTITION_ZERO = (group, rec, ex) -> 0;
/**
* Apply the function.
* @param group the consumer group.
* @param record the consumer record.
* @param throwable the exception.
* @return the DLQ partition, or null.
*/
@Nullable
Integer apply(String group, ConsumerRecord<?, ?> record, Throwable throwable);
/**
* Determine the fallback function to use based on the dlq partition count if no
* {@link DlqPartitionFunction} bean is provided.
* @param dlqPartitions the partition count.
* @param logger the logger.
* @return the fallback.
*/
static DlqPartitionFunction determineFallbackFunction(@Nullable Integer dlqPartitions, Log logger) {
if (dlqPartitions == null) {
return ORIGINAL_PARTITION;
}
else if (dlqPartitions > 1) {
logger.error("'dlqPartitions' is > 1 but a custom DlqPartitionFunction bean is not provided");
return ORIGINAL_PARTITION;
}
else {
return PARTITION_ZERO;
}
}
}
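When the DLQ has more than one partition, a `DlqPartitionFunction` bean overrides the fallbacks above. A minimal sketch (the configuration class and bean names are hypothetical, and the routing rule is purely illustrative):

```java
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.serializer.DeserializationException;

// Hypothetical configuration; the routing rule is illustrative only.
@Configuration
public class DlqRoutingConfiguration {

	// Send records that failed deserialization to partition 0 of the DLQ and
	// everything else to the record's original partition; returning null
	// would let Kafka choose the partition instead.
	@Bean
	public DlqPartitionFunction dlqPartitionFunction() {
		return (group, record, throwable) ->
				throwable instanceof DeserializationException ? 0 : record.partition();
	}
}
```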


@@ -1,11 +1,11 @@
/*
* Copyright 2016 the original author or authors.
* Copyright 2016-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,11 +14,13 @@
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
package org.springframework.cloud.stream.binder.kafka.utils;
import java.io.UnsupportedEncodingException;
/**
* Utility methods related to Kafka topics.
*
* @author Soby Chacko
*/
public final class KafkaTopicUtils {
@@ -28,21 +30,25 @@ public final class KafkaTopicUtils {
}
/**
* Allowed chars are ASCII alphanumerics, '.', '_' and '-'.
* Validate topic name. Allowed chars are ASCII alphanumerics, '.', '_' and '-'.
* @param topicName name of the topic
*/
public static void validateTopicName(String topicName) {
try {
byte[] utf8 = topicName.getBytes("UTF-8");
for (byte b : utf8) {
if (!((b >= 'a') && (b <= 'z') || (b >= 'A') && (b <= 'Z') || (b >= '0') && (b <= '9') || (b == '.')
|| (b == '-') || (b == '_'))) {
if (!((b >= 'a') && (b <= 'z') || (b >= 'A') && (b <= 'Z')
|| (b >= '0') && (b <= '9') || (b == '.') || (b == '-')
|| (b == '_'))) {
throw new IllegalArgumentException(
"Topic name can only have ASCII alphanumerics, '.', '_' and '-'");
"Topic name can only have ASCII alphanumerics, '.', '_' and '-', but was: '"
+ topicName + "'");
}
}
}
catch (UnsupportedEncodingException e) {
throw new AssertionError(e); // Can't happen
catch (UnsupportedEncodingException ex) {
throw new AssertionError(ex); // Can't happen
}
}
}
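The validation rule above can be exercised in isolation; a standalone sketch (class and method names are illustrative, not part of the binder API):

```java
// Standalone sketch of the topic-name rule enforced by KafkaTopicUtils:
// only ASCII alphanumerics, '.', '_' and '-' are allowed. Any multi-byte
// UTF-8 character encodes to bytes outside these ranges, so it is rejected.
public class TopicNameCheck {

	static boolean isValidTopicName(String topicName) {
		for (byte b : topicName.getBytes(java.nio.charset.StandardCharsets.UTF_8)) {
			boolean ok = (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z')
					|| (b >= '0' && b <= '9') || b == '.' || b == '-' || b == '_';
			if (!ok) {
				return false;
			}
		}
		return true;
	}

	public static void main(String[] args) {
		System.out.println(isValidTopicName("error.orders.my-group")); // true
		System.out.println(isValidTopicName("orders topic"));          // false: space
		System.out.println(isValidTopicName("orders#1"));              // false: '#'
	}
}
```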


@@ -0,0 +1,108 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaBinderConfigurationPropertiesTest {
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setGroupId("group1");
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setEnableAutoCommit(true);
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConsumerProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConsumerProps() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
}


@@ -0,0 +1,118 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.provisioning;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.network.SslChannelBuilder;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.core.io.ClassPathResource;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.fail;
/**
* @author Gary Russell
* @since 2.0
*
*/
public class KafkaTopicProvisionerTests {
@SuppressWarnings("rawtypes")
@Test
public void bootPropertiesOverriddenExceptServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
"PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:1234"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG,
"SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG,
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:9092");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder.configs", Map.class);
assertThat(
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
.isEqualTo("localhost:1234");
adminClient.close();
}
@SuppressWarnings("rawtypes")
@Test
public void bootPropertiesOverriddenIncludingServers() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
bootConfig.getProperties().put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
"PLAINTEXT");
bootConfig.setBootstrapServers(Collections.singletonList("localhost:9092"));
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG,
"SSL");
ClassPathResource ts = new ClassPathResource("test.truststore.ks");
binderConfig.getConfiguration().put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG,
ts.getFile().getAbsolutePath());
binderConfig.setBrokers("localhost:1234");
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
bootConfig);
AdminClient adminClient = provisioner.createAdminClient();
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
Map configs = KafkaTestUtils.getPropertyValue(adminClient,
"client.selector.channelBuilder.configs", Map.class);
assertThat(
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
.isEqualTo("localhost:1234");
adminClient.close();
}
@Test
public void brokersInvalid() throws Exception {
KafkaProperties bootConfig = new KafkaProperties();
KafkaBinderConfigurationProperties binderConfig = new KafkaBinderConfigurationProperties(
bootConfig);
binderConfig.getConfiguration().put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
"localhost:1234");
try {
new KafkaTopicProvisioner(binderConfig, bootConfig);
fail("Expected illegal state");
}
catch (IllegalStateException e) {
assertThat(e.getMessage()).isEqualTo(
"Set binder bootstrap servers via the 'brokers' property, not 'configuration'");
}
}
}


@@ -1,337 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>1.1.0.M1</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-docs</artifactId>
<name>spring-cloud-stream-binder-kafka-docs</name>
<description>Spring Cloud Stream Kafka Binder Docs</description>
<properties>
<main.basedir>${basedir}/..</main.basedir>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
<version>1.1.0.M1</version>
</dependency>
</dependencies>
<profiles>
<profile>
<id>full</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>xml-maven-plugin</artifactId>
<version>1.0</version>
<executions>
<execution>
<goals>
<goal>transform</goal>
</goals>
</execution>
</executions>
<configuration>
<transformationSets>
<transformationSet>
<dir>${project.build.directory}/external-resources</dir>
<stylesheet>src/main/xslt/dependencyVersions.xsl</stylesheet>
<fileMappers>
<fileMapper implementation="org.codehaus.plexus.components.io.filemappers.FileExtensionMapper">
<targetExtension>.adoc</targetExtension>
</fileMapper>
</fileMappers>
<outputDir>${project.build.directory}/generated-resources</outputDir>
</transformationSet>
</transformationSets>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<executions>
<execution>
<id>attach-javadocs</id>
<goals>
<goal>jar</goal>
</goals>
<phase>prepare-package</phase>
<configuration>
<includeDependencySources>true</includeDependencySources>
<dependencySourceIncludes>
<dependencySourceInclude>${project.groupId}:*</dependencySourceInclude>
</dependencySourceIncludes>
<attach>false</attach>
<quiet>true</quiet>
<stylesheetfile>${basedir}/src/main/javadoc/spring-javadoc.css</stylesheetfile>
<links>
<link>http://docs.spring.io/spring-framework/docs/${spring.version}/javadoc-api/</link>
<link>http://docs.spring.io/spring-shell/docs/current/api/</link>
</links>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>1.5.0</version>
<executions>
<execution>
<id>generate-docbook</id>
<phase>generate-resources</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<sourceDocumentName>index.adoc</sourceDocumentName>
<backend>docbook5</backend>
<doctype>book</doctype>
<attributes>
<docinfo>true</docinfo>
<spring-cloud-stream-binder-kafka-version>${project.version}</spring-cloud-stream-binder-kafka-version>
<spring-cloud-stream-binder-kafka-docs-version>${project.version}</spring-cloud-stream-binder-kafka-docs-version>
<github-tag>${github-tag}</github-tag>
</attributes>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>com.agilejava.docbkx</groupId>
<artifactId>docbkx-maven-plugin</artifactId>
<version>2.0.15</version>
<configuration>
<sourceDirectory>${basedir}/target/generated-docs</sourceDirectory>
<includes>index.xml</includes>
<xincludeSupported>true</xincludeSupported>
<chunkedOutput>false</chunkedOutput>
<foCustomization>${basedir}/src/main/docbook/xsl/pdf.xsl</foCustomization>
<useExtensions>1</useExtensions>
<highlightSource>1</highlightSource>
<highlightXslthlConfig>${basedir}/src/main/docbook/xsl/xslthl-config.xml</highlightXslthlConfig>
</configuration>
<dependencies>
<dependency>
<groupId>net.sf.xslthl</groupId>
<artifactId>xslthl</artifactId>
<version>2.1.0</version>
</dependency>
<dependency>
<groupId>net.sf.docbook</groupId>
<artifactId>docbook-xml</artifactId>
<version>5.0-all</version>
<classifier>resources</classifier>
<type>zip</type>
<scope>runtime</scope>
</dependency>
</dependencies>
<executions>
<execution>
<id>html-single</id>
<goals>
<goal>generate-html</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<htmlCustomization>${basedir}/src/main/docbook/xsl/html-singlepage.xsl</htmlCustomization>
<targetDirectory>${basedir}/target/docbook/htmlsingle</targetDirectory>
<postProcess>
<copy todir="${basedir}/target/contents/reference/htmlsingle">
<fileset dir="${basedir}/target/docbook/htmlsingle">
<include name="**/*.html" />
</fileset>
</copy>
<copy todir="${basedir}/target/contents/reference/htmlsingle">
<fileset dir="${basedir}/src/main/docbook">
<include name="**/*.css" />
<include name="**/*.png" />
<include name="**/*.gif" />
<include name="**/*.jpg" />
</fileset>
</copy>
<copy todir="${basedir}/target/contents/reference/htmlsingle">
<fileset dir="${basedir}/src/main/asciidoc">
<include name="images/*.css" />
<include name="images/*.png" />
<include name="images/*.gif" />
<include name="images/*.jpg" />
</fileset>
</copy>
</postProcess>
</configuration>
</execution>
<execution>
<id>html</id>
<goals>
<goal>generate-html</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<htmlCustomization>${basedir}/src/main/docbook/xsl/html-multipage.xsl</htmlCustomization>
<targetDirectory>${basedir}/target/docbook/html</targetDirectory>
<chunkedOutput>true</chunkedOutput>
<postProcess>
<copy todir="${basedir}/target/contents/reference/html">
<fileset dir="${basedir}/target/docbook/html">
<include name="**/*.html" />
</fileset>
</copy>
<copy todir="${basedir}/target/contents/reference/html">
<fileset dir="${basedir}/src/main/docbook">
<include name="**/*.css" />
<include name="**/*.png" />
<include name="**/*.gif" />
<include name="**/*.jpg" />
</fileset>
</copy>
<copy todir="${basedir}/target/contents/reference/html">
<fileset dir="${basedir}/src/main/asciidoc">
<include name="images/*.css" />
<include name="images/*.png" />
<include name="images/*.gif" />
<include name="images/*.jpg" />
</fileset>
</copy>
</postProcess>
</configuration>
</execution>
<execution>
<id>pdf</id>
<goals>
<goal>generate-pdf</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<foCustomization>${basedir}/src/main/docbook/xsl/pdf.xsl</foCustomization>
<targetDirectory>${basedir}/target/docbook/pdf</targetDirectory>
<postProcess>
<copy todir="${basedir}/target/contents/reference">
<fileset dir="${basedir}/target/docbook">
<include name="**/*.pdf" />
</fileset>
</copy>
<move file="${basedir}/target/contents/reference/pdf/index.pdf" tofile="${basedir}/target/contents/reference/pdf/spring-cloud-stream-reference.pdf" />
</postProcess>
</configuration>
</execution>
<execution>
<id>epub</id>
<goals>
<goal>generate-epub3</goal>
</goals>
<phase>generate-resources</phase>
<configuration>
<epubCustomization>${basedir}/src/main/docbook/xsl/epub.xsl</epubCustomization>
<targetDirectory>${basedir}/target/docbook/epub</targetDirectory>
<postProcess>
<copy todir="${basedir}/target/contents/reference/epub">
<fileset dir="${basedir}/target/docbook">
<include name="**/*.epub" />
</fileset>
</copy>
<move file="${basedir}/target/contents/reference/epub/index.epub" tofile="${basedir}/target/contents/reference/epub/spring-cloud-stream-reference.epub" />
</postProcess>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<dependencies>
<dependency>
<groupId>ant-contrib</groupId>
<artifactId>ant-contrib</artifactId>
<version>1.0b3</version>
<exclusions>
<exclusion>
<groupId>ant</groupId>
<artifactId>ant</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.ant</groupId>
<artifactId>ant-nodeps</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>org.tigris.antelope</groupId>
<artifactId>antelopetasks</artifactId>
<version>3.2.10</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>package-and-attach-docs-zip</id>
<phase>package</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<zip destfile="${project.build.directory}/${project.artifactId}-${project.version}.zip">
<zipfileset src="${project.build.directory}/${project.artifactId}-${project.version}-javadoc.jar" prefix="api" />
<fileset dir="${project.build.directory}/contents" />
</zip>
</target>
</configuration>
</execution>
<execution>
<id>setup-maven-properties</id>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<exportAntProperties>true</exportAntProperties>
<target>
<taskdef resource="net/sf/antcontrib/antcontrib.properties" />
<taskdef name="stringutil" classname="ise.antelope.tasks.StringUtilTask" />
<var name="version-type" value="${project.version}" />
<propertyregex property="version-type" override="true" input="${version-type}" regexp=".*\.(.*)" replace="\1" />
<propertyregex property="version-type" override="true" input="${version-type}" regexp="(M)\d+" replace="MILESTONE" />
<propertyregex property="version-type" override="true" input="${version-type}" regexp="(RC)\d+" replace="MILESTONE" />
<propertyregex property="version-type" override="true" input="${version-type}" regexp="BUILD-(.*)" replace="SNAPSHOT" />
<stringutil string="${version-type}" property="spring-boot-repo">
<lowercase />
</stringutil>
<var name="github-tag" value="v${project.version}" />
<propertyregex property="github-tag" override="true" input="${github-tag}" regexp=".*SNAPSHOT" replace="master" />
</target>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<executions>
<execution>
<id>attach-zip</id>
<goals>
<goal>attach-artifact</goal>
</goals>
<configuration>
<artifacts>
<artifact>
<file>${project.build.directory}/${project.artifactId}-${project.version}.zip</file>
<type>zip;zip.type=docs;zip.deployed=false</type>
</artifact>
</artifacts>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>


@@ -1,33 +0,0 @@
[[spring-cloud-stream-binder-kafka-reference]]
= Spring Cloud Stream Kafka Binder Reference Guide
Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Thomas Risberg, Dave Syer, David Turanski, Janne Valkealahti, Benjamin Klein
:doctype: book
:toc:
:toclevels: 4
:source-highlighter: prettify
:numbered:
:icons: font
:hide-uri-scheme:
:spring-cloud-stream-binder-kafka-repo: snapshot
:github-tag: master
:spring-cloud-stream-binder-kafka-docs-version: current
:spring-cloud-stream-binder-kafka-docs: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/{spring-cloud-stream-binder-kafka-docs-version}/reference
:spring-cloud-stream-binder-kafka-docs-current: http://docs.spring.io/spring-cloud-stream-binder-kafka/docs/current-SNAPSHOT/reference/html/
:github-repo: spring-cloud/spring-cloud-stream-binder-kafka
:github-raw: http://raw.github.com/{github-repo}/{github-tag}
:github-code: http://github.com/{github-repo}/tree/{github-tag}
:github-wiki: http://github.com/{github-repo}/wiki
:github-master-code: http://github.com/{github-repo}/tree/master
:sc-ext: java
// ======================================================================================
= Reference Guide
include::overview.adoc[]
= Appendices
[appendix]
include::building.adoc[]
include::contributing.adoc[]
// ======================================================================================


@@ -1,229 +0,0 @@
[partintro]
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Spring Cloud Stream concepts map onto Apache Kafka specific constructs.
--
== Usage
To use the Apache Kafka binder, add it to your Spring Cloud Stream application, using the following Maven coordinates:
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----
Alternatively, you can also use the Spring Cloud Stream Kafka Starter.
[source,xml]
----
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----
== Apache Kafka Binder Overview
The following simplified diagram shows how the Apache Kafka binder operates.
.Kafka Binder
image::kafka-binder.png[width=300,scaledwidth="50%"]
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions.
== Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, refer to the https://github.com/spring-cloud/spring-cloud-stream/blob/master/spring-cloud-stream-docs/src/main/asciidoc/spring-cloud-stream-overview.adoc#configuration-options[core docs].
=== Kafka Binder Properties
spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder will connect.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (e.g., `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.
spring.cloud.stream.kafka.binder.zkNodes::
A list of ZooKeeper nodes to which the Kafka binder can connect.
+
Default: `localhost`.
spring.cloud.stream.kafka.binder.defaultZkPort::
`zkNodes` allows hosts specified with or without port information (e.g., `host1,host2:port2`).
This sets the default port when no port is configured in the node list.
+
Default: `2181`.
spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (both producers and consumers) passed to all clients created by the binder.
Because these properties are used by both producers and consumers, usage should be restricted to common properties, especially security settings.
+
Default: Empty map.
spring.cloud.stream.kafka.binder.headers::
The list of custom headers that will be transported by the binder.
+
Default: empty.
spring.cloud.stream.kafka.binder.offsetUpdateTimeWindow::
The frequency, in milliseconds, with which offsets are saved.
Ignored if `0`.
+
Default: `10000`.
spring.cloud.stream.kafka.binder.offsetUpdateCount::
The frequency, in number of updates, with which consumed offsets are persisted.
Ignored if `0`.
Mutually exclusive with `offsetUpdateTimeWindow`.
+
Default: `0`.
spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
+
Default: `1`.
spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder will configure on topics on which it produces/consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount` * `concurrency` settings of the producer (if either is larger).
+
Default: `1`.
spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
+
Default: `1`.
spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder will create new topics automatically.
If set to `false`, the binder will rely on the topics being already configured.
In the latter case, if the topics do not exist, the binder will fail to start.
Of note, this setting is independent of the `auto.topic.create.enable` setting of the broker and it does not influence it: if the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder will add new partitions if required.
If set to `false`, the binder will rely on the partition count of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder will fail to start.
+
Default: `false`.
spring.cloud.stream.kafka.binder.socketBufferSize::
Size (in bytes) of the socket buffer to be used by the Kafka consumers.
+
Default: `2097152`.
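Putting a few of these binder properties together (broker addresses and partition counts below are placeholders, not recommendations):

[source]
----
spring.cloud.stream.kafka.binder.brokers=broker1:9092,broker2
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092
spring.cloud.stream.kafka.binder.minPartitionCount=4
spring.cloud.stream.kafka.binder.autoAddPartitions=true
----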
=== Kafka Consumer Properties
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
autoCommitOffset::
Whether to autocommit offsets when a message has been processed.
If set to `false`, an `Acknowledgment` header will be available in the message headers for late acknowledgment.
+
Default: `true`.
autoCommitOnError::
Effective only if `autoCommitOffset` is set to `true`.
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only successful messages. This allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
If set to `true`, it will always auto-commit (if auto-commit is enabled).
If not set (default), it effectively has the same value as `enableDlq`, auto-committing erroneous messages if they are sent to a DLQ, and not committing them otherwise.
+
Default: not set.
recoveryInterval::
The interval between connection recovery attempts, in milliseconds.
+
Default: `5000`.
resetOffsets::
Whether to reset offsets on the consumer to the value provided by `startOffset`.
+
Default: `false`.
startOffset::
The starting offset for new groups, or when `resetOffsets` is `true`.
Allowed values: `earliest`, `latest`.
+
Default: null (equivalent to `earliest`).
enableDlq::
When set to `true`, it enables DLQ behavior for the consumer.
Messages that result in errors will be forwarded to a topic named `error.<destination>.<group>`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
+
Default: `false`.
configuration::
Map with a key/value pair containing generic Kafka consumer properties.
+
Default: Empty map.
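For example, a consumer binding on a channel named `input` (the destination and group names are illustrative) that disables auto-commit and enables a DLQ could be configured as follows; failed messages would then be forwarded to `error.orders.order-service`:

[source]
----
spring.cloud.stream.bindings.input.destination=orders
spring.cloud.stream.bindings.input.group=order-service
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.startOffset=earliest
----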
=== Kafka Producer Properties
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
bufferSize::
Upper limit, in bytes, of how much data the Kafka producer will attempt to batch before sending.
+
Default: `16384`.
sync::
Whether the producer is synchronous.
+
Default: `false`.
batchTimeout::
How long the producer will wait before sending in order to allow more messages to accumulate in the same batch.
(Normally the producer does not wait at all, and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.
configuration::
Map with a key/value pair containing generic Kafka producer properties.
+
Default: Empty map.
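As an example, a producer binding on a channel named `output` (channel name and values are illustrative) that sends synchronously and waits briefly to batch messages could look like this; the non-zero `batchTimeout` trades a little latency for throughput:

[source]
----
spring.cloud.stream.bindings.output.destination=orders
spring.cloud.stream.kafka.bindings.output.producer.sync=true
spring.cloud.stream.kafka.bindings.output.producer.batchTimeout=20
spring.cloud.stream.kafka.bindings.output.producer.bufferSize=32768
----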
[NOTE]
====
The Kafka binder will use the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with `minPartitionCount`, the larger of the two values being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value will be used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), then the binder will fail to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions will be added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` and `partitionCount`), the existing partition count will be used.
====
=== Usage examples
In this section, we illustrate the use of the above properties for specific scenarios.
==== Example: security configuration
Apache Kafka 0.9 supports secure connections between clients and brokers.
To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.
For example, for setting `security.protocol` to `SASL_SSL`, set:
[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the http://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.
At the time of this release, the JAAS and (optionally) krb5 file locations must be set for Spring Cloud Stream applications by using system properties.
Here is an example of launching a Spring Cloud Stream application with SASL and Kerberos.
[source]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
   --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
   --spring.cloud.stream.kafka.binder.zkNodes=secure.zookeeper:2181 \
   --spring.cloud.stream.bindings.input.destination=stream.ticktock \
   --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----
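For reference, a JAAS file such as the `kafka_client_jaas.conf` referenced above typically takes the following shape when using a keytab; the keytab path and principal shown here are placeholders, not values prescribed by the binder:

[source]
----
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client@EXAMPLE.COM";
};
----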
[NOTE]
====
Exercise caution when using the `autoCreateTopics` and `autoAddPartitions` options with Kerberos.
Applications often use principals that do not have administrative rights in Kafka and Zookeeper, so relying on Spring Cloud Stream to create or modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
====


@@ -1,35 +0,0 @@
/*
code highlight CSS resembling the Eclipse IDE default color scheme
@author Costin Leau
*/
.hl-keyword {
color: #7F0055;
font-weight: bold;
}
.hl-comment {
color: #3F5F5F;
font-style: italic;
}
.hl-multiline-comment {
color: #3F5FBF;
font-style: italic;
}
.hl-tag {
color: #3F7F7F;
}
.hl-attribute {
color: #7F007F;
}
.hl-value {
color: #2A00FF;
}
.hl-string {
color: #2A00FF;
}


@@ -1,9 +0,0 @@
@IMPORT url("manual.css");
body.firstpage {
background: url("../images/background.png") no-repeat center top;
}
div.part h1 {
border-top: none;
}


@@ -1,6 +0,0 @@
@IMPORT url("manual.css");
body {
background: url("../images/background.png") no-repeat center top;
}


@@ -1,344 +0,0 @@
@IMPORT url("highlight.css");
html {
padding: 0pt;
margin: 0pt;
}
body {
color: #333333;
margin: 15px 30px;
font-family: Helvetica, Arial, Freesans, Clean, Sans-serif;
line-height: 1.6;
-webkit-font-smoothing: antialiased;
}
code {
font-size: 16px;
font-family: Consolas, "Liberation Mono", Courier, monospace;
}
:not(a)>code {
color: #6D180B;
}
:not(pre)>code {
background-color: #F2F2F2;
border: 1px solid #CCCCCC;
border-radius: 4px;
padding: 1px 3px 0;
text-shadow: none;
white-space: nowrap;
}
body>*:first-child {
margin-top: 0 !important;
}
div {
margin: 0pt;
}
hr {
border: 1px solid #CCCCCC;
background: #CCCCCC;
}
h1,h2,h3,h4,h5,h6 {
color: #000000;
cursor: text;
font-weight: bold;
margin: 30px 0 10px;
padding: 0;
}
h1,h2,h3 {
margin: 40px 0 10px;
}
h1 {
margin: 70px 0 30px;
padding-top: 20px;
}
div.part h1 {
border-top: 1px dotted #CCCCCC;
}
h1,h1 code {
font-size: 32px;
}
h2,h2 code {
font-size: 24px;
}
h3,h3 code {
font-size: 20px;
}
h4,h1 code,h5,h5 code,h6,h6 code {
font-size: 18px;
}
div.book,div.chapter,div.appendix,div.part,div.preface {
min-width: 300px;
max-width: 1200px;
margin: 0 auto;
}
p.releaseinfo {
font-weight: bold;
margin-bottom: 40px;
margin-top: 40px;
}
div.authorgroup {
line-height: 1;
}
p.copyright {
line-height: 1;
margin-bottom: -5px;
}
.legalnotice p {
font-style: italic;
font-size: 14px;
line-height: 1;
}
div.titlepage+p,div.titlepage+p {
margin-top: 0;
}
pre {
line-height: 1.0;
color: black;
}
a {
color: #4183C4;
text-decoration: none;
}
p {
margin: 15px 0;
text-align: left;
}
ul,ol {
padding-left: 30px;
}
li p {
margin: 0;
}
div.table {
margin: 1em;
padding: 0.5em;
text-align: center;
}
div.table table,div.informaltable table {
display: table;
width: 100%;
}
div.table td {
padding-left: 7px;
padding-right: 7px;
}
.sidebar {
line-height: 1.4;
padding: 0 20px;
background-color: #F8F8F8;
border: 1px solid #CCCCCC;
border-radius: 3px 3px 3px 3px;
}
.sidebar p.title {
color: #6D180B;
}
pre.programlisting,pre.screen {
font-size: 15px;
padding: 6px 10px;
background-color: #F8F8F8;
border: 1px solid #CCCCCC;
border-radius: 3px 3px 3px 3px;
clear: both;
overflow: auto;
line-height: 1.4;
font-family: Consolas, "Liberation Mono", Courier, monospace;
}
table {
border-collapse: collapse;
border-spacing: 0;
border: 1px solid #DDDDDD !important;
border-radius: 4px !important;
border-collapse: separate !important;
line-height: 1.6;
}
table thead {
background: #F5F5F5;
}
table tr {
border: none;
border-bottom: none;
}
table th {
font-weight: bold;
}
table th,table td {
border: none !important;
padding: 6px 13px;
}
table tr:nth-child(2n) {
background-color: #F8F8F8;
}
td p {
margin: 0 0 15px 0;
}
div.table-contents td p {
margin: 0;
}
div.important *,div.note *,div.tip *,div.warning *,div.navheader *,div.navfooter *,div.calloutlist *
{
border: none !important;
background: none !important;
margin: 0;
}
div.important p,div.note p,div.tip p,div.warning p {
color: #6F6F6F;
line-height: 1.6;
}
div.important code,div.note code,div.tip code,div.warning code {
background-color: #F2F2F2 !important;
border: 1px solid #CCCCCC !important;
border-radius: 4px !important;
padding: 1px 3px 0 !important;
text-shadow: none !important;
white-space: nowrap !important;
}
.note th,.tip th,.warning th {
display: none;
}
.note tr:first-child td,.tip tr:first-child td,.warning tr:first-child td
{
border-right: 1px solid #CCCCCC !important;
padding-top: 10px;
}
div.calloutlist p,div.calloutlist td {
padding: 0;
margin: 0;
}
div.calloutlist>table>tbody>tr>td:first-child {
padding-left: 10px;
width: 30px !important;
}
div.important,div.note,div.tip,div.warning {
margin-left: 0px !important;
margin-right: 20px !important;
margin-top: 20px;
margin-bottom: 20px;
padding-top: 10px;
padding-bottom: 10px;
}
div.toc {
line-height: 1.2;
}
dl,dt {
margin-top: 1px;
margin-bottom: 0;
}
div.toc>dl>dt {
font-size: 32px;
font-weight: bold;
margin: 30px 0 10px 0;
display: block;
}
div.toc>dl>dd>dl>dt {
font-size: 24px;
font-weight: bold;
margin: 20px 0 10px 0;
display: block;
}
div.toc>dl>dd>dl>dd>dl>dt {
font-weight: bold;
font-size: 20px;
margin: 10px 0 0 0;
}
tbody.footnotes * {
border: none !important;
}
div.footnote p {
margin: 0;
line-height: 1;
}
div.footnote p sup {
margin-right: 6px;
vertical-align: middle;
}
div.navheader {
border-bottom: 1px solid #CCCCCC;
}
div.navfooter {
border-top: 1px solid #CCCCCC;
}
.title {
margin-left: -1em;
padding-left: 1em;
}
.title>a {
position: absolute;
visibility: hidden;
display: block;
font-size: 0.85em;
margin-top: 0.05em;
margin-left: -1em;
vertical-align: text-top;
color: black;
}
.title>a:before {
content: "\00A7";
}
.title:hover>a,.title>a:hover,.title:hover>a:hover {
visibility: visible;
}
.title:focus>a,.title>a:focus,.title:focus>a:focus {
outline: 0;
}


@@ -1,45 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl d"
version='1.0'>
<!-- Extensions -->
<xsl:param name="use.extensions">1</xsl:param>
<xsl:param name="tablecolumns.extension">0</xsl:param>
<xsl:param name="callout.extensions">1</xsl:param>
<!-- Graphics -->
<xsl:param name="admon.graphics" select="1"/>
<xsl:param name="admon.graphics.path">images/</xsl:param>
<xsl:param name="admon.graphics.extension">.png</xsl:param>
<!-- Table of Contents -->
<xsl:param name="generate.toc">book toc,title</xsl:param>
<xsl:param name="toc.section.depth">3</xsl:param>
<!-- Hide revhistory -->
<xsl:template match="d:revhistory" mode="titlepage.mode"/>
</xsl:stylesheet>


@@ -1,31 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl d"
version='1.0'>
<xsl:import href="urn:docbkx:stylesheet"/>
<xsl:import href="common.xsl"/>
</xsl:stylesheet>


@@ -1,73 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version='1.0'>
<xsl:import href="urn:docbkx:stylesheet"/>
<xsl:import href="html.xsl"/>
<xsl:param name="html.stylesheet">css/manual-multipage.css</xsl:param>
<xsl:param name="chunk.section.depth">'5'</xsl:param>
<xsl:param name="use.id.as.filename">'1'</xsl:param>
<!-- Replace chunk-element-content from chunk-common to add firstpage class to body -->
<xsl:template name="chunk-element-content">
<xsl:param name="prev"/>
<xsl:param name="next"/>
<xsl:param name="nav.context"/>
<xsl:param name="content">
<xsl:apply-imports/>
</xsl:param>
<xsl:call-template name="user.preroot"/>
<html>
<xsl:call-template name="html.head">
<xsl:with-param name="prev" select="$prev"/>
<xsl:with-param name="next" select="$next"/>
</xsl:call-template>
<body>
<xsl:if test="count($prev) = 0">
<xsl:attribute name="class">firstpage</xsl:attribute>
</xsl:if>
<xsl:call-template name="body.attributes"/>
<xsl:call-template name="user.header.navigation"/>
<xsl:call-template name="header.navigation">
<xsl:with-param name="prev" select="$prev"/>
<xsl:with-param name="next" select="$next"/>
<xsl:with-param name="nav.context" select="$nav.context"/>
</xsl:call-template>
<xsl:call-template name="user.header.content"/>
<xsl:copy-of select="$content"/>
<xsl:call-template name="user.footer.content"/>
<xsl:call-template name="footer.navigation">
<xsl:with-param name="prev" select="$prev"/>
<xsl:with-param name="next" select="$next"/>
<xsl:with-param name="nav.context" select="$nav.context"/>
</xsl:call-template>
<xsl:call-template name="user.footer.navigation"/>
</body>
</html>
<xsl:value-of select="$chunk.append"/>
</xsl:template>
</xsl:stylesheet>


@@ -1,30 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version='1.0'>
<xsl:import href="urn:docbkx:stylesheet"/>
<xsl:import href="html.xsl"/>
<xsl:param name="html.stylesheet">css/manual-singlepage.css</xsl:param>
</xsl:stylesheet>


@@ -1,141 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:d="http://docbook.org/ns/docbook"
exclude-result-prefixes="xslthl"
version='1.0'>
<xsl:import href="urn:docbkx:stylesheet/highlight.xsl"/>
<xsl:import href="common.xsl"/>
<!-- Only use scaling in FO -->
<xsl:param name="ignore.image.scaling">1</xsl:param>
<!-- Use code syntax highlighting -->
<xsl:param name="highlight.source">1</xsl:param>
<!-- Activate Graphics -->
<xsl:param name="callout.graphics" select="1" />
<xsl:param name="callout.defaultcolumn">120</xsl:param>
<xsl:param name="callout.graphics.path">images/callouts/</xsl:param>
<xsl:param name="callout.graphics.extension">.png</xsl:param>
<xsl:param name="table.borders.with.css" select="1"/>
<xsl:param name="html.stylesheet.type">text/css</xsl:param>
<xsl:param name="admonition.title.properties">text-align: left</xsl:param>
<!-- Leave image paths as relative when navigating XInclude -->
<xsl:param name="keep.relative.image.uris" select="1"/>
<!-- Label Chapters and Sections (numbering) -->
<xsl:param name="chapter.autolabel" select="1"/>
<xsl:param name="section.autolabel" select="1"/>
<xsl:param name="section.autolabel.max.depth" select="2"/>
<xsl:param name="section.label.includes.component.label" select="1"/>
<xsl:param name="table.footnote.number.format" select="'1'"/>
<!-- Remove "Chapter" from the Chapter titles... -->
<xsl:param name="local.l10n.xml" select="document('')"/>
<l:i18n xmlns:l="http://docbook.sourceforge.net/xmlns/l10n/1.0">
<l:l10n language="en">
<l:context name="title-numbered">
<l:template name="chapter" text="%n.&#160;%t"/>
<l:template name="section" text="%n&#160;%t"/>
</l:context>
</l:l10n>
</l:i18n>
<!-- Syntax Highlighting -->
<xsl:template match='xslthl:keyword' mode="xslthl">
<span class="hl-keyword"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<xsl:template match='xslthl:comment' mode="xslthl">
<span class="hl-comment"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<xsl:template match='xslthl:oneline-comment' mode="xslthl">
<span class="hl-comment"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<xsl:template match='xslthl:multiline-comment' mode="xslthl">
<span class="hl-multiline-comment"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<xsl:template match='xslthl:tag' mode="xslthl">
<span class="hl-tag"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<xsl:template match='xslthl:attribute' mode="xslthl">
<span class="hl-attribute"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<xsl:template match='xslthl:value' mode="xslthl">
<span class="hl-value"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<xsl:template match='xslthl:string' mode="xslthl">
<span class="hl-string"><xsl:apply-templates mode="xslthl"/></span>
</xsl:template>
<!-- Custom Title Page -->
<xsl:template match="d:author" mode="titlepage.mode">
<xsl:if test="name(preceding-sibling::*[1]) = 'author'">
<xsl:text>, </xsl:text>
</xsl:if>
<span class="{name(.)}">
<xsl:call-template name="person.name"/>
<xsl:apply-templates mode="titlepage.mode" select="./contrib"/>
</span>
</xsl:template>
<xsl:template match="d:authorgroup" mode="titlepage.mode">
<div class="{name(.)}">
<h2>Authors</h2>
<xsl:apply-templates mode="titlepage.mode"/>
</div>
</xsl:template>
<!-- Title Links -->
<xsl:template name="anchor">
<xsl:param name="node" select="."/>
<xsl:param name="conditional" select="1"/>
<xsl:variable name="id">
<xsl:call-template name="object.id">
<xsl:with-param name="object" select="$node"/>
</xsl:call-template>
</xsl:variable>
<xsl:if test="$conditional = 0 or $node/@id or $node/@xml:id">
<xsl:element name="a">
<xsl:attribute name="name">
<xsl:value-of select="$id"/>
</xsl:attribute>
<xsl:attribute name="href">
<xsl:text>#</xsl:text>
<xsl:value-of select="$id"/>
</xsl:attribute>
</xsl:element>
</xsl:if>
</xsl:template>
</xsl:stylesheet>


@@ -1,582 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:d="http://docbook.org/ns/docbook"
xmlns:fo="http://www.w3.org/1999/XSL/Format"
xmlns:xslthl="http://xslthl.sf.net"
xmlns:xlink='http://www.w3.org/1999/xlink'
xmlns:exsl="http://exslt.org/common"
exclude-result-prefixes="exsl xslthl d xlink"
version='1.0'>
<xsl:import href="urn:docbkx:stylesheet"/>
<xsl:import href="urn:docbkx:stylesheet/highlight.xsl"/>
<xsl:import href="common.xsl"/>
<!-- Extensions -->
<xsl:param name="fop1.extensions" select="1"/>
<xsl:param name="paper.type" select="'A4'"/>
<xsl:param name="page.margin.top" select="'1cm'"/>
<xsl:param name="region.before.extent" select="'1cm'"/>
<xsl:param name="body.margin.top" select="'1.5cm'"/>
<xsl:param name="body.margin.bottom" select="'1.5cm'"/>
<xsl:param name="region.after.extent" select="'1cm'"/>
<xsl:param name="page.margin.bottom" select="'1cm'"/>
<xsl:param name="title.margin.left" select="'0cm'"/>
<!-- allow break across pages -->
<xsl:attribute-set name="formal.object.properties">
<xsl:attribute name="keep-together.within-column">auto</xsl:attribute>
</xsl:attribute-set>
<!-- use color links and sensible rendering -->
<xsl:attribute-set name="xref.properties">
<xsl:attribute name="text-decoration">underline</xsl:attribute>
<xsl:attribute name="color">#204060</xsl:attribute>
</xsl:attribute-set>
<xsl:param name="ulink.show" select="0"></xsl:param>
<xsl:param name="ulink.footnotes" select="0"></xsl:param>
<!-- TITLE PAGE -->
<xsl:template name="book.titlepage.recto">
<fo:block>
<fo:table table-layout="fixed" width="175mm">
<fo:table-column column-width="175mm"/>
<fo:table-body>
<fo:table-row>
<fo:table-cell text-align="center">
<fo:block>
<fo:external-graphic src="images/logo.png" width="240px"
height="auto" content-width="scale-to-fit"
content-height="scale-to-fit"
content-type="content-type:image/png" text-align="center"
/>
</fo:block>
<fo:block font-family="Helvetica" font-size="20pt" font-weight="bold" padding="10mm">
<xsl:value-of select="d:info/d:title"/>
</fo:block>
<fo:block font-family="Helvetica" font-size="14pt" padding-before="2mm">
<xsl:value-of select="d:info/d:subtitle"/>
</fo:block>
<fo:block font-family="Helvetica" font-size="14pt" padding="2mm">
<xsl:value-of select="d:info/d:releaseinfo"/>
</fo:block>
</fo:table-cell>
</fo:table-row>
<fo:table-row>
<fo:table-cell text-align="center">
<fo:block font-family="Helvetica" font-size="14pt" padding="5mm">
<xsl:value-of select="d:info/d:pubdate"/>
</fo:block>
</fo:table-cell>
</fo:table-row>
<fo:table-row>
<fo:table-cell text-align="center">
<fo:block font-family="Helvetica" font-size="10pt" padding="10mm">
<xsl:for-each select="d:info/d:authorgroup/d:author">
<xsl:if test="position() > 1">
<xsl:text>, </xsl:text>
</xsl:if>
<xsl:value-of select="."/>
</xsl:for-each>
</fo:block>
<fo:block font-family="Helvetica" font-size="10pt" padding="5mm">
<xsl:value-of select="d:info/d:pubdate"/>
</fo:block>
<fo:block font-family="Helvetica" font-size="10pt" padding="5mm" padding-before="25em">
<xsl:text>Copyright &#xA9; </xsl:text><xsl:value-of select="d:info/d:copyright"/>
</fo:block>
<fo:block font-family="Helvetica" font-size="8pt" padding="1mm">
<xsl:value-of select="d:info/d:legalnotice"/>
</fo:block>
</fo:table-cell>
</fo:table-row>
</fo:table-body>
</fo:table>
</fo:block>
</xsl:template>
<!-- Prevent blank pages in output -->
<xsl:template name="book.titlepage.before.verso">
</xsl:template>
<xsl:template name="book.titlepage.verso">
</xsl:template>
<xsl:template name="book.titlepage.separator">
</xsl:template>
<!-- HEADER -->
<!-- More space in the center header for long text -->
<xsl:attribute-set name="header.content.properties">
<xsl:attribute name="font-family">
<xsl:value-of select="$body.font.family"/>
</xsl:attribute>
<xsl:attribute name="margin-left">-5em</xsl:attribute>
<xsl:attribute name="margin-right">-5em</xsl:attribute>
<xsl:attribute name="font-size">8pt</xsl:attribute>
</xsl:attribute-set>
<xsl:template name="header.content">
<xsl:param name="pageclass" select="''"/>
<xsl:param name="sequence" select="''"/>
<xsl:param name="position" select="''"/>
<xsl:param name="gentext-key" select="''"/>
<xsl:variable name="Version">
<xsl:choose>
<xsl:when test="//d:title">
<xsl:value-of select="//d:title"/><xsl:text> </xsl:text>
</xsl:when>
<xsl:otherwise>
<xsl:text>please define title in your docbook file!</xsl:text>
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:choose>
<xsl:when test="$sequence='blank'">
<xsl:choose>
<xsl:when test="$position='center'">
<xsl:value-of select="$Version"/>
</xsl:when>
<xsl:otherwise>
</xsl:otherwise>
</xsl:choose>
</xsl:when>
<xsl:when test="$pageclass='titlepage'">
</xsl:when>
<xsl:when test="$position='center'">
<xsl:value-of select="$Version"/>
</xsl:when>
<xsl:otherwise>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<!-- FOOTER-->
<xsl:attribute-set name="footer.content.properties">
<xsl:attribute name="font-family">
<xsl:value-of select="$body.font.family"/>
</xsl:attribute>
<xsl:attribute name="font-size">8pt</xsl:attribute>
</xsl:attribute-set>
<xsl:template name="footer.content">
<xsl:param name="pageclass" select="''"/>
<xsl:param name="sequence" select="''"/>
<xsl:param name="position" select="''"/>
<xsl:param name="gentext-key" select="''"/>
<xsl:variable name="Version">
<xsl:choose>
<xsl:when test="//d:releaseinfo">
<xsl:value-of select="//d:releaseinfo"/>
</xsl:when>
<xsl:otherwise>
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:variable name="Title">
<xsl:choose>
<xsl:when test="//d:productname">
<xsl:value-of select="//d:productname"/><xsl:text> </xsl:text>
</xsl:when>
<xsl:otherwise>
<xsl:text>please define title in your docbook file!</xsl:text>
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:choose>
<xsl:when test="$sequence='blank'">
<xsl:choose>
<xsl:when test="$double.sided != 0 and $position = 'left'">
<xsl:value-of select="$Version"/>
</xsl:when>
<xsl:when test="$double.sided = 0 and $position = 'center'">
</xsl:when>
<xsl:otherwise>
<fo:page-number/>
</xsl:otherwise>
</xsl:choose>
</xsl:when>
<xsl:when test="$pageclass='titlepage'">
</xsl:when>
<xsl:when test="$double.sided != 0 and $sequence = 'even' and $position='left'">
<fo:page-number/>
</xsl:when>
<xsl:when test="$double.sided != 0 and $sequence = 'odd' and $position='right'">
<fo:page-number/>
</xsl:when>
<xsl:when test="$double.sided = 0 and $position='right'">
<fo:page-number/>
</xsl:when>
<xsl:when test="$double.sided != 0 and $sequence = 'odd' and $position='left'">
<xsl:value-of select="$Version"/>
</xsl:when>
<xsl:when test="$double.sided != 0 and $sequence = 'even' and $position='right'">
<xsl:value-of select="$Version"/>
</xsl:when>
<xsl:when test="$double.sided = 0 and $position='left'">
<xsl:value-of select="$Version"/>
</xsl:when>
<xsl:when test="$position='center'">
<xsl:value-of select="$Title"/>
</xsl:when>
<xsl:otherwise>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template match="processing-instruction('hard-pagebreak')">
<fo:block break-before='page'/>
</xsl:template>
<!-- PAPER & PAGE SIZE -->
<!-- Paper type, no headers on blank pages, no double sided printing -->
<xsl:param name="double.sided">0</xsl:param>
<xsl:param name="headers.on.blank.pages">0</xsl:param>
<xsl:param name="footers.on.blank.pages">0</xsl:param>
<!-- FONTS & STYLES -->
<xsl:param name="hyphenate">false</xsl:param>
<!-- Default Font size -->
<xsl:param name="body.font.family">Helvetica</xsl:param>
<xsl:param name="body.font.master">10</xsl:param>
<xsl:param name="body.font.small">8</xsl:param>
<xsl:param name="title.font.family">Helvetica</xsl:param>
<!-- Line height in body text -->
<xsl:param name="line-height">1.4</xsl:param>
<!-- Chapter title size -->
<xsl:attribute-set name="chapter.titlepage.recto.style">
<xsl:attribute name="text-align">left</xsl:attribute>
<xsl:attribute name="font-weight">bold</xsl:attribute>
<xsl:attribute name="font-size">
<xsl:value-of select="$body.font.master * 1.8"/>
<xsl:text>pt</xsl:text>
</xsl:attribute>
</xsl:attribute-set>
<!-- Why is the font-size for chapters hardcoded in the XSL FO templates?
Let's remove it, so this sucker can use our attribute-set only... -->
<xsl:template match="d:title" mode="chapter.titlepage.recto.auto.mode">
<fo:block xmlns:fo="http://www.w3.org/1999/XSL/Format"
xsl:use-attribute-sets="chapter.titlepage.recto.style">
<xsl:call-template name="component.title">
<xsl:with-param name="node" select="ancestor-or-self::d:chapter[1]"/>
</xsl:call-template>
</fo:block>
</xsl:template>
<!-- Sections 1, 2 and 3 titles have a small bump factor and padding -->
<xsl:attribute-set name="section.title.level1.properties">
<xsl:attribute name="space-before.optimum">0.6em</xsl:attribute>
<xsl:attribute name="space-before.minimum">0.6em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0.6em</xsl:attribute>
<xsl:attribute name="font-size">
<xsl:value-of select="$body.font.master * 1.5"/>
<xsl:text>pt</xsl:text>
</xsl:attribute>
<xsl:attribute name="space-after.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.maximum">0.1em</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="section.title.level2.properties">
<xsl:attribute name="space-before.optimum">0.4em</xsl:attribute>
<xsl:attribute name="space-before.minimum">0.4em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0.4em</xsl:attribute>
<xsl:attribute name="font-size">
<xsl:value-of select="$body.font.master * 1.25"/>
<xsl:text>pt</xsl:text>
</xsl:attribute>
<xsl:attribute name="space-after.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.maximum">0.1em</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="section.title.level3.properties">
<xsl:attribute name="space-before.optimum">0.4em</xsl:attribute>
<xsl:attribute name="space-before.minimum">0.4em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0.4em</xsl:attribute>
<xsl:attribute name="font-size">
<xsl:value-of select="$body.font.master * 1.0"/>
<xsl:text>pt</xsl:text>
</xsl:attribute>
<xsl:attribute name="space-after.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.maximum">0.1em</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="section.title.level4.properties">
<xsl:attribute name="space-before.optimum">0.3em</xsl:attribute>
<xsl:attribute name="space-before.minimum">0.3em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0.3em</xsl:attribute>
<xsl:attribute name="font-size">
<xsl:value-of select="$body.font.master * 0.9"/>
<xsl:text>pt</xsl:text>
</xsl:attribute>
<xsl:attribute name="space-after.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.maximum">0.1em</xsl:attribute>
</xsl:attribute-set>
<!-- TABLES -->
<!-- Some padding inside tables -->
<xsl:attribute-set name="table.cell.padding">
<xsl:attribute name="padding-left">4pt</xsl:attribute>
<xsl:attribute name="padding-right">4pt</xsl:attribute>
<xsl:attribute name="padding-top">4pt</xsl:attribute>
<xsl:attribute name="padding-bottom">4pt</xsl:attribute>
</xsl:attribute-set>
<!-- Only hairlines as frame and cell borders in tables -->
<xsl:param name="table.frame.border.thickness">0.1pt</xsl:param>
<xsl:param name="table.cell.border.thickness">0.1pt</xsl:param>
<!-- LABELS -->
<!-- Label Chapters and Sections (numbering) -->
<xsl:param name="chapter.autolabel" select="1"/>
<xsl:param name="section.autolabel" select="1"/>
<xsl:param name="section.autolabel.max.depth" select="1"/>
<xsl:param name="section.label.includes.component.label" select="1"/>
<xsl:param name="table.footnote.number.format" select="'1'"/>
<!-- PROGRAMLISTINGS -->
<!-- Verbatim text formatting (programlistings) -->
<xsl:attribute-set name="monospace.verbatim.properties">
<xsl:attribute name="font-size">7pt</xsl:attribute>
<xsl:attribute name="wrap-option">wrap</xsl:attribute>
<xsl:attribute name="keep-together.within-column">1</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="verbatim.properties">
<xsl:attribute name="space-before.minimum">1em</xsl:attribute>
<xsl:attribute name="space-before.optimum">1em</xsl:attribute>
<xsl:attribute name="space-before.maximum">1em</xsl:attribute>
<xsl:attribute name="space-after.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.maximum">0.1em</xsl:attribute>
<xsl:attribute name="border-color">#444444</xsl:attribute>
<xsl:attribute name="border-style">solid</xsl:attribute>
<xsl:attribute name="border-width">0.1pt</xsl:attribute>
<xsl:attribute name="padding-top">0.5em</xsl:attribute>
<xsl:attribute name="padding-left">0.5em</xsl:attribute>
<xsl:attribute name="padding-right">0.5em</xsl:attribute>
<xsl:attribute name="padding-bottom">0.5em</xsl:attribute>
<xsl:attribute name="margin-left">0.5em</xsl:attribute>
<xsl:attribute name="margin-right">0.5em</xsl:attribute>
</xsl:attribute-set>
<!-- Shade (background) programlistings -->
<xsl:param name="shade.verbatim">1</xsl:param>
<xsl:attribute-set name="shade.verbatim.style">
<xsl:attribute name="background-color">#F0F0F0</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="list.block.spacing">
<xsl:attribute name="space-before.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-before.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.maximum">0.1em</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="example.properties">
<xsl:attribute name="space-before.minimum">0.5em</xsl:attribute>
<xsl:attribute name="space-before.optimum">0.5em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0.5em</xsl:attribute>
<xsl:attribute name="space-after.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-after.maximum">0.1em</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="sidebar.properties">
<xsl:attribute name="border-color">#444444</xsl:attribute>
<xsl:attribute name="border-style">solid</xsl:attribute>
<xsl:attribute name="border-width">0.1pt</xsl:attribute>
<xsl:attribute name="background-color">#F0F0F0</xsl:attribute>
</xsl:attribute-set>
<!-- TITLE INFORMATION FOR FIGURES, EXAMPLES ETC. -->
<xsl:attribute-set name="formal.title.properties" use-attribute-sets="normal.para.spacing">
<xsl:attribute name="font-weight">normal</xsl:attribute>
<xsl:attribute name="font-style">italic</xsl:attribute>
<xsl:attribute name="font-size">
<xsl:value-of select="$body.font.master"/>
<xsl:text>pt</xsl:text>
</xsl:attribute>
<xsl:attribute name="hyphenate">false</xsl:attribute>
<xsl:attribute name="space-before.minimum">0.1em</xsl:attribute>
<xsl:attribute name="space-before.optimum">0.1em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0.1em</xsl:attribute>
</xsl:attribute-set>
<!-- CALLOUTS -->
<!-- don't use images for callouts -->
<xsl:param name="callout.graphics">0</xsl:param>
<xsl:param name="callout.unicode">1</xsl:param>
<!-- Place callout marks at this column in annotated areas -->
<xsl:param name="callout.defaultcolumn">90</xsl:param>
<!-- MISC -->
<!-- Placement of titles -->
<xsl:param name="formal.title.placement">
figure after
example after
equation before
table before
procedure before
</xsl:param>
<!-- Format Variable Lists as Blocks (prevents horizontal overflow) -->
<xsl:param name="variablelist.as.blocks">1</xsl:param>
<xsl:param name="body.start.indent">0pt</xsl:param>
<!-- Remove "Chapter" from the Chapter titles... -->
<xsl:param name="local.l10n.xml" select="document('')"/>
<l:i18n xmlns:l="http://docbook.sourceforge.net/xmlns/l10n/1.0">
<l:l10n language="en">
<l:context name="title-numbered">
<l:template name="chapter" text="%n.&#160;%t"/>
<l:template name="section" text="%n&#160;%t"/>
</l:context>
<l:context name="title">
<l:template name="example" text="Example&#160;%n&#160;%t"/>
</l:context>
</l:l10n>
</l:i18n>
<!-- admon -->
<xsl:param name="admon.graphics" select="0"/>
<xsl:attribute-set name="nongraphical.admonition.properties">
<xsl:attribute name="margin-left">0.1em</xsl:attribute>
<xsl:attribute name="margin-right">2em</xsl:attribute>
<xsl:attribute name="border-left-width">.75pt</xsl:attribute>
<xsl:attribute name="border-left-style">solid</xsl:attribute>
<xsl:attribute name="border-left-color">#5c5c4f</xsl:attribute>
<xsl:attribute name="padding-left">0.5em</xsl:attribute>
<xsl:attribute name="space-before.optimum">1.5em</xsl:attribute>
<xsl:attribute name="space-before.minimum">1.5em</xsl:attribute>
<xsl:attribute name="space-before.maximum">1.5em</xsl:attribute>
<xsl:attribute name="space-after.optimum">1.5em</xsl:attribute>
<xsl:attribute name="space-after.minimum">1.5em</xsl:attribute>
<xsl:attribute name="space-after.maximum">1.5em</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="admonition.title.properties">
<xsl:attribute name="font-size">10pt</xsl:attribute>
<xsl:attribute name="font-weight">bold</xsl:attribute>
<xsl:attribute name="hyphenate">false</xsl:attribute>
<xsl:attribute name="keep-with-next.within-column">always</xsl:attribute>
<xsl:attribute name="margin-left">0</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="admonition.properties">
<xsl:attribute name="space-before.optimum">0em</xsl:attribute>
<xsl:attribute name="space-before.minimum">0em</xsl:attribute>
<xsl:attribute name="space-before.maximum">0em</xsl:attribute>
</xsl:attribute-set>
<!-- Asciidoc -->
<xsl:template match="processing-instruction('asciidoc-br')">
<fo:block/>
</xsl:template>
<xsl:template match="processing-instruction('asciidoc-hr')">
<fo:block space-after="1em">
<fo:leader leader-pattern="rule" rule-thickness="0.5pt" rule-style="solid" leader-length.minimum="100%"/>
</fo:block>
</xsl:template>
<xsl:template match="processing-instruction('asciidoc-pagebreak')">
<fo:block break-after='page'/>
</xsl:template>
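These three templates translate Asciidoctor-emitted processing instructions into FO output. For reference, a sketch of the kind of DocBook input they match (element context assumed; the PI names come from the templates above):

```xml
<!-- Hypothetical DocBook fragment as produced by Asciidoctor;
     each processing instruction is matched by one template above. -->
<simpara>First line<?asciidoc-br?>second line</simpara>
<?asciidoc-hr?>
<?asciidoc-pagebreak?>
```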
<!-- SYNTAX HIGHLIGHT -->
<xsl:template match='xslthl:keyword' mode="xslthl">
<fo:inline font-weight="bold" color="#7F0055"><xsl:apply-templates mode="xslthl"/></fo:inline>
</xsl:template>
<xsl:template match='xslthl:string' mode="xslthl">
<fo:inline font-weight="bold" font-style="italic" color="#2A00FF"><xsl:apply-templates mode="xslthl"/></fo:inline>
</xsl:template>
<xsl:template match='xslthl:comment' mode="xslthl">
<fo:inline font-style="italic" color="#3F5FBF"><xsl:apply-templates mode="xslthl"/></fo:inline>
</xsl:template>
<xsl:template match='xslthl:tag' mode="xslthl">
<fo:inline font-weight="bold" color="#3F7F7F"><xsl:apply-templates mode="xslthl"/></fo:inline>
</xsl:template>
<xsl:template match='xslthl:attribute' mode="xslthl">
<fo:inline font-weight="bold" color="#7F007F"><xsl:apply-templates mode="xslthl"/></fo:inline>
</xsl:template>
<xsl:template match='xslthl:value' mode="xslthl">
<fo:inline font-weight="bold" color="#2A00FF"><xsl:apply-templates mode="xslthl"/></fo:inline>
</xsl:template>
</xsl:stylesheet>
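A customization layer like this one only takes effect when layered over the stock DocBook FO stylesheets, imported first so these parameters and attribute-sets take precedence. A minimal entry-point sketch (the `href` values are assumptions and depend on the build toolchain):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical customization entry point: import the stock FO
     stylesheet first, then include the customization so it wins. -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:import href="docbook-xsl/fo/docbook.xsl"/> <!-- path assumed -->
  <xsl:include href="fo-pdf.xsl"/>                <!-- this file; name assumed -->
</xsl:stylesheet>
```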


@@ -1,23 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<xslthl-config>
<highlighter id="java" file="./xslthl/java-hl.xml" />
<highlighter id="groovy" file="./xslthl/java-hl.xml" />
<highlighter id="html" file="./xslthl/html-hl.xml" />
<highlighter id="ini" file="./xslthl/ini-hl.xml" />
<highlighter id="php" file="./xslthl/php-hl.xml" />
<highlighter id="c" file="./xslthl/c-hl.xml" />
<highlighter id="cpp" file="./xslthl/cpp-hl.xml" />
<highlighter id="csharp" file="./xslthl/csharp-hl.xml" />
<highlighter id="python" file="./xslthl/python-hl.xml" />
<highlighter id="ruby" file="./xslthl/ruby-hl.xml" />
<highlighter id="perl" file="./xslthl/perl-hl.xml" />
<highlighter id="javascript" file="./xslthl/javascript-hl.xml" />
<highlighter id="bash" file="./xslthl/bourne-hl.xml" />
<highlighter id="css" file="./xslthl/css-hl.xml" />
<highlighter id="sql" file="./xslthl/sql2003-hl.xml" />
<highlighter id="asciidoc" file="./xslthl/asciidoc-hl.xml" />
<highlighter id="properties" file="./xslthl/properties-hl.xml" />
<highlighter id="json" file="./xslthl/json-hl.xml" />
<highlighter id="yaml" file="./xslthl/yaml-hl.xml" />
<namespace prefix="xslthl" uri="http://xslthl.sf.net" />
</xslthl-config>
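For the `xslthl:*` templates in the stylesheet above to receive highlighted elements, DocBook XSL source highlighting must be enabled and pointed at this config file. With the standard DocBook XSL parameters that usually means (the config URI here is an assumption):

```xml
<!-- Enable xslthl-based source highlighting; adjust the config URI. -->
<xsl:param name="highlight.source" select="1"/>
<xsl:param name="highlight.xslthl.config">file:///path/to/xslthl-config.xml</xsl:param>
```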


@@ -1,41 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for AsciiDoc files
-->
<highlighters>
<highlighter type="multiline-comment">
<start>////</start>
<end>////</end>
</highlighter>
<highlighter type="oneline-comment">
<start>//</start>
<solitary/>
</highlighter>
<highlighter type="regex">
<pattern>^(={1,6} .+)$</pattern>
<style>heading</style>
<flags>MULTILINE</flags>
</highlighter>
<highlighter type="regex">
<pattern>^(\.[^\.\s].+)$</pattern>
<style>title</style>
<flags>MULTILINE</flags>
</highlighter>
<highlighter type="regex">
<pattern>^(:!?\w.*?:)</pattern>
<style>attribute</style>
<flags>MULTILINE</flags>
</highlighter>
<highlighter type="regex">
<pattern>^(-|\*{1,5}|\d*\.{1,5})(?= .+$)</pattern>
<style>bullet</style>
<flags>MULTILINE</flags>
</highlighter>
<highlighter type="regex">
<pattern>^(\[.+\])$</pattern>
<style>attribute</style>
<flags>MULTILINE</flags>
</highlighter>
</highlighters>


@@ -1,95 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!--
Syntax highlighting definition for SH
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2010 Mathieu Malaterre
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
-->
<highlighters>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="heredoc">
<start>&lt;&lt;</start>
<quote>'</quote>
<quote>"</quote>
<flag>-</flag>
<noWhiteSpace />
<looseTerminator />
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
<spanNewLines />
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<!-- reserved words -->
<keyword>if</keyword>
<keyword>then</keyword>
<keyword>else</keyword>
<keyword>elif</keyword>
<keyword>fi</keyword>
<keyword>case</keyword>
<keyword>esac</keyword>
<keyword>for</keyword>
<keyword>while</keyword>
<keyword>until</keyword>
<keyword>do</keyword>
<keyword>done</keyword>
<!-- built-ins -->
<keyword>exec</keyword>
<keyword>shift</keyword>
<keyword>exit</keyword>
<keyword>times</keyword>
<keyword>break</keyword>
<keyword>export</keyword>
<keyword>trap</keyword>
<keyword>continue</keyword>
<keyword>readonly</keyword>
<keyword>wait</keyword>
<keyword>eval</keyword>
<keyword>return</keyword>
<!-- other commands -->
<keyword>cd</keyword>
<keyword>echo</keyword>
<keyword>hash</keyword>
<keyword>pwd</keyword>
<keyword>read</keyword>
<keyword>set</keyword>
<keyword>test</keyword>
<keyword>type</keyword>
<keyword>ulimit</keyword>
<keyword>umask</keyword>
<keyword>unset</keyword>
</highlighter>
</highlighters>


@@ -1,117 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for C
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="multiline-comment">
<start>/**</start>
<end>*/</end>
<style>doccomment</style>
</highlighter>
<highlighter type="oneline-comment">
<start><![CDATA[/// ]]></start>
<style>doccomment</style>
</highlighter>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="oneline-comment">//</highlighter>
<highlighter type="oneline-comment">
<!-- use the oneline-comment highlighter to detect directives -->
<start>#</start>
<lineBreakEscape>\</lineBreakEscape>
<style>directive</style>
<solitary />
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<suffix>ul</suffix>
<suffix>lu</suffix>
<suffix>u</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
<exponent>e</exponent>
<suffix>ul</suffix>
<suffix>lu</suffix>
<suffix>u</suffix>
<suffix>f</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>auto</keyword>
<keyword>_Bool</keyword>
<keyword>break</keyword>
<keyword>case</keyword>
<keyword>char</keyword>
<keyword>_Complex</keyword>
<keyword>const</keyword>
<keyword>continue</keyword>
<keyword>default</keyword>
<keyword>do</keyword>
<keyword>double</keyword>
<keyword>else</keyword>
<keyword>enum</keyword>
<keyword>extern</keyword>
<keyword>float</keyword>
<keyword>for</keyword>
<keyword>goto</keyword>
<keyword>if</keyword>
<keyword>_Imaginary</keyword>
<keyword>inline</keyword>
<keyword>int</keyword>
<keyword>long</keyword>
<keyword>register</keyword>
<keyword>restrict</keyword>
<keyword>return</keyword>
<keyword>short</keyword>
<keyword>signed</keyword>
<keyword>sizeof</keyword>
<keyword>static</keyword>
<keyword>struct</keyword>
<keyword>switch</keyword>
<keyword>typedef</keyword>
<keyword>union</keyword>
<keyword>unsigned</keyword>
<keyword>void</keyword>
<keyword>volatile</keyword>
<keyword>while</keyword>
</highlighter>
</highlighters>


@@ -1,151 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for C++
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="multiline-comment">
<start>/**</start>
<end>*/</end>
<style>doccomment</style>
</highlighter>
<highlighter type="oneline-comment">
<start><![CDATA[/// ]]></start>
<style>doccomment</style>
</highlighter>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="oneline-comment">//</highlighter>
<highlighter type="oneline-comment">
<!-- use the oneline-comment highlighter to detect directives -->
<start>#</start>
<lineBreakEscape>\</lineBreakEscape>
<style>directive</style>
<solitary/>
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<suffix>ul</suffix>
<suffix>lu</suffix>
<suffix>u</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
<exponent>e</exponent>
<suffix>ul</suffix>
<suffix>lu</suffix>
<suffix>u</suffix>
<suffix>f</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<!-- C keywords -->
<keyword>auto</keyword>
<keyword>_Bool</keyword>
<keyword>break</keyword>
<keyword>case</keyword>
<keyword>char</keyword>
<keyword>_Complex</keyword>
<keyword>const</keyword>
<keyword>continue</keyword>
<keyword>default</keyword>
<keyword>do</keyword>
<keyword>double</keyword>
<keyword>else</keyword>
<keyword>enum</keyword>
<keyword>extern</keyword>
<keyword>float</keyword>
<keyword>for</keyword>
<keyword>goto</keyword>
<keyword>if</keyword>
<keyword>_Imaginary</keyword>
<keyword>inline</keyword>
<keyword>int</keyword>
<keyword>long</keyword>
<keyword>register</keyword>
<keyword>restrict</keyword>
<keyword>return</keyword>
<keyword>short</keyword>
<keyword>signed</keyword>
<keyword>sizeof</keyword>
<keyword>static</keyword>
<keyword>struct</keyword>
<keyword>switch</keyword>
<keyword>typedef</keyword>
<keyword>union</keyword>
<keyword>unsigned</keyword>
<keyword>void</keyword>
<keyword>volatile</keyword>
<keyword>while</keyword>
<!-- C++ keywords -->
<keyword>asm</keyword>
<keyword>dynamic_cast</keyword>
<keyword>namespace</keyword>
<keyword>reinterpret_cast</keyword>
<keyword>try</keyword>
<keyword>bool</keyword>
<keyword>explicit</keyword>
<keyword>new</keyword>
<keyword>static_cast</keyword>
<keyword>typeid</keyword>
<keyword>catch</keyword>
<keyword>false</keyword>
<keyword>operator</keyword>
<keyword>template</keyword>
<keyword>typename</keyword>
<keyword>class</keyword>
<keyword>friend</keyword>
<keyword>private</keyword>
<keyword>this</keyword>
<keyword>using</keyword>
<keyword>const_cast</keyword>
<keyword>inline</keyword>
<keyword>public</keyword>
<keyword>throw</keyword>
<keyword>virtual</keyword>
<keyword>delete</keyword>
<keyword>mutable</keyword>
<keyword>protected</keyword>
<keyword>true</keyword>
<keyword>wchar_t</keyword>
</highlighter>
</highlighters>


@@ -1,194 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for C#
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="multiline-comment">
<start>/**</start>
<end>*/</end>
<style>doccomment</style>
</highlighter>
<highlighter type="oneline-comment">
<start>///</start>
<style>doccomment</style>
</highlighter>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="oneline-comment">//</highlighter>
<highlighter type="annotation">
<!-- annotations are called (custom) "attributes" in .NET -->
<start>[</start>
<end>]</end>
<valueStart>(</valueStart>
<valueEnd>)</valueEnd>
</highlighter>
<highlighter type="oneline-comment">
<!-- C# supports a couple of directives -->
<start>#</start>
<lineBreakEscape>\</lineBreakEscape>
<style>directive</style>
<solitary/>
</highlighter>
<highlighter type="string">
<!-- strings starting with an "@" can span multiple lines -->
<string>@"</string>
<endString>"</endString>
<escape>\</escape>
<spanNewLines />
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<suffix>ul</suffix>
<suffix>lu</suffix>
<suffix>u</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
<exponent>e</exponent>
<suffix>ul</suffix>
<suffix>lu</suffix>
<suffix>u</suffix>
<suffix>f</suffix>
<suffix>d</suffix>
<suffix>m</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>abstract</keyword>
<keyword>as</keyword>
<keyword>base</keyword>
<keyword>bool</keyword>
<keyword>break</keyword>
<keyword>byte</keyword>
<keyword>case</keyword>
<keyword>catch</keyword>
<keyword>char</keyword>
<keyword>checked</keyword>
<keyword>class</keyword>
<keyword>const</keyword>
<keyword>continue</keyword>
<keyword>decimal</keyword>
<keyword>default</keyword>
<keyword>delegate</keyword>
<keyword>do</keyword>
<keyword>double</keyword>
<keyword>else</keyword>
<keyword>enum</keyword>
<keyword>event</keyword>
<keyword>explicit</keyword>
<keyword>extern</keyword>
<keyword>false</keyword>
<keyword>finally</keyword>
<keyword>fixed</keyword>
<keyword>float</keyword>
<keyword>for</keyword>
<keyword>foreach</keyword>
<keyword>goto</keyword>
<keyword>if</keyword>
<keyword>implicit</keyword>
<keyword>in</keyword>
<keyword>int</keyword>
<keyword>interface</keyword>
<keyword>internal</keyword>
<keyword>is</keyword>
<keyword>lock</keyword>
<keyword>long</keyword>
<keyword>namespace</keyword>
<keyword>new</keyword>
<keyword>null</keyword>
<keyword>object</keyword>
<keyword>operator</keyword>
<keyword>out</keyword>
<keyword>override</keyword>
<keyword>params</keyword>
<keyword>private</keyword>
<keyword>protected</keyword>
<keyword>public</keyword>
<keyword>readonly</keyword>
<keyword>ref</keyword>
<keyword>return</keyword>
<keyword>sbyte</keyword>
<keyword>sealed</keyword>
<keyword>short</keyword>
<keyword>sizeof</keyword>
<keyword>stackalloc</keyword>
<keyword>static</keyword>
<keyword>string</keyword>
<keyword>struct</keyword>
<keyword>switch</keyword>
<keyword>this</keyword>
<keyword>throw</keyword>
<keyword>true</keyword>
<keyword>try</keyword>
<keyword>typeof</keyword>
<keyword>uint</keyword>
<keyword>ulong</keyword>
<keyword>unchecked</keyword>
<keyword>unsafe</keyword>
<keyword>ushort</keyword>
<keyword>using</keyword>
<keyword>virtual</keyword>
<keyword>void</keyword>
<keyword>volatile</keyword>
<keyword>while</keyword>
</highlighter>
<highlighter type="keywords">
<!-- special words, not really keywords -->
<keyword>add</keyword>
<keyword>alias</keyword>
<keyword>from</keyword>
<keyword>get</keyword>
<keyword>global</keyword>
<keyword>group</keyword>
<keyword>into</keyword>
<keyword>join</keyword>
<keyword>orderby</keyword>
<keyword>partial</keyword>
<keyword>remove</keyword>
<keyword>select</keyword>
<keyword>set</keyword>
<keyword>value</keyword>
<keyword>where</keyword>
<keyword>yield</keyword>
</highlighter>
</highlighters>


@@ -1,176 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for CSS files
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2011-2012 Martin Hujer, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Martin Hujer <mhujer at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
Reference: http://www.w3.org/TR/CSS21/propidx.html
-->
<highlighters>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
<spanNewLines/>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
<spanNewLines/>
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
</highlighter>
<highlighter type="word">
<word>@charset</word>
<word>@import</word>
<word>@media</word>
<word>@page</word>
<style>directive</style>
</highlighter>
<highlighter type="keywords">
<partChars>-</partChars>
<keyword>azimuth</keyword>
<keyword>background-attachment</keyword>
<keyword>background-color</keyword>
<keyword>background-image</keyword>
<keyword>background-position</keyword>
<keyword>background-repeat</keyword>
<keyword>background</keyword>
<keyword>border-collapse</keyword>
<keyword>border-color</keyword>
<keyword>border-spacing</keyword>
<keyword>border-style</keyword>
<keyword>border-top</keyword>
<keyword>border-right</keyword>
<keyword>border-bottom</keyword>
<keyword>border-left</keyword>
<keyword>border-top-color</keyword>
<keyword>border-right-color</keyword>
<keyword>border-bottom-color</keyword>
<keyword>border-left-color</keyword>
<keyword>border-top-style</keyword>
<keyword>border-right-style</keyword>
<keyword>border-bottom-style</keyword>
<keyword>border-left-style</keyword>
<keyword>border-top-width</keyword>
<keyword>border-right-width</keyword>
<keyword>border-bottom-width</keyword>
<keyword>border-left-width</keyword>
<keyword>border-width</keyword>
<keyword>border</keyword>
<keyword>bottom</keyword>
<keyword>caption-side</keyword>
<keyword>clear</keyword>
<keyword>clip</keyword>
<keyword>color</keyword>
<keyword>content</keyword>
<keyword>counter-increment</keyword>
<keyword>counter-reset</keyword>
<keyword>cue-after</keyword>
<keyword>cue-before</keyword>
<keyword>cue</keyword>
<keyword>cursor</keyword>
<keyword>direction</keyword>
<keyword>display</keyword>
<keyword>elevation</keyword>
<keyword>empty-cells</keyword>
<keyword>float</keyword>
<keyword>font-family</keyword>
<keyword>font-size</keyword>
<keyword>font-style</keyword>
<keyword>font-variant</keyword>
<keyword>font-weight</keyword>
<keyword>font</keyword>
<keyword>height</keyword>
<keyword>left</keyword>
<keyword>letter-spacing</keyword>
<keyword>line-height</keyword>
<keyword>list-style-image</keyword>
<keyword>list-style-position</keyword>
<keyword>list-style-type</keyword>
<keyword>list-style</keyword>
<keyword>margin-right</keyword>
<keyword>margin-left</keyword>
<keyword>margin-top</keyword>
<keyword>margin-bottom</keyword>
<keyword>margin</keyword>
<keyword>max-height</keyword>
<keyword>max-width</keyword>
<keyword>min-height</keyword>
<keyword>min-width</keyword>
<keyword>orphans</keyword>
<keyword>outline-color</keyword>
<keyword>outline-style</keyword>
<keyword>outline-width</keyword>
<keyword>outline</keyword>
<keyword>overflow</keyword>
<keyword>padding-top</keyword>
<keyword>padding-right</keyword>
<keyword>padding-bottom</keyword>
<keyword>padding-left</keyword>
<keyword>padding</keyword>
<keyword>page-break-after</keyword>
<keyword>page-break-before</keyword>
<keyword>page-break-inside</keyword>
<keyword>pause-after</keyword>
<keyword>pause-before</keyword>
<keyword>pause</keyword>
<keyword>pitch-range</keyword>
<keyword>pitch</keyword>
<keyword>play-during</keyword>
<keyword>position</keyword>
<keyword>quotes</keyword>
<keyword>richness</keyword>
<keyword>right</keyword>
<keyword>speak-header</keyword>
<keyword>speak-numeral</keyword>
<keyword>speak-punctuation</keyword>
<keyword>speak</keyword>
<keyword>speech-rate</keyword>
<keyword>stress</keyword>
<keyword>table-layout</keyword>
<keyword>text-align</keyword>
<keyword>text-decoration</keyword>
<keyword>text-indent</keyword>
<keyword>text-transform</keyword>
<keyword>top</keyword>
<keyword>unicode-bidi</keyword>
<keyword>vertical-align</keyword>
<keyword>visibility</keyword>
<keyword>voice-family</keyword>
<keyword>volume</keyword>
<keyword>white-space</keyword>
<keyword>widows</keyword>
<keyword>width</keyword>
<keyword>word-spacing</keyword>
<keyword>z-index</keyword>
</highlighter>
</highlighters>


@@ -1,122 +0,0 @@
<?xml version='1.0'?>
<!--
Bachelor's thesis: Syntax highlighting in XSLT
Michal Molhanec 2005
myxml-hl.xml - configuration of an XML highlighter that highlights
HTML elements and XSL elements separately
This file has been customized for the Asciidoctor project (http://asciidoctor.org).
-->
<highlighters>
<highlighter type="xml">
<elementSet>
<style>htmltag</style>
<element>a</element>
<element>abbr</element>
<element>address</element>
<element>area</element>
<element>article</element>
<element>aside</element>
<element>audio</element>
<element>b</element>
<element>base</element>
<element>bdi</element>
<element>blockquote</element>
<element>body</element>
<element>br</element>
<element>button</element>
<element>caption</element>
<element>canvas</element>
<element>cite</element>
<element>code</element>
<element>command</element>
<element>col</element>
<element>colgroup</element>
<element>dd</element>
<element>del</element>
<element>dialog</element>
<element>div</element>
<element>dl</element>
<element>dt</element>
<element>em</element>
<element>embed</element>
<element>fieldset</element>
<element>figcaption</element>
<element>figure</element>
<element>font</element>
<element>form</element>
<element>footer</element>
<element>h1</element>
<element>h2</element>
<element>h3</element>
<element>h4</element>
<element>h5</element>
<element>h6</element>
<element>head</element>
<element>header</element>
<element>hr</element>
<element>html</element>
<element>i</element>
<element>iframe</element>
<element>img</element>
<element>input</element>
<element>ins</element>
<element>kbd</element>
<element>label</element>
<element>legend</element>
<element>li</element>
<element>link</element>
<element>map</element>
<element>mark</element>
<element>menu</element>
<element>meta</element>
<element>nav</element>
<element>noscript</element>
<element>object</element>
<element>ol</element>
<element>optgroup</element>
<element>option</element>
<element>p</element>
<element>param</element>
<element>pre</element>
<element>q</element>
<element>samp</element>
<element>script</element>
<element>section</element>
<element>select</element>
<element>small</element>
<element>source</element>
<element>span</element>
<element>strong</element>
<element>style</element>
<element>sub</element>
<element>summary</element>
<element>sup</element>
<element>table</element>
<element>tbody</element>
<element>td</element>
<element>textarea</element>
<element>tfoot</element>
<element>th</element>
<element>thead</element>
<element>time</element>
<element>title</element>
<element>tr</element>
<element>track</element>
<element>u</element>
<element>ul</element>
<element>var</element>
<element>video</element>
<element>wbr</element>
<element>xmp</element>
<ignoreCase/>
</elementSet>
<elementPrefix>
<style>namespace</style>
<prefix>xsl:</prefix>
</elementPrefix>
</highlighter>
</highlighters>


@@ -1,45 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for ini files
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="oneline-comment">;</highlighter>
<highlighter type="regex">
<!-- ini sections -->
<pattern>^(\[.+\]\s*)$</pattern>
<style>keyword</style>
<flags>MULTILINE</flags>
</highlighter>
<highlighter type="regex">
<!-- the keys in an ini section -->
<pattern>^(.+)(?==)</pattern>
<style>attribute</style>
<flags>MULTILINE</flags>
</highlighter>
</highlighters>


@@ -1,117 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for Java
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="multiline-comment">
<start>/**</start>
<end>*/</end>
<style>doccomment</style>
</highlighter>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="oneline-comment">//</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="annotation">
<start>@</start>
<valueStart>(</valueStart>
<valueEnd>)</valueEnd>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<exponent>e</exponent>
<suffix>f</suffix>
<suffix>d</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>abstract</keyword>
<keyword>boolean</keyword>
<keyword>break</keyword>
<keyword>byte</keyword>
<keyword>case</keyword>
<keyword>catch</keyword>
<keyword>char</keyword>
<keyword>class</keyword>
<keyword>const</keyword>
<keyword>continue</keyword>
<keyword>default</keyword>
<keyword>do</keyword>
<keyword>double</keyword>
<keyword>else</keyword>
<keyword>extends</keyword>
<keyword>final</keyword>
<keyword>finally</keyword>
<keyword>float</keyword>
<keyword>for</keyword>
<keyword>goto</keyword>
<keyword>if</keyword>
<keyword>implements</keyword>
<keyword>import</keyword>
<keyword>instanceof</keyword>
<keyword>int</keyword>
<keyword>interface</keyword>
<keyword>long</keyword>
<keyword>native</keyword>
<keyword>new</keyword>
<keyword>package</keyword>
<keyword>private</keyword>
<keyword>protected</keyword>
<keyword>public</keyword>
<keyword>return</keyword>
<keyword>short</keyword>
<keyword>static</keyword>
<keyword>strictfp</keyword>
<keyword>super</keyword>
<keyword>switch</keyword>
<keyword>synchronized</keyword>
<keyword>this</keyword>
<keyword>throw</keyword>
<keyword>throws</keyword>
<keyword>transient</keyword>
<keyword>try</keyword>
<keyword>void</keyword>
<keyword>volatile</keyword>
<keyword>while</keyword>
</highlighter>
</highlighters>


@@ -1,147 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for JavaScript
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="oneline-comment">//</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<exponent>e</exponent>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>break</keyword>
<keyword>case</keyword>
<keyword>catch</keyword>
<keyword>continue</keyword>
<keyword>default</keyword>
<keyword>delete</keyword>
<keyword>do</keyword>
<keyword>else</keyword>
<keyword>finally</keyword>
<keyword>for</keyword>
<keyword>function</keyword>
<keyword>if</keyword>
<keyword>in</keyword>
<keyword>instanceof</keyword>
<keyword>new</keyword>
<keyword>return</keyword>
<keyword>switch</keyword>
<keyword>this</keyword>
<keyword>throw</keyword>
<keyword>try</keyword>
<keyword>typeof</keyword>
<keyword>var</keyword>
<keyword>void</keyword>
<keyword>while</keyword>
<keyword>with</keyword>
<!-- future keywords -->
<keyword>abstract</keyword>
<keyword>boolean</keyword>
<keyword>byte</keyword>
<keyword>char</keyword>
<keyword>class</keyword>
<keyword>const</keyword>
<keyword>debugger</keyword>
<keyword>double</keyword>
<keyword>enum</keyword>
<keyword>export</keyword>
<keyword>extends</keyword>
<keyword>final</keyword>
<keyword>float</keyword>
<keyword>goto</keyword>
<keyword>implements</keyword>
<keyword>import</keyword>
<keyword>int</keyword>
<keyword>interface</keyword>
<keyword>long</keyword>
<keyword>native</keyword>
<keyword>package</keyword>
<keyword>private</keyword>
<keyword>protected</keyword>
<keyword>public</keyword>
<keyword>short</keyword>
<keyword>static</keyword>
<keyword>super</keyword>
<keyword>synchronized</keyword>
<keyword>throws</keyword>
<keyword>transient</keyword>
<keyword>volatile</keyword>
</highlighter>
<highlighter type="keywords">
<keyword>prototype</keyword>
<!-- Global Objects -->
<keyword>Array</keyword>
<keyword>Boolean</keyword>
<keyword>Date</keyword>
<keyword>Error</keyword>
<keyword>EvalError</keyword>
<keyword>Function</keyword>
<keyword>Math</keyword>
<keyword>Number</keyword>
<keyword>Object</keyword>
<keyword>RangeError</keyword>
<keyword>ReferenceError</keyword>
<keyword>RegExp</keyword>
<keyword>String</keyword>
<keyword>SyntaxError</keyword>
<keyword>TypeError</keyword>
<keyword>URIError</keyword>
<!-- Global functions -->
<keyword>decodeURI</keyword>
<keyword>decodeURIComponent</keyword>
<keyword>encodeURI</keyword>
<keyword>encodeURIComponent</keyword>
<keyword>eval</keyword>
<keyword>isFinite</keyword>
<keyword>isNaN</keyword>
<keyword>parseFloat</keyword>
<keyword>parseInt</keyword>
<!-- Global properties -->
<keyword>Infinity</keyword>
<keyword>NaN</keyword>
<keyword>undefined</keyword>
</highlighter>
</highlighters>


@@ -1,37 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<highlighters>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="annotation">
<start>@</start>
<valueStart>(</valueStart>
<valueEnd>)</valueEnd>
</highlighter>
<highlighter type="number">
<point>.</point>
<exponent>e</exponent>
<suffix>f</suffix>
<suffix>d</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>true</keyword>
<keyword>false</keyword>
</highlighter>
<highlighter type="word">
<word>{</word>
<word>}</word>
<word>,</word>
<word>[</word>
<word>]</word>
<style>keyword</style>
</highlighter>
</highlighters>


@@ -1,120 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for Perl
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="heredoc">
<start>&lt;&lt;</start>
<quote>'</quote>
<quote>"</quote>
<noWhiteSpace/>
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
<spanNewLines/>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>if</keyword>
<keyword>unless</keyword>
<keyword>while</keyword>
<keyword>until</keyword>
<keyword>foreach</keyword>
<keyword>else</keyword>
<keyword>elsif</keyword>
<keyword>for</keyword>
<keyword>when</keyword>
<keyword>default</keyword>
<keyword>given</keyword>
<!-- Keywords related to the control flow of your perl program -->
<keyword>caller</keyword>
<keyword>continue</keyword>
<keyword>die</keyword>
<keyword>do</keyword>
<keyword>dump</keyword>
<keyword>eval</keyword>
<keyword>exit</keyword>
<keyword>goto</keyword>
<keyword>last</keyword>
<keyword>next</keyword>
<keyword>redo</keyword>
<keyword>return</keyword>
<keyword>sub</keyword>
<keyword>wantarray</keyword>
<!-- Keywords related to scoping -->
<keyword>caller</keyword>
<keyword>import</keyword>
<keyword>local</keyword>
<keyword>my</keyword>
<keyword>package</keyword>
<keyword>use</keyword>
<!-- Keywords related to perl modules -->
<keyword>do</keyword>
<keyword>import</keyword>
<keyword>no</keyword>
<keyword>package</keyword>
<keyword>require</keyword>
<keyword>use</keyword>
<!-- Keywords related to classes and object-orientedness -->
<keyword>bless</keyword>
<keyword>dbmclose</keyword>
<keyword>dbmopen</keyword>
<keyword>package</keyword>
<keyword>ref</keyword>
<keyword>tie</keyword>
<keyword>tied</keyword>
<keyword>untie</keyword>
<keyword>use</keyword>
<!-- operators -->
<keyword>and</keyword>
<keyword>or</keyword>
<keyword>not</keyword>
<keyword>eq</keyword>
<keyword>ne</keyword>
<keyword>lt</keyword>
<keyword>gt</keyword>
<keyword>le</keyword>
<keyword>ge</keyword>
<keyword>cmp</keyword>
</highlighter>
</highlighters>


@@ -1,154 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for PHP
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="multiline-comment">
<start>/**</start>
<end>*/</end>
<style>doccomment</style>
</highlighter>
<highlighter type="oneline-comment">
<start><![CDATA[/// ]]></start>
<style>doccomment</style>
</highlighter>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="oneline-comment">//</highlighter>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
<spanNewLines />
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
<spanNewLines />
</highlighter>
<highlighter type="heredoc">
<start>&lt;&lt;&lt;</start>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<exponent>e</exponent>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>and</keyword>
<keyword>or</keyword>
<keyword>xor</keyword>
<keyword>__FILE__</keyword>
<keyword>exception</keyword>
<keyword>__LINE__</keyword>
<keyword>array</keyword>
<keyword>as</keyword>
<keyword>break</keyword>
<keyword>case</keyword>
<keyword>class</keyword>
<keyword>const</keyword>
<keyword>continue</keyword>
<keyword>declare</keyword>
<keyword>default</keyword>
<keyword>die</keyword>
<keyword>do</keyword>
<keyword>echo</keyword>
<keyword>else</keyword>
<keyword>elseif</keyword>
<keyword>empty</keyword>
<keyword>enddeclare</keyword>
<keyword>endfor</keyword>
<keyword>endforeach</keyword>
<keyword>endif</keyword>
<keyword>endswitch</keyword>
<keyword>endwhile</keyword>
<keyword>eval</keyword>
<keyword>exit</keyword>
<keyword>extends</keyword>
<keyword>for</keyword>
<keyword>foreach</keyword>
<keyword>function</keyword>
<keyword>global</keyword>
<keyword>if</keyword>
<keyword>include</keyword>
<keyword>include_once</keyword>
<keyword>isset</keyword>
<keyword>list</keyword>
<keyword>new</keyword>
<keyword>print</keyword>
<keyword>require</keyword>
<keyword>require_once</keyword>
<keyword>return</keyword>
<keyword>static</keyword>
<keyword>switch</keyword>
<keyword>unset</keyword>
<keyword>use</keyword>
<keyword>var</keyword>
<keyword>while</keyword>
<keyword>__FUNCTION__</keyword>
<keyword>__CLASS__</keyword>
<keyword>__METHOD__</keyword>
<keyword>final</keyword>
<keyword>php_user_filter</keyword>
<keyword>interface</keyword>
<keyword>implements</keyword>
<keyword>extends</keyword>
<keyword>public</keyword>
<keyword>private</keyword>
<keyword>protected</keyword>
<keyword>abstract</keyword>
<keyword>clone</keyword>
<keyword>try</keyword>
<keyword>catch</keyword>
<keyword>throw</keyword>
<keyword>cfunction</keyword>
<keyword>old_function</keyword>
<keyword>true</keyword>
<keyword>false</keyword>
<!-- PHP 5.3 -->
<keyword>namespace</keyword>
<keyword>__NAMESPACE__</keyword>
<keyword>goto</keyword>
<keyword>__DIR__</keyword>
<ignoreCase />
</highlighter>
<highlighter type="word">
<!-- highlight the php open and close tags as directives -->
<word>?&gt;</word>
<word>&lt;?php</word>
<word>&lt;?=</word>
<style>directive</style>
</highlighter>
</highlighters>


@@ -1,38 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for properties files
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="regex">
<pattern>^(.+?)(?==|:)</pattern>
<style>attribute</style>
<flags>MULTILINE</flags>
</highlighter>
</highlighters>


@@ -1,100 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for Python
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="annotation">
<!-- these are actually called decorators -->
<start>@</start>
<valueStart>(</valueStart>
<valueEnd>)</valueEnd>
</highlighter>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="string">
<string>"""</string>
<spanNewLines />
</highlighter>
<highlighter type="string">
<string>'''</string>
<spanNewLines />
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
<exponent>e</exponent>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>and</keyword>
<keyword>del</keyword>
<keyword>from</keyword>
<keyword>not</keyword>
<keyword>while</keyword>
<keyword>as</keyword>
<keyword>elif</keyword>
<keyword>global</keyword>
<keyword>or</keyword>
<keyword>with</keyword>
<keyword>assert</keyword>
<keyword>else</keyword>
<keyword>if</keyword>
<keyword>pass</keyword>
<keyword>yield</keyword>
<keyword>break</keyword>
<keyword>except</keyword>
<keyword>import</keyword>
<keyword>print</keyword>
<keyword>class</keyword>
<keyword>exec</keyword>
<keyword>in</keyword>
<keyword>raise</keyword>
<keyword>continue</keyword>
<keyword>finally</keyword>
<keyword>is</keyword>
<keyword>return</keyword>
<keyword>def</keyword>
<keyword>for</keyword>
<keyword>lambda</keyword>
<keyword>try</keyword>
</highlighter>
</highlighters>


@@ -1,109 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Syntax highlighting definition for Ruby
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2005-2008 Michal Molhanec, Jirka Kosek, Michiel Hendriks
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Michal Molhanec <mol1111 at users.sourceforge.net>
Jirka Kosek <kosek at users.sourceforge.net>
Michiel Hendriks <elmuerte at users.sourceforge.net>
-->
<highlighters>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="heredoc">
<start>&lt;&lt;</start>
<noWhiteSpace/>
</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>%Q{</string>
<endString>}</endString>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>%/</string>
<endString>/</endString>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>%q{</string>
<endString>}</endString>
<escape>\</escape>
</highlighter>
<highlighter type="hexnumber">
<prefix>0x</prefix>
<ignoreCase />
</highlighter>
<highlighter type="number">
<point>.</point>
<exponent>e</exponent>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>alias</keyword>
<keyword>and</keyword>
<keyword>BEGIN</keyword>
<keyword>begin</keyword>
<keyword>break</keyword>
<keyword>case</keyword>
<keyword>class</keyword>
<keyword>def</keyword>
<keyword>defined</keyword>
<keyword>do</keyword>
<keyword>else</keyword>
<keyword>elsif</keyword>
<keyword>END</keyword>
<keyword>end</keyword>
<keyword>ensure</keyword>
<keyword>false</keyword>
<keyword>for</keyword>
<keyword>if</keyword>
<keyword>in</keyword>
<keyword>module</keyword>
<keyword>next</keyword>
<keyword>nil</keyword>
<keyword>not</keyword>
<keyword>or</keyword>
<keyword>redo</keyword>
<keyword>rescue</keyword>
<keyword>retry</keyword>
<keyword>return</keyword>
<keyword>self</keyword>
<keyword>super</keyword>
<keyword>then</keyword>
<keyword>true</keyword>
<keyword>undef</keyword>
<keyword>unless</keyword>
<keyword>until</keyword>
<keyword>when</keyword>
<keyword>while</keyword>
<keyword>yield</keyword>
</highlighter>
</highlighters>


@@ -1,565 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!--
Syntax highlighting definition for SQL:1999
xslthl - XSLT Syntax Highlighting
http://sourceforge.net/projects/xslthl/
Copyright (C) 2012 Michiel Hendriks, Martin Hujer, k42b3
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
-->
<highlighters>
<highlighter type="oneline-comment">--</highlighter>
<highlighter type="multiline-comment">
<start>/*</start>
<end>*/</end>
</highlighter>
<highlighter type="string">
<string>'</string>
<doubleEscapes />
</highlighter>
<highlighter type="string">
<string>U'</string>
<endString>'</endString>
<doubleEscapes />
</highlighter>
<highlighter type="string">
<string>B'</string>
<endString>'</endString>
<doubleEscapes />
</highlighter>
<highlighter type="string">
<string>N'</string>
<endString>'</endString>
<doubleEscapes />
</highlighter>
<highlighter type="string">
<string>X'</string>
<endString>'</endString>
<doubleEscapes />
</highlighter>
<highlighter type="number">
<point>.</point>
<pointStarts />
<exponent>e</exponent>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<ignoreCase />
<!-- reserved -->
<keyword>A</keyword>
<keyword>ABS</keyword>
<keyword>ABSOLUTE</keyword>
<keyword>ACTION</keyword>
<keyword>ADA</keyword>
<keyword>ADMIN</keyword>
<keyword>AFTER</keyword>
<keyword>ALWAYS</keyword>
<keyword>ASC</keyword>
<keyword>ASSERTION</keyword>
<keyword>ASSIGNMENT</keyword>
<keyword>ATTRIBUTE</keyword>
<keyword>ATTRIBUTES</keyword>
<keyword>AVG</keyword>
<keyword>BEFORE</keyword>
<keyword>BERNOULLI</keyword>
<keyword>BREADTH</keyword>
<keyword>C</keyword>
<keyword>CARDINALITY</keyword>
<keyword>CASCADE</keyword>
<keyword>CATALOG_NAME</keyword>
<keyword>CATALOG</keyword>
<keyword>CEIL</keyword>
<keyword>CEILING</keyword>
<keyword>CHAIN</keyword>
<keyword>CHAR_LENGTH</keyword>
<keyword>CHARACTER_LENGTH</keyword>
<keyword>CHARACTER_SET_CATALOG</keyword>
<keyword>CHARACTER_SET_NAME</keyword>
<keyword>CHARACTER_SET_SCHEMA</keyword>
<keyword>CHARACTERISTICS</keyword>
<keyword>CHARACTERS</keyword>
<keyword>CHECKED</keyword>
<keyword>CLASS_ORIGIN</keyword>
<keyword>COALESCE</keyword>
<keyword>COBOL</keyword>
<keyword>CODE_UNITS</keyword>
<keyword>COLLATION_CATALOG</keyword>
<keyword>COLLATION_NAME</keyword>
<keyword>COLLATION_SCHEMA</keyword>
<keyword>COLLATION</keyword>
<keyword>COLLECT</keyword>
<keyword>COLUMN_NAME</keyword>
<keyword>COMMAND_FUNCTION_CODE</keyword>
<keyword>COMMAND_FUNCTION</keyword>
<keyword>COMMITTED</keyword>
<keyword>CONDITION_NUMBER</keyword>
<keyword>CONDITION</keyword>
<keyword>CONNECTION_NAME</keyword>
<keyword>CONSTRAINT_CATALOG</keyword>
<keyword>CONSTRAINT_NAME</keyword>
<keyword>CONSTRAINT_SCHEMA</keyword>
<keyword>CONSTRAINTS</keyword>
<keyword>CONSTRUCTORS</keyword>
<keyword>CONTAINS</keyword>
<keyword>CONVERT</keyword>
<keyword>CORR</keyword>
<keyword>COUNT</keyword>
<keyword>COVAR_POP</keyword>
<keyword>COVAR_SAMP</keyword>
<keyword>CUME_DIST</keyword>
<keyword>CURRENT_COLLATION</keyword>
<keyword>CURSOR_NAME</keyword>
<keyword>DATA</keyword>
<keyword>DATETIME_INTERVAL_CODE</keyword>
<keyword>DATETIME_INTERVAL_PRECISION</keyword>
<keyword>DEFAULTS</keyword>
<keyword>DEFERRABLE</keyword>
<keyword>DEFERRED</keyword>
<keyword>DEFINED</keyword>
<keyword>DEFINER</keyword>
<keyword>DEGREE</keyword>
<keyword>DENSE_RANK</keyword>
<keyword>DEPTH</keyword>
<keyword>DERIVED</keyword>
<keyword>DESC</keyword>
<keyword>DESCRIPTOR</keyword>
<keyword>DIAGNOSTICS</keyword>
<keyword>DISPATCH</keyword>
<keyword>DOMAIN</keyword>
<keyword>DYNAMIC_FUNCTION_CODE</keyword>
<keyword>DYNAMIC_FUNCTION</keyword>
<keyword>EQUALS</keyword>
<keyword>EVERY</keyword>
<keyword>EXCEPTION</keyword>
<keyword>EXCLUDE</keyword>
<keyword>EXCLUDING</keyword>
<keyword>EXP</keyword>
<keyword>EXTRACT</keyword>
<keyword>FINAL</keyword>
<keyword>FIRST</keyword>
<keyword>FLOOR</keyword>
<keyword>FOLLOWING</keyword>
<keyword>FORTRAN</keyword>
<keyword>FOUND</keyword>
<keyword>FUSION</keyword>
<keyword>G</keyword>
<keyword>GENERAL</keyword>
<keyword>GO</keyword>
<keyword>GOTO</keyword>
<keyword>GRANTED</keyword>
<keyword>HIERARCHY</keyword>
<keyword>IMPLEMENTATION</keyword>
<keyword>INCLUDING</keyword>
<keyword>INCREMENT</keyword>
<keyword>INITIALLY</keyword>
<keyword>INSTANCE</keyword>
<keyword>INSTANTIABLE</keyword>
<keyword>INTERSECTION</keyword>
<keyword>INVOKER</keyword>
<keyword>ISOLATION</keyword>
<keyword>K</keyword>
<keyword>KEY_MEMBER</keyword>
<keyword>KEY_TYPE</keyword>
<keyword>KEY</keyword>
<keyword>LAST</keyword>
<keyword>LENGTH</keyword>
<keyword>LEVEL</keyword>
<keyword>LN</keyword>
<keyword>LOCATOR</keyword>
<keyword>LOWER</keyword>
<keyword>M</keyword>
<keyword>MAP</keyword>
<keyword>MATCHED</keyword>
<keyword>MAX</keyword>
<keyword>MAXVALUE</keyword>
<keyword>MESSAGE_LENGTH</keyword>
<keyword>MESSAGE_OCTET_LENGTH</keyword>
<keyword>MESSAGE_TEXT</keyword>
<keyword>MIN</keyword>
<keyword>MINVALUE</keyword>
<keyword>MOD</keyword>
<keyword>MORE</keyword>
<keyword>MUMPS</keyword>
<keyword>NAME</keyword>
<keyword>NAMES</keyword>
<keyword>NESTING</keyword>
<keyword>NEXT</keyword>
<keyword>NORMALIZE</keyword>
<keyword>NORMALIZED</keyword>
<keyword>NULLABLE</keyword>
<keyword>NULLIF</keyword>
<keyword>NULLS</keyword>
<keyword>NUMBER</keyword>
<keyword>OBJECT</keyword>
<keyword>OCTET_LENGTH</keyword>
<keyword>OCTETS</keyword>
<keyword>OPTION</keyword>
<keyword>OPTIONS</keyword>
<keyword>ORDERING</keyword>
<keyword>ORDINALITY</keyword>
<keyword>OTHERS</keyword>
<keyword>OVERLAY</keyword>
<keyword>OVERRIDING</keyword>
<keyword>PAD</keyword>
<keyword>PARAMETER_MODE</keyword>
<keyword>PARAMETER_NAME</keyword>
<keyword>PARAMETER_ORDINAL_POSITION</keyword>
<keyword>PARAMETER_SPECIFIC_CATALOG</keyword>
<keyword>PARAMETER_SPECIFIC_NAME</keyword>
<keyword>PARAMETER_SPECIFIC_SCHEMA</keyword>
<keyword>PARTIAL</keyword>
<keyword>PASCAL</keyword>
<keyword>PATH</keyword>
<keyword>PERCENT_RANK</keyword>
<keyword>PERCENTILE_CONT</keyword>
<keyword>PERCENTILE_DISC</keyword>
<keyword>PLACING</keyword>
<keyword>PLI</keyword>
<keyword>POSITION</keyword>
<keyword>POWER</keyword>
<keyword>PRECEDING</keyword>
<keyword>PRESERVE</keyword>
<keyword>PRIOR</keyword>
<keyword>PRIVILEGES</keyword>
<keyword>PUBLIC</keyword>
<keyword>RANK</keyword>
<keyword>READ</keyword>
<keyword>RELATIVE</keyword>
<keyword>REPEATABLE</keyword>
<keyword>RESTART</keyword>
<keyword>RETURNED_CARDINALITY</keyword>
<keyword>RETURNED_LENGTH</keyword>
<keyword>RETURNED_OCTET_LENGTH</keyword>
<keyword>RETURNED_SQLSTATE</keyword>
<keyword>ROLE</keyword>
<keyword>ROUTINE_CATALOG</keyword>
<keyword>ROUTINE_NAME</keyword>
<keyword>ROUTINE_SCHEMA</keyword>
<keyword>ROUTINE</keyword>
<keyword>ROW_COUNT</keyword>
<keyword>ROW_NUMBER</keyword>
<keyword>SCALE</keyword>
<keyword>SCHEMA_NAME</keyword>
<keyword>SCHEMA</keyword>
<keyword>SCOPE_CATALOG</keyword>
<keyword>SCOPE_NAME</keyword>
<keyword>SCOPE_SCHEMA</keyword>
<keyword>SECTION</keyword>
<keyword>SECURITY</keyword>
<keyword>SELF</keyword>
<keyword>SEQUENCE</keyword>
<keyword>SERIALIZABLE</keyword>
<keyword>SERVER_NAME</keyword>
<keyword>SESSION</keyword>
<keyword>SETS</keyword>
<keyword>SIMPLE</keyword>
<keyword>SIZE</keyword>
<keyword>SOURCE</keyword>
<keyword>SPACE</keyword>
<keyword>SPECIFIC_NAME</keyword>
<keyword>SQRT</keyword>
<keyword>STATE</keyword>
<keyword>STATEMENT</keyword>
<keyword>STDDEV_POP</keyword>
<keyword>STDDEV_SAMP</keyword>
<keyword>STRUCTURE</keyword>
<keyword>STYLE</keyword>
<keyword>SUBCLASS_ORIGIN</keyword>
<keyword>SUBSTRING</keyword>
<keyword>SUM</keyword>
<keyword>TABLE_NAME</keyword>
<keyword>TABLESAMPLE</keyword>
<keyword>TEMPORARY</keyword>
<keyword>TIES</keyword>
<keyword>TOP_LEVEL_COUNT</keyword>
<keyword>TRANSACTION_ACTIVE</keyword>
<keyword>TRANSACTION</keyword>
<keyword>TRANSACTIONS_COMMITTED</keyword>
<keyword>TRANSACTIONS_ROLLED_BACK</keyword>
<keyword>TRANSFORM</keyword>
<keyword>TRANSFORMS</keyword>
<keyword>TRANSLATE</keyword>
<keyword>TRIGGER_CATALOG</keyword>
<keyword>TRIGGER_NAME</keyword>
<keyword>TRIGGER_SCHEMA</keyword>
<keyword>TRIM</keyword>
<keyword>TYPE</keyword>
<keyword>UNBOUNDED</keyword>
<keyword>UNCOMMITTED</keyword>
<keyword>UNDER</keyword>
<keyword>UNNAMED</keyword>
<keyword>USAGE</keyword>
<keyword>USER_DEFINED_TYPE_CATALOG</keyword>
<keyword>USER_DEFINED_TYPE_CODE</keyword>
<keyword>USER_DEFINED_TYPE_NAME</keyword>
<keyword>USER_DEFINED_TYPE_SCHEMA</keyword>
<keyword>VIEW</keyword>
<keyword>WORK</keyword>
<keyword>WRITE</keyword>
<keyword>ZONE</keyword>
<!-- reserved -->
<keyword>ADD</keyword>
<keyword>ALL</keyword>
<keyword>ALLOCATE</keyword>
<keyword>ALTER</keyword>
<keyword>AND</keyword>
<keyword>ANY</keyword>
<keyword>ARE</keyword>
<keyword>ARRAY</keyword>
<keyword>AS</keyword>
<keyword>ASENSITIVE</keyword>
<keyword>ASYMMETRIC</keyword>
<keyword>AT</keyword>
<keyword>ATOMIC</keyword>
<keyword>AUTHORIZATION</keyword>
<keyword>BEGIN</keyword>
<keyword>BETWEEN</keyword>
<keyword>BIGINT</keyword>
<keyword>BINARY</keyword>
<keyword>BLOB</keyword>
<keyword>BOOLEAN</keyword>
<keyword>BOTH</keyword>
<keyword>BY</keyword>
<keyword>CALL</keyword>
<keyword>CALLED</keyword>
<keyword>CASCADED</keyword>
<keyword>CASE</keyword>
<keyword>CAST</keyword>
<keyword>CHAR</keyword>
<keyword>CHARACTER</keyword>
<keyword>CHECK</keyword>
<keyword>CLOB</keyword>
<keyword>CLOSE</keyword>
<keyword>COLLATE</keyword>
<keyword>COLUMN</keyword>
<keyword>COMMIT</keyword>
<keyword>CONNECT</keyword>
<keyword>CONSTRAINT</keyword>
<keyword>CONTINUE</keyword>
<keyword>CORRESPONDING</keyword>
<keyword>CREATE</keyword>
<keyword>CROSS</keyword>
<keyword>CUBE</keyword>
<keyword>CURRENT_DATE</keyword>
<keyword>CURRENT_DEFAULT_TRANSFORM_GROUP</keyword>
<keyword>CURRENT_PATH</keyword>
<keyword>CURRENT_ROLE</keyword>
<keyword>CURRENT_TIME</keyword>
<keyword>CURRENT_TIMESTAMP</keyword>
<keyword>CURRENT_TRANSFORM_GROUP_FOR_TYPE</keyword>
<keyword>CURRENT_USER</keyword>
<keyword>CURRENT</keyword>
<keyword>CURSOR</keyword>
<keyword>CYCLE</keyword>
<keyword>DATE</keyword>
<keyword>DAY</keyword>
<keyword>DEALLOCATE</keyword>
<keyword>DEC</keyword>
<keyword>DECIMAL</keyword>
<keyword>DECLARE</keyword>
<keyword>DEFAULT</keyword>
<keyword>DELETE</keyword>
<keyword>DEREF</keyword>
<keyword>DESCRIBE</keyword>
<keyword>DETERMINISTIC</keyword>
<keyword>DISCONNECT</keyword>
<keyword>DISTINCT</keyword>
<keyword>DOUBLE</keyword>
<keyword>DROP</keyword>
<keyword>DYNAMIC</keyword>
<keyword>EACH</keyword>
<keyword>ELEMENT</keyword>
<keyword>ELSE</keyword>
<keyword>END</keyword>
<keyword>END-EXEC</keyword>
<keyword>ESCAPE</keyword>
<keyword>EXCEPT</keyword>
<keyword>EXEC</keyword>
<keyword>EXECUTE</keyword>
<keyword>EXISTS</keyword>
<keyword>EXTERNAL</keyword>
<keyword>FALSE</keyword>
<keyword>FETCH</keyword>
<keyword>FILTER</keyword>
<keyword>FLOAT</keyword>
<keyword>FOR</keyword>
<keyword>FOREIGN</keyword>
<keyword>FREE</keyword>
<keyword>FROM</keyword>
<keyword>FULL</keyword>
<keyword>FUNCTION</keyword>
<keyword>GET</keyword>
<keyword>GLOBAL</keyword>
<keyword>GRANT</keyword>
<keyword>GROUP</keyword>
<keyword>GROUPING</keyword>
<keyword>HAVING</keyword>
<keyword>HOLD</keyword>
<keyword>HOUR</keyword>
<keyword>IDENTITY</keyword>
<keyword>IMMEDIATE</keyword>
<keyword>IN</keyword>
<keyword>INDICATOR</keyword>
<keyword>INNER</keyword>
<keyword>INOUT</keyword>
<keyword>INPUT</keyword>
<keyword>INSENSITIVE</keyword>
<keyword>INSERT</keyword>
<keyword>INT</keyword>
<keyword>INTEGER</keyword>
<keyword>INTERSECT</keyword>
<keyword>INTERVAL</keyword>
<keyword>INTO</keyword>
<keyword>IS</keyword>
<keyword>JOIN</keyword>
<keyword>LANGUAGE</keyword>
<keyword>LARGE</keyword>
<keyword>LATERAL</keyword>
<keyword>LEADING</keyword>
<keyword>LEFT</keyword>
<keyword>LIKE</keyword>
<keyword>LOCAL</keyword>
<keyword>LOCALTIME</keyword>
<keyword>LOCALTIMESTAMP</keyword>
<keyword>MATCH</keyword>
<keyword>MEMBER</keyword>
<keyword>MERGE</keyword>
<keyword>METHOD</keyword>
<keyword>MINUTE</keyword>
<keyword>MODIFIES</keyword>
<keyword>MODULE</keyword>
<keyword>MONTH</keyword>
<keyword>MULTISET</keyword>
<keyword>NATIONAL</keyword>
<keyword>NATURAL</keyword>
<keyword>NCHAR</keyword>
<keyword>NCLOB</keyword>
<keyword>NEW</keyword>
<keyword>NO</keyword>
<keyword>NONE</keyword>
<keyword>NOT</keyword>
<keyword>NULL</keyword>
<keyword>NUMERIC</keyword>
<keyword>OF</keyword>
<keyword>OLD</keyword>
<keyword>ON</keyword>
<keyword>ONLY</keyword>
<keyword>OPEN</keyword>
<keyword>OR</keyword>
<keyword>ORDER</keyword>
<keyword>OUT</keyword>
<keyword>OUTER</keyword>
<keyword>OUTPUT</keyword>
<keyword>OVER</keyword>
<keyword>OVERLAPS</keyword>
<keyword>PARAMETER</keyword>
<keyword>PARTITION</keyword>
<keyword>PRECISION</keyword>
<keyword>PREPARE</keyword>
<keyword>PRIMARY</keyword>
<keyword>PROCEDURE</keyword>
<keyword>RANGE</keyword>
<keyword>READS</keyword>
<keyword>REAL</keyword>
<keyword>RECURSIVE</keyword>
<keyword>REF</keyword>
<keyword>REFERENCES</keyword>
<keyword>REFERENCING</keyword>
<keyword>REGR_AVGX</keyword>
<keyword>REGR_AVGY</keyword>
<keyword>REGR_COUNT</keyword>
<keyword>REGR_INTERCEPT</keyword>
<keyword>REGR_R2</keyword>
<keyword>REGR_SLOPE</keyword>
<keyword>REGR_SXX</keyword>
<keyword>REGR_SXY</keyword>
<keyword>REGR_SYY</keyword>
<keyword>RELEASE</keyword>
<keyword>RESULT</keyword>
<keyword>RETURN</keyword>
<keyword>RETURNS</keyword>
<keyword>REVOKE</keyword>
<keyword>RIGHT</keyword>
<keyword>ROLLBACK</keyword>
<keyword>ROLLUP</keyword>
<keyword>ROW</keyword>
<keyword>ROWS</keyword>
<keyword>SAVEPOINT</keyword>
<keyword>SCROLL</keyword>
<keyword>SEARCH</keyword>
<keyword>SECOND</keyword>
<keyword>SELECT</keyword>
<keyword>SENSITIVE</keyword>
<keyword>SESSION_USER</keyword>
<keyword>SET</keyword>
<keyword>SIMILAR</keyword>
<keyword>SMALLINT</keyword>
<keyword>SOME</keyword>
<keyword>SPECIFIC</keyword>
<keyword>SPECIFICTYPE</keyword>
<keyword>SQL</keyword>
<keyword>SQLEXCEPTION</keyword>
<keyword>SQLSTATE</keyword>
<keyword>SQLWARNING</keyword>
<keyword>START</keyword>
<keyword>STATIC</keyword>
<keyword>SUBMULTISET</keyword>
<keyword>SYMMETRIC</keyword>
<keyword>SYSTEM_USER</keyword>
<keyword>SYSTEM</keyword>
<keyword>TABLE</keyword>
<keyword>THEN</keyword>
<keyword>TIME</keyword>
<keyword>TIMESTAMP</keyword>
<keyword>TIMEZONE_HOUR</keyword>
<keyword>TIMEZONE_MINUTE</keyword>
<keyword>TO</keyword>
<keyword>TRAILING</keyword>
<keyword>TRANSLATION</keyword>
<keyword>TREAT</keyword>
<keyword>TRIGGER</keyword>
<keyword>TRUE</keyword>
<keyword>UESCAPE</keyword>
<keyword>UNION</keyword>
<keyword>UNIQUE</keyword>
<keyword>UNKNOWN</keyword>
<keyword>UNNEST</keyword>
<keyword>UPDATE</keyword>
<keyword>UPPER</keyword>
<keyword>USER</keyword>
<keyword>USING</keyword>
<keyword>VALUE</keyword>
<keyword>VALUES</keyword>
<keyword>VAR_POP</keyword>
<keyword>VAR_SAMP</keyword>
<keyword>VARCHAR</keyword>
<keyword>VARYING</keyword>
<keyword>WHEN</keyword>
<keyword>WHENEVER</keyword>
<keyword>WHERE</keyword>
<keyword>WIDTH_BUCKET</keyword>
<keyword>WINDOW</keyword>
<keyword>WITH</keyword>
<keyword>WITHIN</keyword>
<keyword>WITHOUT</keyword>
<keyword>YEAR</keyword>
</highlighter>
</highlighters>

@@ -1,47 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<highlighters>
<highlighter type="oneline-comment">#</highlighter>
<highlighter type="string">
<string>"</string>
<escape>\</escape>
</highlighter>
<highlighter type="string">
<string>'</string>
<escape>\</escape>
</highlighter>
<highlighter type="annotation">
<start>@</start>
<valueStart>(</valueStart>
<valueEnd>)</valueEnd>
</highlighter>
<highlighter type="number">
<point>.</point>
<exponent>e</exponent>
<suffix>f</suffix>
<suffix>d</suffix>
<suffix>l</suffix>
<ignoreCase />
</highlighter>
<highlighter type="keywords">
<keyword>true</keyword>
<keyword>false</keyword>
</highlighter>
<highlighter type="word">
<word>{</word>
<word>}</word>
<word>,</word>
<word>[</word>
<word>]</word>
<style>keyword</style>
</highlighter>
<highlighter type="regex">
<pattern>^(---)$</pattern>
<style>comment</style>
<flags>MULTILINE</flags>
</highlighter>
<highlighter type="regex">
<pattern>^(.+?)(?==|:)</pattern>
<style>attribute</style>
<flags>MULTILINE</flags>
</highlighter>
</highlighters>
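The last two regex highlighters above drive the property-file rendering: `^(---)$` flags standalone YAML document separators as comments, and `^(.+?)(?==|:)` styles the key portion of each line as an attribute. A small Python sketch (the sample text is hypothetical) shows what those patterns actually capture:

```python
import re

# Hypothetical snippet of the kind of properties/YAML text
# this highlighter is applied to.
sample = """---
spring.kafka.bootstrap-servers: localhost:9092
management.endpoints.web.exposure.include=health
"""

# ^(---)$ with MULTILINE: each standalone '---' line (styled as a comment).
separators = re.findall(r"^(---)$", sample, flags=re.MULTILINE)

# ^(.+?)(?==|:) with MULTILINE: the shortest prefix of each line before
# the first '=' or ':' (styled as an attribute). Lines without either
# delimiter, such as '---', produce no match.
attributes = re.findall(r"^(.+?)(?==|:)", sample, flags=re.MULTILINE)

print(separators)  # ['---']
print(attributes)  # ['spring.kafka.bootstrap-servers',
                   #  'management.endpoints.web.exposure.include']
```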

@@ -1,599 +0,0 @@
/* Javadoc style sheet */
/*
Overall document style
*/
@import url('resources/fonts/dejavu.css');
body {
background-color:#ffffff;
color:#353833;
font-family:'DejaVu Sans', Arial, Helvetica, sans-serif;
font-size:14px;
margin:0;
}
a:link, a:visited {
text-decoration:none;
color:#4A6782;
}
a:hover, a:focus {
text-decoration:none;
color:#bb7a2a;
}
a:active {
text-decoration:none;
color:#4A6782;
}
a[name] {
color:#353833;
}
a[name]:hover {
text-decoration:none;
color:#353833;
}
pre {
font-family:'DejaVu Sans Mono', monospace;
font-size:14px;
}
h1 {
font-size:20px;
}
h2 {
font-size:18px;
}
h3 {
font-size:16px;
font-style:italic;
}
h4 {
font-size:13px;
}
h5 {
font-size:12px;
}
h6 {
font-size:11px;
}
ul {
list-style-type:disc;
}
code, tt {
font-family:'DejaVu Sans Mono', monospace;
font-size:14px;
padding-top:4px;
margin-top:8px;
line-height:1.4em;
}
dt code {
font-family:'DejaVu Sans Mono', monospace;
font-size:14px;
padding-top:4px;
}
table tr td dt code {
font-family:'DejaVu Sans Mono', monospace;
font-size:14px;
vertical-align:top;
padding-top:4px;
}
sup {
font-size:8px;
}
/*
Document title and Copyright styles
*/
.clear {
clear:both;
height:0px;
overflow:hidden;
}
.aboutLanguage {
float:right;
padding:0px 21px;
font-size:11px;
z-index:200;
margin-top:-9px;
}
.legalCopy {
margin-left:.5em;
}
.bar a, .bar a:link, .bar a:visited, .bar a:active {
color:#FFFFFF;
text-decoration:none;
}
.bar a:hover, .bar a:focus {
color:#bb7a2a;
}
.tab {
background-color:#0066FF;
color:#ffffff;
padding:8px;
width:5em;
font-weight:bold;
}
/*
Navigation bar styles
*/
.bar {
background-color:#4D7A97;
color:#FFFFFF;
padding:.8em .5em .4em .8em;
height:auto;/*height:1.8em;*/
font-size:11px;
margin:0;
}
.topNav {
background-color:#4D7A97;
color:#FFFFFF;
float:left;
padding:0;
width:100%;
clear:right;
height:2.8em;
padding-top:10px;
overflow:hidden;
font-size:12px;
}
.bottomNav {
margin-top:10px;
background-color:#4D7A97;
color:#FFFFFF;
float:left;
padding:0;
width:100%;
clear:right;
height:2.8em;
padding-top:10px;
overflow:hidden;
font-size:12px;
}
.subNav {
background-color:#dee3e9;
float:left;
width:100%;
overflow:hidden;
font-size:12px;
}
.subNav div {
clear:left;
float:left;
padding:0 0 5px 6px;
text-transform:uppercase;
}
ul.navList, ul.subNavList {
float:left;
margin:0 25px 0 0;
padding:0;
}
ul.navList li{
list-style:none;
float:left;
padding: 5px 6px;
text-transform:uppercase;
}
ul.subNavList li{
list-style:none;
float:left;
}
.topNav a:link, .topNav a:active, .topNav a:visited, .bottomNav a:link, .bottomNav a:active, .bottomNav a:visited {
color:#FFFFFF;
text-decoration:none;
text-transform:uppercase;
}
.topNav a:hover, .bottomNav a:hover {
text-decoration:none;
color:#bb7a2a;
text-transform:uppercase;
}
.navBarCell1Rev {
background-color:#F8981D;
color:#253441;
margin: auto 5px;
}
.skipNav {
position:absolute;
top:auto;
left:-9999px;
overflow:hidden;
}
/*
Page header and footer styles
*/
.header, .footer {
clear:both;
margin:0 20px;
padding:5px 0 0 0;
}
.indexHeader {
margin:10px;
position:relative;
}
.indexHeader span{
margin-right:15px;
}
.indexHeader h1 {
font-size:13px;
}
.title {
color:#2c4557;
margin:10px 0;
}
.subTitle {
margin:5px 0 0 0;
}
.header ul {
margin:0 0 15px 0;
padding:0;
}
.footer ul {
margin:20px 0 5px 0;
}
.header ul li, .footer ul li {
list-style:none;
font-size:13px;
}
/*
Heading styles
*/
div.details ul.blockList ul.blockList ul.blockList li.blockList h4, div.details ul.blockList ul.blockList ul.blockListLast li.blockList h4 {
background-color:#dee3e9;
border:1px solid #d0d9e0;
margin:0 0 6px -8px;
padding:7px 5px;
}
ul.blockList ul.blockList ul.blockList li.blockList h3 {
background-color:#dee3e9;
border:1px solid #d0d9e0;
margin:0 0 6px -8px;
padding:7px 5px;
}
ul.blockList ul.blockList li.blockList h3 {
padding:0;
margin:15px 0;
}
ul.blockList li.blockList h2 {
padding:0px 0 20px 0;
}
/*
Page layout container styles
*/
.contentContainer, .sourceContainer, .classUseContainer, .serializedFormContainer, .constantValuesContainer {
clear:both;
padding:10px 20px;
position:relative;
}
.indexContainer {
margin:10px;
position:relative;
font-size:12px;
}
.indexContainer h2 {
font-size:13px;
padding:0 0 3px 0;
}
.indexContainer ul {
margin:0;
padding:0;
}
.indexContainer ul li {
list-style:none;
padding-top:2px;
}
.contentContainer .description dl dt, .contentContainer .details dl dt, .serializedFormContainer dl dt {
font-size:12px;
font-weight:bold;
margin:10px 0 0 0;
color:#4E4E4E;
}
.contentContainer .description dl dd, .contentContainer .details dl dd, .serializedFormContainer dl dd {
margin:5px 0 10px 0px;
font-size:14px;
font-family:'DejaVu Sans Mono',monospace;
}
.serializedFormContainer dl.nameValue dt {
margin-left:1px;
font-size:1.1em;
display:inline;
font-weight:bold;
}
.serializedFormContainer dl.nameValue dd {
margin:0 0 0 1px;
font-size:1.1em;
display:inline;
}
/*
List styles
*/
ul.horizontal li {
display:inline;
font-size:0.9em;
}
ul.inheritance {
margin:0;
padding:0;
}
ul.inheritance li {
display:inline;
list-style:none;
}
ul.inheritance li ul.inheritance {
margin-left:15px;
padding-left:15px;
padding-top:1px;
}
ul.blockList, ul.blockListLast {
margin:10px 0 10px 0;
padding:0;
}
ul.blockList li.blockList, ul.blockListLast li.blockList {
list-style:none;
margin-bottom:15px;
line-height:1.4;
}
ul.blockList ul.blockList li.blockList, ul.blockList ul.blockListLast li.blockList {
padding:0px 20px 5px 10px;
border:1px solid #ededed;
background-color:#f8f8f8;
}
ul.blockList ul.blockList ul.blockList li.blockList, ul.blockList ul.blockList ul.blockListLast li.blockList {
padding:0 0 5px 8px;
background-color:#ffffff;
border:none;
}
ul.blockList ul.blockList ul.blockList ul.blockList li.blockList {
margin-left:0;
padding-left:0;
padding-bottom:15px;
border:none;
}
ul.blockList ul.blockList ul.blockList ul.blockList li.blockListLast {
list-style:none;
border-bottom:none;
padding-bottom:0;
}
table tr td dl, table tr td dl dt, table tr td dl dd {
margin-top:0;
margin-bottom:1px;
}
/*
Table styles
*/
.overviewSummary, .memberSummary, .typeSummary, .useSummary, .constantsSummary, .deprecatedSummary {
width:100%;
border-left:1px solid #EEE;
border-right:1px solid #EEE;
border-bottom:1px solid #EEE;
}
.overviewSummary, .memberSummary {
padding:0px;
}
.overviewSummary caption, .memberSummary caption, .typeSummary caption,
.useSummary caption, .constantsSummary caption, .deprecatedSummary caption {
position:relative;
text-align:left;
background-repeat:no-repeat;
color:#253441;
font-weight:bold;
clear:none;
overflow:hidden;
padding:0px;
padding-top:10px;
padding-left:1px;
margin:0px;
white-space:pre;
}
.overviewSummary caption a:link, .memberSummary caption a:link, .typeSummary caption a:link,
.useSummary caption a:link, .constantsSummary caption a:link, .deprecatedSummary caption a:link,
.overviewSummary caption a:hover, .memberSummary caption a:hover, .typeSummary caption a:hover,
.useSummary caption a:hover, .constantsSummary caption a:hover, .deprecatedSummary caption a:hover,
.overviewSummary caption a:active, .memberSummary caption a:active, .typeSummary caption a:active,
.useSummary caption a:active, .constantsSummary caption a:active, .deprecatedSummary caption a:active,
.overviewSummary caption a:visited, .memberSummary caption a:visited, .typeSummary caption a:visited,
.useSummary caption a:visited, .constantsSummary caption a:visited, .deprecatedSummary caption a:visited {
color:#FFFFFF;
}
.overviewSummary caption span, .memberSummary caption span, .typeSummary caption span,
.useSummary caption span, .constantsSummary caption span, .deprecatedSummary caption span {
white-space:nowrap;
padding-top:5px;
padding-left:12px;
padding-right:12px;
padding-bottom:7px;
display:inline-block;
float:left;
background-color:#F8981D;
border: none;
height:16px;
}
.memberSummary caption span.activeTableTab span {
white-space:nowrap;
padding-top:5px;
padding-left:12px;
padding-right:12px;
margin-right:3px;
display:inline-block;
float:left;
background-color:#F8981D;
height:16px;
}
.memberSummary caption span.tableTab span {
white-space:nowrap;
padding-top:5px;
padding-left:12px;
padding-right:12px;
margin-right:3px;
display:inline-block;
float:left;
background-color:#4D7A97;
height:16px;
}
.memberSummary caption span.tableTab, .memberSummary caption span.activeTableTab {
padding-top:0px;
padding-left:0px;
padding-right:0px;
background-image:none;
float:none;
display:inline;
}
.overviewSummary .tabEnd, .memberSummary .tabEnd, .typeSummary .tabEnd,
.useSummary .tabEnd, .constantsSummary .tabEnd, .deprecatedSummary .tabEnd {
display:none;
width:5px;
position:relative;
float:left;
background-color:#F8981D;
}
.memberSummary .activeTableTab .tabEnd {
display:none;
width:5px;
margin-right:3px;
position:relative;
float:left;
background-color:#F8981D;
}
.memberSummary .tableTab .tabEnd {
display:none;
width:5px;
margin-right:3px;
position:relative;
background-color:#4D7A97;
float:left;
}
.overviewSummary td, .memberSummary td, .typeSummary td,
.useSummary td, .constantsSummary td, .deprecatedSummary td {
text-align:left;
padding:0px 0px 12px 10px;
width:100%;
}
th.colOne, th.colFirst, th.colLast, .useSummary th, .constantsSummary th,
td.colOne, td.colFirst, td.colLast, .useSummary td, .constantsSummary td{
vertical-align:top;
padding-right:0px;
padding-top:8px;
padding-bottom:3px;
}
th.colFirst, th.colLast, th.colOne, .constantsSummary th {
background:#dee3e9;
text-align:left;
padding:8px 3px 3px 7px;
}
td.colFirst, th.colFirst {
white-space:nowrap;
font-size:13px;
}
td.colLast, th.colLast {
font-size:13px;
}
td.colOne, th.colOne {
font-size:13px;
}
.overviewSummary td.colFirst, .overviewSummary th.colFirst,
.overviewSummary td.colOne, .overviewSummary th.colOne,
.memberSummary td.colFirst, .memberSummary th.colFirst,
.memberSummary td.colOne, .memberSummary th.colOne,
.typeSummary td.colFirst{
width:25%;
vertical-align:top;
}
td.colOne a:link, td.colOne a:active, td.colOne a:visited, td.colOne a:hover, td.colFirst a:link, td.colFirst a:active, td.colFirst a:visited, td.colFirst a:hover, td.colLast a:link, td.colLast a:active, td.colLast a:visited, td.colLast a:hover, .constantValuesContainer td a:link, .constantValuesContainer td a:active, .constantValuesContainer td a:visited, .constantValuesContainer td a:hover {
font-weight:bold;
}
.tableSubHeadingColor {
background-color:#EEEEFF;
}
.altColor {
background-color:#FFFFFF;
}
.rowColor {
background-color:#EEEEEF;
}
/*
Content styles
*/
.description pre {
margin-top:0;
}
.deprecatedContent {
margin:0;
padding:10px 0;
}
.docSummary {
padding:0;
}
ul.blockList ul.blockList ul.blockList li.blockList h3 {
font-style:normal;
}
div.block {
font-size:14px;
font-family:'DejaVu Serif', Georgia, "Times New Roman", Times, serif;
}
td.colLast div {
padding-top:0px;
}
td.colLast a {
padding-bottom:3px;
}
/*
Formatting effect styles
*/
.sourceLineNo {
color:green;
padding:0 30px 0 0;
}
h1.hidden {
visibility:hidden;
overflow:hidden;
font-size:10px;
}
.block {
display:block;
margin:3px 10px 2px 0px;
color:#474747;
}
.deprecatedLabel, .descfrmTypeLabel, .memberNameLabel, .memberNameLink,
.overrideSpecifyLabel, .packageHierarchyLabel, .paramLabel, .returnLabel,
.seeLabel, .simpleTagLabel, .throwsLabel, .typeNameLabel, .typeNameLink {
font-weight:bold;
}
.deprecationComment, .emphasizedPhrase, .interfaceName {
font-style:italic;
}
div.block div.block span.deprecationComment, div.block div.block span.emphasizedPhrase,
div.block div.block span.interfaceName {
font-style:normal;
}
div.contentContainer ul.blockList li.blockList h2{
padding-bottom:0px;
}
/*
Spring
*/
pre.code {
background-color: #F8F8F8;
border: 1px solid #CCCCCC;
border-radius: 3px 3px 3px 3px;
overflow: auto;
padding: 10px;
margin: 4px 20px 2px 0px;
}
pre.code code, pre.code code * {
font-size: 1em;
padding: 0 !important;
margin: 0 !important;
}

@@ -1,28 +0,0 @@
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:mvn="http://maven.apache.org/POM/4.0.0"
version="1.0">
<xsl:output method="text" encoding="UTF-8" indent="no"/>
<xsl:template match="/">
<xsl:text>|===&#xa;</xsl:text>
<xsl:text>| Group ID | Artifact ID | Version&#xa;</xsl:text>
<xsl:for-each select="//mvn:dependency">
<xsl:sort select="mvn:groupId"/>
<xsl:sort select="mvn:artifactId"/>
<xsl:text>&#xa;</xsl:text>
<xsl:text>| `</xsl:text>
<xsl:copy-of select="mvn:groupId"/>
<xsl:text>`&#xa;</xsl:text>
<xsl:text>| `</xsl:text>
<xsl:copy-of select="mvn:artifactId"/>
<xsl:text>`&#xa;</xsl:text>
<xsl:text>| </xsl:text>
<xsl:copy-of select="mvn:version"/>
<xsl:text>&#xa;</xsl:text>
</xsl:for-each>
<xsl:text>|===</xsl:text>
</xsl:template>
</xsl:stylesheet>
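This stylesheet flattens every `<dependency>` in a Maven POM into an AsciiDoc table delimited by `|===`, sorted by group ID and then artifact ID. As a sanity check, a rough Python equivalent (operating on a hypothetical two-dependency POM fragment) reproduces the output shape:

```python
import xml.etree.ElementTree as ET

NS = "{http://maven.apache.org/POM/4.0.0}"

# Hypothetical minimal POM fragment standing in for the real input.
pom = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <dependencies>
    <dependency>
      <groupId>org.springframework.kafka</groupId>
      <artifactId>spring-kafka</artifactId>
      <version>2.4.4</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-streams</artifactId>
      <version>2.4.0</version>
    </dependency>
  </dependencies>
</project>"""

root = ET.fromstring(pom)
# Mirror the two <xsl:sort> elements: group ID first, then artifact ID.
deps = sorted(
    root.iter(NS + "dependency"),
    key=lambda d: (d.findtext(NS + "groupId"), d.findtext(NS + "artifactId")),
)

lines = ["|===", "| Group ID | Artifact ID | Version"]
for dep in deps:
    lines.append("")
    lines.append("| `%s`" % dep.findtext(NS + "groupId"))
    lines.append("| `%s`" % dep.findtext(NS + "artifactId"))
    lines.append("| %s" % dep.findtext(NS + "version"))
lines.append("|===")

print("\n".join(lines))
```

The real build runs the XSLT directly; the sketch is only to make the expected `|===` table concrete.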

@@ -0,0 +1,117 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
<packaging>jar</packaging>
<name>spring-cloud-stream-binder-kafka-streams</name>
<description>Kafka Streams Binder Implementation</description>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>3.1.0.M1</version>
</parent>
<properties>
<avro.version>1.8.2</avro.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-autoconfigure</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-autoconfigure-processor</artifactId>
<optional>true</optional>
</dependency>
<!-- Following dependencies are needed to support Kafka 1.1.0 client -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<!-- Following dependencies are only provided for testing and won't be packaged with the binder apps -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-schema-registry-client</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.avro</groupId>
<artifactId>avro</artifactId>
<version>${avro.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.avro</groupId>
<artifactId>avro-maven-plugin</artifactId>
<version>${avro.version}</version>
<executions>
<execution>
<phase>generate-test-sources</phase>
<goals>
<goal>schema</goal>
</goals>
<configuration>
<outputDirectory>${project.basedir}/target/generated-test-sources</outputDirectory>
<testOutputDirectory>${project.basedir}/target/generated-test-sources</testOutputDirectory>
<testSourceDirectory>${project.basedir}/src/test/resources/avro</testSourceDirectory>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

@@ -0,0 +1,472 @@
/*
* Copyright 2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.Arrays;
import java.util.Map;
import java.util.regex.Pattern;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;

/**
* @author Soby Chacko
* @since 3.0.0
*/
public abstract class AbstractKafkaStreamsBinderProcessor implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(AbstractKafkaStreamsBinderProcessor.class);
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final CleanupConfig cleanupConfig;
private final KeyValueSerdeResolver keyValueSerdeResolver;
protected ConfigurableApplicationContext applicationContext;
public AbstractKafkaStreamsBinderProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver, CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.cleanupConfig = cleanupConfig;
}

@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
protected Topology.AutoOffsetReset getAutoOffsetReset(String inboundName, KafkaStreamsConsumerProperties extendedConsumerProperties) {
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties
.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
AbstractKafkaStreamsBinderProcessor.LOG.warn("Detected resetOffsets configured on binding "
+ inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
return autoOffsetReset;
}
@SuppressWarnings("unchecked")
protected void handleKTableGlobalKTableInputs(Object[] arguments, int index, String input, Class<?> parameterType, Object targetBean,
StreamsBuilderFactoryBean streamsBuilderFactoryBean, StreamsBuilder streamsBuilder,
KafkaStreamsConsumerProperties extendedConsumerProperties,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
KTable<?, ?> table = getKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper =
(KTableBoundElementFactory.KTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
GlobalKTable<?, ?> table = getGlobalKTable(extendedConsumerProperties, streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with the real object (GlobalKTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[index] = table;
}
}
@SuppressWarnings({ "unchecked" })
protected StreamsBuilderFactoryBean buildStreamsBuilderAndRetrieveConfig(String beanNamePostPrefix,
ApplicationContext applicationContext, String inboundName,
KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
StreamsBuilderFactoryBeanCustomizer customizer,
ConfigurableEnvironment environment, BindingProperties bindingProperties) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
if (kafkaStreamsBinderConfigurationProperties != null) {
final Map<String, KafkaStreamsBinderConfigurationProperties.Functions> functionConfigMap = kafkaStreamsBinderConfigurationProperties.getFunctions();
if (!CollectionUtils.isEmpty(functionConfigMap)) {
final KafkaStreamsBinderConfigurationProperties.Functions functionConfig = functionConfigMap.get(beanNamePostPrefix);
// Guard: the map may not contain an entry for this function.
if (functionConfig != null) {
final Map<String, String> functionSpecificConfig = functionConfig.getConfiguration();
if (!CollectionUtils.isEmpty(functionSpecificConfig)) {
streamConfigGlobalProperties.putAll(functionSpecificConfig);
}
String applicationId = functionConfig.getApplicationId();
if (!StringUtils.isEmpty(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
}
}
}
final MutablePropertySources propertySources = environment.getPropertySources();
if (!StringUtils.isEmpty(bindingProperties.getBinder())) {
final KafkaStreamsBinderConfigurationProperties multiBinderKafkaStreamsBinderConfigurationProperties =
applicationContext.getBean(bindingProperties.getBinder() + "-KafkaStreamsBinderConfigurationProperties", KafkaStreamsBinderConfigurationProperties.class);
String connectionString = multiBinderKafkaStreamsBinderConfigurationProperties.getKafkaConnectionString();
if (StringUtils.isEmpty(connectionString)) {
//Fall back to the brokers configured in this binder's environment.
connectionString = (String) propertySources.get(bindingProperties.getBinder() + "-kafkaStreamsBinderEnv").getProperty("spring.cloud.stream.kafka.binder.brokers");
}
streamConfigGlobalProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, connectionString);
String binderProvidedApplicationId = multiBinderKafkaStreamsBinderConfigurationProperties.getApplicationId();
if (StringUtils.hasText(binderProvidedApplicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
binderProvidedApplicationId);
}
if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndContinue) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.logAndFail) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (multiBinderKafkaStreamsBinderConfigurationProperties
.getDeserializationExceptionHandler() == DeserializationExceptionHandler.sendToDlq) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
SendToDlqAndContinue sendToDlqAndContinue = applicationContext.getBean(SendToDlqAndContinue.class);
streamConfigGlobalProperties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER, sendToDlqAndContinue);
}
if (!ObjectUtils.isEmpty(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration())) {
streamConfigGlobalProperties.putAll(multiBinderKafkaStreamsBinderConfigurationProperties.getConfiguration());
}
}
//This is used primarily for StreamListener based processors. Although functions can also use it,
//functions should prefer the approach in the if block above, i.e. a property such as
//spring.cloud.stream.kafka.streams.binder.functions.process.configuration.num.threads (assuming that process is the function name).
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties
.putAll(extendedConsumerProperties.getConfiguration());
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
// We provide this for backward compatibility with StreamListener based processors.
// For function based processors see the approach used above
// (i.e. use a property like spring.cloud.stream.kafka.streams.binder.functions.process.applicationId).
if (StringUtils.hasText(bindingLevelApplicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
bindingLevelApplicationId);
}
//If the application id is not set by any mechanism, then generate it.
streamConfigGlobalProperties.computeIfAbsent(StreamsConfig.APPLICATION_ID_CONFIG,
k -> {
String generatedApplicationID = beanNamePostPrefix + "-applicationId";
LOG.info("Binder Generated Kafka Streams Application ID: " + generatedApplicationID);
LOG.info("Use the binder generated application ID only for development and testing. ");
LOG.info("For production deployments, please consider explicitly setting an application ID using a configuration property.");
LOG.info("The generated applicationID is static and will be preserved over application restarts.");
return generatedApplicationID;
});
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
// Concurrency will be mapped to num.stream.threads. Since this is going into a global config,
// we are explicitly assigning concurrency left at default of 1 to num.stream.threads. Otherwise,
// a potential previous value might still be used in the case of multiple processors or a processor
// with multiple input bindings with various concurrency values.
// See this GH issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/844
if (concurrency >= 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
// Override deserialization exception handlers per binding
final DeserializationExceptionHandler deserializationExceptionHandler =
extendedConsumerProperties.getDeserializationExceptionHandler();
if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndFail) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.logAndContinue) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
}
else if (deserializationExceptionHandler == DeserializationExceptionHandler.sendToDlq) {
streamConfigGlobalProperties.put(
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
RecoveringDeserializationExceptionHandler.class);
streamConfigGlobalProperties.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER,
applicationContext.getBean(SendToDlqAndContinue.class));
}
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfigGlobalProperties);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilderFactoryBean.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilderFactoryBean.getClass(),
() -> streamsBuilderFactoryBean)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + beanNamePostPrefix, streamsBuilderBeanDefinition);
extendedConsumerProperties.setApplicationId((String) streamConfigGlobalProperties.get(StreamsConfig.APPLICATION_ID_CONFIG));
//Removing the application ID from global properties so that the next function won't re-use it and cause race conditions.
streamConfigGlobalProperties.remove(StreamsConfig.APPLICATION_ID_CONFIG);
final StreamsBuilderFactoryBean streamsBuilderFactoryBeanFromContext = applicationContext.getBean(
"&stream-builder-" + beanNamePostPrefix, StreamsBuilderFactoryBean.class);
//At this point, the StreamsBuilderFactoryBean is created. If the user calls getObject()
//in the customizer, that grants access to the StreamsBuilder.
if (customizer != null) {
customizer.configure(streamsBuilderFactoryBean);
}
return streamsBuilderFactoryBeanFromContext;
}
protected Serde<?> getValueSerde(String inboundName, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, ResolvableType resolvableType) {
if (bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding()) {
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
return this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), kafkaStreamsConsumerProperties, resolvableType);
}
else {
return Serdes.ByteArray();
}
}
protected KStream<?, ?> getKStream(String inboundName, BindingProperties bindingProperties, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset, boolean firstBuild) {
if (firstBuild) {
addStateStoreBeans(streamsBuilder);
}
KStream<?, ?> stream;
if (this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName).isDestinationIsPattern()) {
final Pattern pattern = Pattern.compile(this.bindingServiceProperties.getBindingDestination(inboundName));
stream = streamsBuilder.stream(pattern);
}
else {
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
this.bindingServiceProperties.getBindingDestination(inboundName));
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
consumed);
}
final boolean nativeDecoding = this.bindingServiceProperties
.getConsumerProperties(inboundName).isUseNativeDecoding();
if (nativeDecoding) {
LOG.info("Native decoding is enabled for " + inboundName
+ ". Inbound deserialization done at the broker.");
}
else {
LOG.info("Native decoding is disabled for " + inboundName
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
return getkStream(bindingProperties, stream, nativeDecoding);
}
private KStream<?, ?> getkStream(BindingProperties bindingProperties, KStream<?, ?> stream, boolean nativeDecoding) {
if (!nativeDecoding) {
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType)) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
}
return stream;
}
@SuppressWarnings("rawtypes")
private void addStateStoreBeans(StreamsBuilder streamsBuilder) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Ignore: registering state store beans is best-effort.
}
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
consumed, getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties) {
final Consumed<K, V> consumed = getConsumed(kafkaStreamsConsumerProperties, k, v, autoOffsetReset);
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
consumed,
getMaterialized(storeName, k, v));
}
private GlobalKTable<?, ?> getGlobalKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.globalTable(bindingDestination,
consumed);
}
private KTable<?, ?> getKTable(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
: streamsBuilder.table(bindingDestination,
consumed);
}
private <K, V> Consumed<K, V> getConsumed(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
Serde<K> keySerde, Serde<V> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
TimestampExtractor timestampExtractor = null;
if (!StringUtils.isEmpty(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName())) {
timestampExtractor = applicationContext.getBean(kafkaStreamsConsumerProperties.getTimestampExtractorBeanName(),
TimestampExtractor.class);
}
final Consumed<K, V> consumed = Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset);
if (timestampExtractor != null) {
consumed.withTimestampExtractor(timestampExtractor);
}
return consumed;
}
}
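The application id resolution in `buildStreamsBuilderAndRetrieveConfig` above follows a precedence order: the binding-level `applicationId` overrides any function-level value already placed in the global config map, and a static, binder-generated id is used only as a last resort. A minimal, framework-free sketch of that precedence (the class and method names here are illustrative, not part of the binder API):

```java
import java.util.HashMap;
import java.util.Map;

public class ApplicationIdResolution {

    // Mirrors the precedence in buildStreamsBuilderAndRetrieveConfig:
    // a binding-level applicationId wins over whatever is already in the
    // global config map; otherwise a static "<prefix>-applicationId" is generated.
    static String resolveApplicationId(Map<String, Object> streamConfig,
            String bindingLevelApplicationId, String beanNamePostPrefix) {
        if (bindingLevelApplicationId != null && !bindingLevelApplicationId.isEmpty()) {
            streamConfig.put("application.id", bindingLevelApplicationId);
        }
        streamConfig.computeIfAbsent("application.id",
                k -> beanNamePostPrefix + "-applicationId");
        return (String) streamConfig.get("application.id");
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        // No explicit id anywhere: the binder generates one from the function name.
        System.out.println(resolveApplicationId(config, null, "process"));
        // A binding-level id overrides whatever is already in the map.
        System.out.println(resolveApplicationId(config, "my-app-id", "process"));
    }
}
```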


@@ -0,0 +1,43 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
/**
* Enumeration for various {@link org.apache.kafka.streams.errors.DeserializationExceptionHandler} types.
*
* @author Soby Chacko
* @since 3.0.0
*/
public enum DeserializationExceptionHandler {
/**
* Deserialization error handler with log and continue.
* See {@link org.apache.kafka.streams.errors.LogAndContinueExceptionHandler}
*/
logAndContinue,
/**
* Deserialization error handler with log and fail.
* See {@link org.apache.kafka.streams.errors.LogAndFailExceptionHandler}
*/
logAndFail,
/**
* Deserialization error handler with DLQ send.
* See {@link org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler}
*/
sendToDlq
}
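Each enum constant above maps to a concrete handler class that the processor writes under `default.deserialization.exception.handler`. A framework-free sketch of that mapping (the local enum copy and the `chooseHandler` name are illustrative only):

```java
public class HandlerMapping {

    // Local copy of the binder's enum, for illustration only.
    enum DeserializationExceptionHandler { logAndContinue, logAndFail, sendToDlq }

    // Returns the fully qualified handler class name the binder would put
    // under "default.deserialization.exception.handler".
    static String chooseHandler(DeserializationExceptionHandler handler) {
        switch (handler) {
            case logAndContinue:
                return "org.apache.kafka.streams.errors.LogAndContinueExceptionHandler";
            case logAndFail:
                return "org.apache.kafka.streams.errors.LogAndFailExceptionHandler";
            case sendToDlq:
                return "org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler";
            default:
                throw new IllegalArgumentException("Unknown handler: " + handler);
        }
    }

    public static void main(String[] args) {
        System.out.println(chooseHandler(DeserializationExceptionHandler.sendToDlq));
    }
}
```

For `sendToDlq`, the binder additionally registers a `SendToDlqAndContinue` bean as the recoverer, as shown in `buildStreamsBuilderAndRetrieveConfig`.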


@@ -0,0 +1,72 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.springframework.boot.context.properties.ConfigurationPropertiesBindHandlerAdvisor;
import org.springframework.boot.context.properties.bind.AbstractBindHandler;
import org.springframework.boot.context.properties.bind.BindContext;
import org.springframework.boot.context.properties.bind.BindHandler;
import org.springframework.boot.context.properties.bind.BindResult;
import org.springframework.boot.context.properties.bind.Bindable;
import org.springframework.boot.context.properties.source.ConfigurationPropertyName;
/**
* {@link ConfigurationPropertiesBindHandlerAdvisor} to detect nativeEncoding/Decoding settings
* provided by the application explicitly.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class EncodingDecodingBindAdviceHandler implements ConfigurationPropertiesBindHandlerAdvisor {
private boolean encodingSettingProvided;
private boolean decodingSettingProvided;
public boolean isDecodingSettingProvided() {
return decodingSettingProvided;
}
public boolean isEncodingSettingProvided() {
return this.encodingSettingProvided;
}
@Override
public BindHandler apply(BindHandler bindHandler) {
BindHandler handler = new AbstractBindHandler(bindHandler) {
@Override
public <T> Bindable<T> onStart(ConfigurationPropertyName name,
Bindable<T> target, BindContext context) {
final String configName = name.toString();
if (configName.contains("use") && configName.contains("native") &&
(configName.contains("encoding") || configName.contains("decoding"))) {
BindResult<T> result = context.getBinder().bind(name, target);
if (result.isBound()) {
if (configName.contains("encoding")) {
EncodingDecodingBindAdviceHandler.this.encodingSettingProvided = true;
}
else {
EncodingDecodingBindAdviceHandler.this.decodingSettingProvided = true;
}
return target.withExistingValue(result.get());
}
}
return bindHandler.onStart(name, target, context);
}
};
return handler;
}
}
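The advice handler above recognizes native encoding/decoding properties purely by substring checks on the bound property name, which makes it tolerant of relaxed-binding variants such as `use-native-decoding` or `useNativeDecoding`. A standalone sketch of that detection logic (the `isNativeCodecProperty` name is illustrative):

```java
public class NativeCodecPropertyCheck {

    // Mirrors the checks in EncodingDecodingBindAdviceHandler.apply():
    // a property counts if its name mentions "use", "native", and either
    // "encoding" or "decoding".
    static boolean isNativeCodecProperty(String configName) {
        return configName.contains("use") && configName.contains("native")
                && (configName.contains("encoding") || configName.contains("decoding"));
    }

    public static void main(String[] args) {
        System.out.println(isNativeCodecProperty(
                "spring.cloud.stream.bindings.input.consumer.use-native-decoding"));
        System.out.println(isNativeCodecProperty(
                "spring.cloud.stream.bindings.input.consumer.concurrency"));
    }
}
```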


@@ -0,0 +1,119 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedProducerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.util.StringUtils;
/**
* An {@link AbstractBinder} implementation for {@link GlobalKTable}.
* <p>
* Provides only consumer binding for the bound {@link GlobalKTable}. Output bindings are
* not allowed on this binder.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBinder extends
// @checkstyle:off
AbstractBinder<GlobalKTable<Object, Object>, ExtendedConsumerProperties<KafkaStreamsConsumerProperties>, ExtendedProducerProperties<KafkaStreamsProducerProperties>>
implements
ExtendedPropertiesBinder<GlobalKTable<Object, Object>, KafkaStreamsConsumerProperties, KafkaStreamsProducerProperties> {
// @checkstyle:on
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
private final KafkaTopicProvisioner kafkaTopicProvisioner;
// @checkstyle:off
private KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties = new KafkaStreamsExtendedBindingProperties();
// @checkstyle:on
public GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner) {
this.binderConfigurationProperties = binderConfigurationProperties;
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
}
@Override
@SuppressWarnings("unchecked")
protected Binding<GlobalKTable<Object, Object>> doBindConsumer(String name,
String group, GlobalKTable<Object, Object> inputTarget,
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
if (!StringUtils.hasText(group)) {
group = properties.getExtension().getApplicationId();
}
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
getApplicationContext(), this.kafkaTopicProvisioner,
this.binderConfigurationProperties, properties);
return new DefaultBinding<>(name, group, inputTarget, null);
}
@Override
protected Binding<GlobalKTable<Object, Object>> doBindProducer(String name,
GlobalKTable<Object, Object> outboundBindTarget,
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
throw new UnsupportedOperationException(
"No producer level binding is allowed for GlobalKTable");
}
@Override
public KafkaStreamsConsumerProperties getExtendedConsumerProperties(
String channelName) {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(channelName);
}
@Override
public KafkaStreamsProducerProperties getExtendedProducerProperties(
String channelName) {
throw new UnsupportedOperationException(
"No producer binding is allowed and therefore no properties");
}
@Override
public String getDefaultsPrefix() {
return this.kafkaStreamsExtendedBindingProperties.getDefaultsPrefix();
}
@Override
public Class<? extends BinderSpecificPropertiesProvider> getExtendedPropertiesEntryClass() {
return this.kafkaStreamsExtendedBindingProperties
.getExtendedPropertiesEntryClass();
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
}
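`GlobalKTableBinder` is consumer-only: `doBindProducer` unconditionally throws. A framework-free sketch of that contract, under the assumption that the consumer-only shape is the interesting part (`ConsumerOnlyBinder` is an illustrative name, not a binder type):

```java
public class ConsumerOnlyBinderDemo {

    // Illustrative consumer-only binder shape: producer binding is rejected,
    // matching GlobalKTableBinder.doBindProducer above.
    interface ConsumerOnlyBinder<T> {
        T bindConsumer(String name, String group, T target);

        default T bindProducer(String name, T target) {
            throw new UnsupportedOperationException(
                    "No producer level binding is allowed for GlobalKTable");
        }
    }

    public static void main(String[] args) {
        ConsumerOnlyBinder<String> binder = (name, group, target) -> target;
        System.out.println(binder.bindConsumer("input", "group-1", "table"));
        try {
            binder.bindProducer("output", "table");
        }
        catch (UnsupportedOperationException ex) {
            System.out.println("rejected: " + ex.getMessage());
        }
    }
}
```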


@@ -0,0 +1,82 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* Configuration for GlobalKTable binder.
*
* @author Soby Chacko
* @since 2.1.0
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class,
MultiBinderPropertiesConfiguration.class})
public class GlobalKTableBinderConfiguration {
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@Bean
public GlobalKTableBinder GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("streamConfigGlobalProperties") Map<String, Object> streamConfigGlobalProperties) {
GlobalKTableBinder globalKTableBinder = new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner);
globalKTableBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return globalKTableBinder;
}
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return beanFactory -> {
// It is safe to call getBean("outerContext") here: this bean is registered
// first and independently of the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
};
}
}


@@ -0,0 +1,129 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
/**
* {@link org.springframework.cloud.stream.binding.BindingTargetFactory} for
* {@link GlobalKTable}
*
* Only input bindings are created; output bindings on {@link GlobalKTable} are not allowed.
*
* @author Soby Chacko
* @since 2.1.0
*/
public class GlobalKTableBoundElementFactory
extends AbstractBindingTargetFactory<GlobalKTable> {
private final BindingServiceProperties bindingServiceProperties;
private final EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler;
GlobalKTableBoundElementFactory(BindingServiceProperties bindingServiceProperties,
EncodingDecodingBindAdviceHandler encodingDecodingBindAdviceHandler) {
super(GlobalKTable.class);
this.bindingServiceProperties = bindingServiceProperties;
this.encodingDecodingBindAdviceHandler = encodingDecodingBindAdviceHandler;
}
@Override
public GlobalKTable createInput(String name) {
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
else {
if (!encodingDecodingBindAdviceHandler.isDecodingSettingProvided()) {
consumerProperties.setUseNativeDecoding(true);
}
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
// @checkstyle:off
GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler wrapper = new GlobalKTableBoundElementFactory.GlobalKTableWrapperHandler();
// @checkstyle:on
ProxyFactory proxyFactory = new ProxyFactory(
GlobalKTableBoundElementFactory.GlobalKTableWrapper.class,
GlobalKTable.class);
proxyFactory.addAdvice(wrapper);
return (GlobalKTable) proxyFactory.getProxy();
}
@Override
public GlobalKTable createOutput(String name) {
throw new UnsupportedOperationException(
"Outbound operations are not allowed on target type GlobalKTable");
}
/**
* Wrapper for GlobalKTable proxy.
*/
public interface GlobalKTableWrapper {
void wrap(GlobalKTable<Object, Object> delegate);
}
private static class GlobalKTableWrapperHandler implements
GlobalKTableBoundElementFactory.GlobalKTableWrapper, MethodInterceptor {
private GlobalKTable<Object, Object> delegate;
public void wrap(GlobalKTable<Object, Object> delegate) {
Assert.notNull(delegate, "delegate cannot be null");
if (this.delegate == null) {
this.delegate = delegate;
}
}
@Override
public Object invoke(MethodInvocation methodInvocation) throws Throwable {
if (methodInvocation.getMethod().getDeclaringClass()
.equals(GlobalKTable.class)) {
Assert.notNull(this.delegate,
"Trying to invoke " + methodInvocation.getMethod()
+ " but no delegate has been set.");
return methodInvocation.getMethod().invoke(this.delegate,
methodInvocation.getArguments());
}
else if (methodInvocation.getMethod().getDeclaringClass()
.equals(GlobalKTableBoundElementFactory.GlobalKTableWrapper.class)) {
return methodInvocation.getMethod().invoke(this,
methodInvocation.getArguments());
}
else {
throw new IllegalStateException(
"Only GlobalKTable method invocations are permitted");
}
}
}
}
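The lazy-delegate proxy pattern used by `GlobalKTableWrapperHandler` above can be sketched in plain Java with a JDK dynamic proxy. `Greeter` and `Wrapper` below are hypothetical stand-ins for `GlobalKTable` and `GlobalKTableWrapper`, and the real factory uses Spring's `ProxyFactory` rather than `java.lang.reflect.Proxy`; the flow is the same, though: the binding target is created eagerly as a proxy, the real delegate is injected later via `wrap(...)`, and method calls are forwarded once it is set.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class LazyProxySketch {

	// Hypothetical stand-in for the real binding target type (GlobalKTable).
	public interface Greeter {
		String greet(String name);
	}

	// Mirrors GlobalKTableWrapper: lets the binder inject the delegate later.
	public interface Wrapper {
		void wrap(Greeter delegate);
	}

	static class Handler implements InvocationHandler {

		private Greeter delegate;

		@Override
		public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
			if (method.getDeclaringClass().equals(Wrapper.class)) {
				this.delegate = (Greeter) args[0]; // wrap(...) call: capture the delegate
				return null;
			}
			if (this.delegate == null) {
				throw new IllegalStateException("Trying to invoke " + method
						+ " but no delegate has been set.");
			}
			return method.invoke(this.delegate, args); // forward to the real target
		}
	}

	public static Greeter newProxy() {
		return (Greeter) Proxy.newProxyInstance(
				LazyProxySketch.class.getClassLoader(),
				new Class<?>[] { Greeter.class, Wrapper.class },
				new Handler());
	}

	public static void main(String[] args) {
		Greeter proxy = newProxy();
		((Wrapper) proxy).wrap(name -> "hello " + name); // delegate injected late
		System.out.println(proxy.greet("kafka")); // prints "hello kafka"
	}
}
```

Because the proxy implements both interfaces, the binder-facing code can hold it as the target type while the infrastructure casts it to the wrapper interface to set the delegate, which is exactly how `createInput` above defers binding until the topology is built.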

@@ -0,0 +1,181 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.StreamsMetadata;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;
/**
* Services pertinent to the interactive query capabilities of Kafka Streams. This class
* provides services such as querying for a particular store and discovering which
* instance is hosting a particular store. This is part of the public API of the Kafka
* Streams binder, and users can inject this service into their applications to make use of it.
*
* @author Soby Chacko
* @author Renwei Han
* @author Serhii Siryi
* @since 2.1.0
*/
public class InteractiveQueryService {
private static final Log LOG = LogFactory.getLog(InteractiveQueryService.class);
private final KafkaStreamsRegistry kafkaStreamsRegistry;
private final KafkaStreamsBinderConfigurationProperties binderConfigurationProperties;
/**
* Constructor for InteractiveQueryService.
* @param kafkaStreamsRegistry the {@link KafkaStreamsRegistry} tracking the KafkaStreams objects
* @param binderConfigurationProperties Kafka Streams binder configuration properties
*/
public InteractiveQueryService(KafkaStreamsRegistry kafkaStreamsRegistry,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
this.binderConfigurationProperties = binderConfigurationProperties;
}
/**
* Retrieve and return a queryable store by name created in the application.
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
*/
public <T> T getQueryableStore(String storeName, QueryableStoreType<T> storeType) {
RetryTemplate retryTemplate = new RetryTemplate();
KafkaStreamsBinderConfigurationProperties.StateStoreRetry stateStoreRetry = this.binderConfigurationProperties.getStateStoreRetry();
RetryPolicy retryPolicy = new SimpleRetryPolicy(stateStoreRetry.getMaxAttempts());
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(stateStoreRetry.getBackoffPeriod());
retryTemplate.setBackOffPolicy(backOffPolicy);
retryTemplate.setRetryPolicy(retryPolicy);
return retryTemplate.execute(context -> {
T store = null;
final Set<KafkaStreams> kafkaStreams = InteractiveQueryService.this.kafkaStreamsRegistry.getKafkaStreams();
final Iterator<KafkaStreams> iterator = kafkaStreams.iterator();
Throwable throwable = null;
while (iterator.hasNext()) {
try {
store = iterator.next().store(storeName, storeType);
}
catch (InvalidStateStoreException e) {
// pass through..
throwable = e;
}
}
if (store != null) {
return store;
}
throw new IllegalStateException("Error when retrieving state store: " + storeName, throwable);
});
}
/**
* Gets the current {@link HostInfo} that the calling kafka streams application is
* running on.
*
* Note that end-user applications must provide `application.server` as a
* configuration property when calling this method. If this is not available, then
* null is returned.
* @return the current {@link HostInfo}
*/
public HostInfo getCurrentHostInfo() {
Map<String, String> configuration = this.binderConfigurationProperties
.getConfiguration();
if (configuration.containsKey("application.server")) {
String applicationServer = configuration.get("application.server");
String[] splits = StringUtils.split(applicationServer, ":");
return new HostInfo(splits[0], Integer.valueOf(splits[1]));
}
return null;
}
/**
* Gets the {@link HostInfo} on which the provided store and key are hosted. This may
* not be the current host that is running the application. Kafka Streams will look
* through all the consumer instances under the same application id and retrieve the
* proper host.
*
* Note that end-user applications must provide `application.server` as a
* configuration property for all the application instances when calling this method.
* If this is not available, then null may be returned.
* @param <K> generic type for key
* @param store store name
* @param key key to look for
* @param serializer {@link Serializer} for the key
* @return the {@link HostInfo} where the key for the provided store is hosted
* currently
*/
public <K> HostInfo getHostInfo(String store, K key, Serializer<K> serializer) {
StreamsMetadata streamsMetadata = this.kafkaStreamsRegistry.getKafkaStreams()
.stream()
.map((k) -> Optional.ofNullable(k.metadataForKey(store, key, serializer)))
.filter(Optional::isPresent).map(Optional::get).findFirst().orElse(null);
return streamsMetadata != null ? streamsMetadata.hostInfo() : null;
}
/**
* Gets the list of {@link HostInfo} instances on which the provided store is hosted.
* The list may also include the current host.
* Kafka Streams will look through all the consumer instances under the same application id
* and retrieve the host info for each of them.
*
* Note that the end-user applications must provide `application.server` as a configuration property
* for all the application instances when calling this method. If this is not available, then an empty list will be returned.
*
* @param store store name
* @return the list of {@link HostInfo} instances on which the provided store is hosted
*/
public List<HostInfo> getAllHostsInfo(String store) {
return kafkaStreamsRegistry.getKafkaStreams()
.stream()
.flatMap(k -> k.allMetadataForStore(store).stream())
.filter(Objects::nonNull)
.map(StreamsMetadata::hostInfo)
.collect(Collectors.toList());
}
}
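The retry behaviour that `getQueryableStore` above delegates to Spring Retry (a `SimpleRetryPolicy` capped at `maxAttempts` combined with a `FixedBackOffPolicy`) can be sketched in dependency-free Java. The `retry` method and its parameters here are hypothetical stand-ins for illustration, not part of the binder API:

```java
import java.util.function.Supplier;

public class RetrySketch {

	// Hypothetical stand-in for RetryTemplate.execute(...) configured with
	// SimpleRetryPolicy(maxAttempts) and FixedBackOffPolicy(backoffMillis).
	static <T> T retry(Supplier<T> lookup, int maxAttempts, long backoffMillis) {
		RuntimeException last = null;
		for (int attempt = 1; attempt <= maxAttempts; attempt++) {
			try {
				// e.g. kafkaStreams.store(storeName, storeType)
				return lookup.get();
			}
			catch (RuntimeException e) {
				// e.g. InvalidStateStoreException while a rebalance is in progress
				last = e;
				if (attempt < maxAttempts) {
					try {
						Thread.sleep(backoffMillis); // fixed back-off between attempts
					}
					catch (InterruptedException ie) {
						Thread.currentThread().interrupt();
					}
				}
			}
		}
		throw new IllegalStateException("Error when retrieving state store", last);
	}

	public static void main(String[] args) {
		int[] calls = { 0 };
		// Fails twice (store not ready yet), succeeds on the third attempt.
		String store = retry(() -> {
			if (++calls[0] < 3) {
				throw new RuntimeException("store not ready");
			}
			return "my-store";
		}, 5, 10L);
		System.out.println(store + " after " + calls[0] + " attempts");
	}
}
```

This mirrors why the binder retries at all: state stores briefly throw `InvalidStateStoreException` during rebalances, so a bounded retry with fixed back-off lets interactive queries succeed once the store is re-opened.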
